diff --git "a/test.csv" "b/test.csv" --- "a/test.csv" +++ "b/test.csv" @@ -1,7339 +1,4 @@ Q_CreationDate,Title,Question,Answer,Score,Is_accepted,N_answers,Q_Id -2019-09-20 13:43:24.297,Nvenc session limit per GPU,"I'm using Imageio, the Python library that wraps around ffmpeg to do hardware encoding via nvenc. My issue is that I can't get more than 2 sessions to launch (I am using non-Quadro GPUs), even using multiple GPUs. I looked over NVIDIA's support matrix and they state only 2 sessions per GPU, but it seems to be per system. -For example, I have 2 GPUs in a system. I can either use the env variable CUDA_VISIBLE_DEVICES or set the ffmpeg flag -gpu to select the GPU. I've verified GPU usage using the nvidia-smi CLI. I can get 2 encoding sessions working on a single GPU, or 1 session working on each of 2 separate GPUs, but I can't get 2 encoding sessions working on each of 2 GPUs. -Even more strangely, if I add more GPUs I am still stuck at 2 sessions. I can't launch a third encoding session on a 3rd GPU. I am always stuck at 2 regardless of the # of GPUs. Any ideas on how to fix this?","Nvidia limits it to 2 per system, not 2 per GPU. The limitation is in the driver, not the hardware. There have been unofficial drivers posted to GitHub which remove the limitation.",1.2,True,1,6310 -2019-09-21 07:16:21.710,Setup of the Divio CMS Repositories,"The Divio Django CMS offers two servers: TEST and LIVE. Are these also two separate repositories? Or how is this done in the background? -I'm wondering because I have the feeling the LIVE server is its own repository that just pulls from TEST whenever I press deploy. Is that correct?","All Divio projects (django CMS, Python, PHP, whatever) have a Live and Test environment. -By default, both build the project from its repository's master branch (in older projects, develop). -On request, custom tracking branches can be enabled, so that the Live and Test environments will build from separate branches. 
-When a build successfully completes, the Docker image can be reused until changes are made to the project's repository. This means that after a successful deployment on Test, the Docker image doesn't need to be rebuilt, and the Live environment can be deployed much faster from the pre-built image. (Obviously this is only possible when they are on the same branch.)",0.3869120172231254,False,1,6311 -2019-09-22 12:12:44.420,How do i retrain the model without losing the earlier model data with new set of data,"for my current requirement, I'm having a dataset of 10k+ faces from 100 different people from which I have trained a model for recognizing the face(s). The model was trained by getting the 128 vectors from the facenet_keras.h5 model and feeding those vector value to the Dense layer for classifying the faces. -But the issue I'm facing currently is - -if want to train one person face, I have to retrain the whole model once again. - -How should I get on with this challenge? I have read about a concept called transfer learning but I have no clues about how to implement it. Please give your suggestion on this issue. What can be the possible solutions to it?","With transfer learning you would copy an existing pre-trained model and use it for a different, but similar, dataset from the original one. In your case this would be what you need to do if you want to train the model to recognize your specific 100 people. -If you already did this and you want to add another person to the database without having to retrain the complete model, then I would freeze all layers (set layer.trainable = False for all layers) except for the final fully-connected layer (or the final few layers). Then I would replace the last layer (which had 100 nodes) to a layer with 101 nodes. You could even copy the weights to the first 100 nodes and maybe freeze those too (I'm not sure if this is possible in Keras). In this case you would re-use all the trained convolutional layers etc. 
and teach the model to recognise this new face.",0.2012947653214861,False,1,6312 -2019-09-22 13:48:06.487,How to debug (500) Internal Server Error on Python Waitress server?,"I'm using Python and Flask, served by Waitress, to host a POST API. I'm calling the API from a C# program that posts data and gets a string response. At least 95% of the time it works fine, but sometimes the C# program reports an error: -(500) Internal Server Error. -There is no further description of the error or why it occurs. The only clue is that it usually happens in clusters -- when the error occurs once, it likely occurs several times in a row. Without any intervention, it then goes back to running normally. -Since the error is so rare, it is hard to troubleshoot. Any ideas as to how to debug or get more information? Is there error handling I can do from either the C# side or the Flask/Waitress side?","Your Flask application should be logging the exception when it occurs. Aside from combing through your logs (which should be stored somewhere centrally), you could consider something like Sentry.io, which is pretty easy to set up with Flask apps.",0.0,False,1,6313 -2019-09-23 05:52:42.417,Check inputs in csv file,"I'm new to Python. I have a csv file and I need to check whether the inputs are correct or not. The code should scan through each row. -All columns for a particular row should contain values of the same type. E.g.: -All columns of the second row should contain only strings, -All columns of the third row should contain only numbers... etc. -I tried the following approach (it may seem a blunder): -I have only 15 rows, but no idea of the number of columns (it's the user's choice). -df.iloc[1].str.isalpha() -This checks for strings. I don't know how to check for numbers.","Simple approach that can be modified: - -Open df using df = pandas.read_csv() -For each column, use df[''] = df[''].astype(str) (str = string, int = integer, float = float64, ...etc). 
- You can check column types using df.dtypes",0.3869120172231254,False,1,6314 -2019-09-23 11:00:06.643,how do I upgrade pip on Mac?,"I cannot upgrade pip on my Mac from the Terminal. -According to the documentation I have to type the command: -pip install -U pip -I get the error message in the Terminal: -pip: command not found -I have Mac OS 10.14.2, python 3.7.2 and pip 18.1. -I want to upgrade to pip 19.2.3",I came on here to figure out the same thing but none of these things seemed to work. So I went back and looked at how they were telling me to upgrade it but I still did not get it. So I just started trying things and next thing you know I saw the downloading lines and it told me that my pip was upgraded. What I used was (pip3 install --upgrade pip). I hope this can help anyone else in need.,0.0,False,3,6315 -2019-09-23 11:00:06.643,how do I upgrade pip on Mac?,"I cannot upgrade pip on my Mac from the Terminal. -According to the documentation I have to type the command: -pip install -U pip -I get the error message in the Terminal: -pip: command not found -I have Mac OS 10.14.2, python 3.7.2 and pip 18.1. -I want to upgrade to pip 19.2.3","pip3 install --upgrade pip - -this works for me!",0.4247838355242418,False,3,6315 -2019-09-23 11:00:06.643,how do I upgrade pip on Mac?,"I cannot upgrade pip on my Mac from the Terminal. -According to the documentation I have to type the command: -pip install -U pip -I get the error message in the Terminal: -pip: command not found -I have Mac OS 10.14.2, python 3.7.2 and pip 18.1. -I want to upgrade to pip 19.2.3","I have found an answer that worked for me: -sudo pip3 install -U pip --ignore-installed pip -This installed pip version 19.2.3 correctly. -It was very hard to find the correct command on the internet...glad I can share it now. -Thanks.",0.1352210990936997,False,3,6315 -2019-09-23 22:18:51.993,how to remove duplicates when using pandas concat to combine two dataframes,"I have two dataframes. 
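The per-row type check asked about in the CSV-validation question above can be sketched with pandas; the helper names `row_is_alpha` and `row_is_numeric` are invented for illustration:

```python
import pandas as pd

def row_is_alpha(row):
    # True if every cell in the row is purely alphabetic text
    return row.astype(str).str.isalpha().all()

def row_is_numeric(row):
    # True if every cell can be parsed as a number
    return pd.to_numeric(row, errors="coerce").notna().all()

df = pd.DataFrame([["abc", "def", "ghi"],
                   ["1", "2.5", "30"]])

print(row_is_alpha(df.iloc[0]))   # row of strings -> True
print(row_is_numeric(df.iloc[1]))  # row of numbers -> True
```

Since the number of columns is unknown, working row by row with `.all()` avoids naming columns at all.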
-df1 with columns: id,x1,x2,x3,x4,....xn -df2 with columns: id,y. -df3 =pd.concat([df1,df2],axis=1) -when I use pandas concat to combine them, it became -id,y,id,x1,x2,x3...xn. -there are two id columns here. How can I get rid of one? -I have tried: -df3=pd.concat([df1,df2],axis=1).drop_duplicates().reset_index(drop=True) -but it does not work.","drop_duplicates() only removes rows that are completely identical. -What you're looking for is pd.merge(): -pd.merge(df1, df2, on='id')",0.0,False,1,6316 -2019-09-25 00:25:17.317,Supremum Metric in Python for Knn with Uncertain Data,"I'm trying to make a classifier for uncertain data (e.g. ranged data) using Python. In a certain dataset, the list is a 2D array or array of records (containing float numbers for data and a string for labels), whereas in an uncertain dataset the list is a 3D array (containing ranges of float numbers for data and a string for labels). I managed to manipulate a certain dataset to be uncertain using a uniform probability distribution. A research paper says that I have to use the supremum distance metric. How do I implement this metric in Python? Note that in the uncertain dataset, both the test set and the training set are uncertain.",I found out that using scipy spatial distance and tweaking the for-loops in standard knn helps a lot,1.2,True,1,6317 -2019-09-25 13:06:45.637,Dataflow Sideinputs - Worker Cache Size in SDK 2.x,"I am experiencing performance issues in my pipeline in a DoFn that uses a large side input of ~ 1GB. The side input is passed using pvalue.AsList(), which forces materialization of the side input. -The execution graph of the pipeline shows that the particular step spends most of the time reading the side input. The total amount of data read exceeds the size of the side input by far. Consequently, I conclude that the side input does not fit into the memory / cache of the workers even though their RAM is sufficient (using n1-highmem4 workers with 26 GB RAM). -How do I know how big this cache actually is? 
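A minimal sketch of the merge suggested in the concat answer above; the column names are taken from that question:

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2, 3], "x1": [10, 20, 30]})
df2 = pd.DataFrame({"id": [1, 2, 3], "y": [0.1, 0.2, 0.3]})

# concat(axis=1) keeps both id columns; merge joins on id and keeps one
df3 = pd.merge(df1, df2, on="id")
print(df3.columns.tolist())  # ['id', 'x1', 'y']
```

Unlike `concat`, `merge` also aligns rows by the key values instead of by position.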
Is there a way to control its size using Beam Python SDK 2.15.0 (like there was the pipeline option --workerCacheMb=200 for the Java 1.x SDK)? -There is no easy way of shrinking my side input by more than 10%.","If you are using AsList, you are correct that the whole side input should be loaded into memory. It may be that your worker has enough memory available, but it just takes very long to read 1GB of data into the list. Also, the size of the data that is read depends on the encoding of it. If you can share more details about your algorithm, we can try to figure out how to write a pipeline that may run more efficiently. - -Another option may be to have an external service to keep your side input - for instance, a Redis instance that you write to on one side, and read from on the other side.",0.0,False,1,6318 -2019-09-26 08:40:43.480,Install packages with Conda for a second Python installation,"I recently installed Anaconda on my Windows machine. I did that to use some packages from some specific channels required by an application that is using Python 3.5 as its scripting language. -I adjusted my PATH variable to use Conda, pointing to the Python environment of the particular program, but now I would like to use Conda as well for a different Python installation that I have on my Windows machine. -When installing Anaconda, it doesn't ask for a Python version to be related to. So, how can I use Conda to install into the other Python installation? Both Python installations are 'physical' installations - not virtual in any way.","Uninstall the other python installation and create different conda environments; that is what conda is great at. -Using conda from your anaconda installation to manage packages from another, independent python installation is not possible and not very feasible. -Something like this could serve your needs: - -Create one env for python 3.5: conda create -n py35 python=3.5 -Create one env for some other python version you would like to use, e.g. 
3.6: conda create -n py36 python=3.6 -Use conda activate py35, conda deactivate, conda activate py36 to switch between your virtual environments.",1.2,True,1,6319 -2019-09-26 14:54:39.137,S3 file to Mysql AWS via Airflow,"I been learning how to use Apache-Airflow the last couple of months and wanted to see if anybody has any experience with transferring CSV files from S3 to a Mysql database in AWS(RDS). Or from my Local drive to MySQL. -I managed to send everything to an S3 bucket to store them in the cloud using airflow.hooks.S3_hook and it works great. I used boto3 to do this. -Now I want to push this file to a MySQL database I created in RDS, but I have no idea how to do it. Do I need to use the MySQL hook and add my credentials there and then write a python function? -Also, It doesn't have to be S3 to Mysql, I can also try from my local drive to Mysql if it's easier. -Any help would be amazing!","were you able to resolve the 'MySQLdb._exceptions.OperationalError: (2068, 'LOAD DATA LOCAL INFILE file request rejected due to restrictions on access' issue",0.0,False,1,6320 -2019-09-27 16:26:03.963,Change column from Pandas date object to python datetime,"I have a dataset with the first column as date in the format: 2011-01-01 and type(data_raw['pandas_date']) gives me pandas.core.series.Series -I want to convert the whole column into date time object so I can extract and process year/month/day from each row as required. -I used pd.to_datetime(data_raw['pandas_date']) and it printed output with dtype: datetime64[ns] in the last line of the output. I assume that values were converted to datetime. -but when I run type(data_raw['pandas_date']) again, it still says pandas.core.series.Series and anytime I try to run .dt function on it, it gives me an error saying this is not a datetime object. -So, my question is - it looks like to_datetime function changed my data into datetime object, but how to I apply/save it to the pandas_date column? 
I tried -data_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date']) -but this doesn't work either; I get the same result when I check the type. Sorry if this is too basic.","type(data_raw['pandas_date']) will always return pandas.core.series.Series, because the object data_raw['pandas_date'] is of type pandas.core.series.Series. What you want is to get the dtype, so you could just do data_raw['pandas_date'].dtype. - -data_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date']) - -This is correct, and if you do data_raw['pandas_date'].dtype again afterwards, you will see that it is datetime64[ns].",1.2,True,1,6321 -2019-09-28 00:05:03.313,Using BFS/DFS To Find Path With Maximum Weight in Directed Acyclic Graph,"You have a 2005 Honda Accord with 50 miles (weight max) left in the tank. Which McDonalds locations (graph nodes) can you visit within a 50 mile radius? This is my question. -If you have a weighted directed acyclic graph, how can you find all the nodes that can be visited within a given weight restriction? -I am aware of Dijkstra's algorithm, but I can't seem to find any documentation of its uses outside of min-path problems. In my example, there's no node in particular that we want to end at; we just want to go as far as we can without going over the maximum weight. It seems like you should be able to use BFS/DFS in order to solve this, but I can't find documentation for implementing those in graphs with edge weights (again, outside of min-path problems).","Finding the longest path to a vertex V (a McDonald's in this case) can be accomplished using topological sort. We can start by sorting our nodes topologically, since sorting topologically will always return the source node U before the endpoint V of a weighted path. 
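The assignment from the to_datetime answer above, shown end to end with a made-up two-row column:

```python
import pandas as pd

data_raw = pd.DataFrame({"pandas_date": ["2011-01-01", "2011-02-15"]})
print(data_raw["pandas_date"].dtype)  # object (plain strings)

# assign the converted column back so the change is actually saved
data_raw["pandas_date"] = pd.to_datetime(data_raw["pandas_date"])
print(data_raw["pandas_date"].dtype)  # datetime64[ns]

# the .dt accessor now works for extracting parts of the date
print(data_raw["pandas_date"].dt.year.tolist())  # [2011, 2011]
```

Without the assignment back to `data_raw['pandas_date']`, the converted Series is computed and then discarded, which is why the column still looked unchanged.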
Then, since we would now have access to an ordering in which each source vertex precedes all of its adjacent vertices, we can search through every path beginning with vertex U and ending with vertex V, and record, at the index corresponding to U, the maximum edge weight we find connecting U to V. If the sum of the maximal distances exceeds 50 without reaching a McDonalds, we can backtrack and explore the second-highest-weight path going from U to V, and continue backtracking should we exhaust every path exiting from vertex U. Eventually we will arrive at a McDonalds, which will be the McDonalds with the maximal distance from our original source node while maintaining a total path distance under 50.",0.0,False,2,6322 -2019-09-28 00:05:03.313,Using BFS/DFS To Find Path With Maximum Weight in Directed Acyclic Graph,"You have a 2005 Honda Accord with 50 miles (weight max) left in the tank. Which McDonalds locations (graph nodes) can you visit within a 50 mile radius? This is my question. -If you have a weighted directed acyclic graph, how can you find all the nodes that can be visited within a given weight restriction? -I am aware of Dijkstra's algorithm, but I can't seem to find any documentation of its uses outside of min-path problems. In my example, there's no node in particular that we want to end at; we just want to go as far as we can without going over the maximum weight. It seems like you should be able to use BFS/DFS in order to solve this, but I can't find documentation for implementing those in graphs with edge weights (again, outside of min-path problems).","For this problem, you will want to run a DFS from the starting node. Recurse down the graph from each child of the starting node until a total weight of over 50 is reached. If a McDonalds is encountered along the traversal, record the node reached in a list or set. 
By doing so, you will achieve a more efficient algorithm, as you will not have to create a complete topological sort as the other answer to this question proposes. Even though this algorithm still runs in O(V + E) time in the worst case, by recursing back on the DFS when a path distance of over 50 is reached you avoid traversing through the entire graph when not necessary.",0.0,False,2,6322 -2019-09-29 23:15:06.167,How does Qt Designer work in terms of creating more than 1 dialog per file?,"I'm starting to use Qt Designer. -I am trying to create a game, and the first task that I want to do is to create a window where you have to input the name of the map that you want to load. If the map exists, I then switch to the main game window, and if the name of the map doesn't exist, I want to display a popup window that tells the user that the name of the map they wrote is not valid. -I'm a bit confused with the part of showing the ""not valid"" pop-up window. -I realized that I have two options: - -Creating 2 separate .ui files, and with the help of the .show() and .hide() commands showing the corresponding window if the user input is invalid. -The other option I'm thinking of is creating both windows in the same .ui file, which seems to be a better option, but I don't really know how to work with windows that come from the same file. Should I create a separate class for each of the windows that come from the Qt Designer file? If not, how can I access both windows from the same class?","Your second option seems impossible; it would be great to share the .ui, since in the years that I have worked with Qt Designer I have not been able to implement what you point out. -A .ui file is an XML file that describes the elements and their properties that will be used to create a class that is used to fill a particular widget. So, considering the above, your second option is impossible. 
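The budget-limited DFS described in the answer above can be sketched as follows. The graph contents and node names are invented; note that the sketch re-expands a node whenever a route with more remaining budget to it is found, since a plain visited set could otherwise miss nodes that are only reachable via the cheaper route:

```python
def reachable_within(graph, start, budget):
    """Return every node reachable from start without the summed edge
    weights exceeding budget (DFS that backtracks when fuel runs out)."""
    best = {start: budget}  # most budget left on arrival at each node

    def dfs(node, remaining):
        for neighbor, weight in graph.get(node, []):
            left = remaining - weight
            # only re-expand a node if we got here with more budget to spare
            if left >= 0 and left > best.get(neighbor, -1):
                best[neighbor] = left
                dfs(neighbor, left)

    dfs(start, budget)
    return set(best) - {start}

# adjacency list: node -> [(neighbor, miles), ...]; names are made up
roads = {
    "home": [("mcd_a", 20), ("mcd_b", 45)],
    "mcd_a": [("mcd_c", 25), ("mcd_d", 40)],
}
print(sorted(reachable_within(roads, "home", 50)))  # ['mcd_a', 'mcd_b', 'mcd_c']
```

`mcd_d` is excluded because 20 + 40 miles exceeds the 50-mile budget.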
-This concludes that the only viable option is the first method.",1.2,True,1,6323 -2019-10-01 02:27:00.000,Start at 100 and count up till 999,"So, this is for my assignment and I have to create a flight booking system. One of the requirements is that it should create a 3 digit passenger code that does not start with zeros (e.g. 100 is the smallest acceptable value), and I have no idea how I can do it since I am a beginner and I just started to learn Python. I have made classes for Passenger, Flight, and Seating Area so far because I just started on it today. Please help. Thank you.","I like a list comprehension for making a list of 100 to 999: -flights = [i for i in range(100, 1000)] -For the random version, there is probably a better way, but Random.randint(x, y) creates a random int, inclusive of the endpoints: -from random import Random -rand = Random() -flight = rand.randint(100,999) -Hope this helps with your homework, but do try to understand the assignment and how the code works...lest you get wrecked on the final!",0.0,False,1,6324 -2019-10-01 07:26:35.203,String problem / Select all values > 8000 in pandas dataframe,"I want to select all values bigger than 8000 within a pandas dataframe. -new_df = df.loc[df['GM'] > 8000] -However, it is not working. I think the problem is that the value comes from an Excel file and the number is interpreted as a string, e.g. ""1.111,52"". Do you know how I can convert such a string to float / int in order to compare it properly?","You can check df.dtypes to see the type of each column. Then, if the column type is not what you want, you can change it with df['GM'].astype(float), and then new_df = df.loc[df['GM'].astype(float) > 8000] should work as you want.",0.2012947653214861,False,1,6325 -2019-10-03 19:17:11.890,Can we detect multiple objects in image using caltech101 dataset containing label wise images?,"I have a caltech101 dataset for object detection. 
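Note that the sample value "1.111,52" in the string-to-float question above uses European formatting (dot as thousands separator, comma as decimal point), which astype(float) cannot parse directly. One hedged sketch, assuming that format and using the 'GM' column name from the question:

```python
import pandas as pd

df = pd.DataFrame({"GM": ["1.111,52", "9.500,00", "750,25"]})

# strip the thousands dots, then turn the decimal comma into a dot
df["GM"] = (df["GM"].str.replace(".", "", regex=False)
                    .str.replace(",", ".", regex=False)
                    .astype(float))

new_df = df.loc[df["GM"] > 8000]
print(new_df["GM"].tolist())  # [9500.0]
```

If the strings were plain "1111.52"-style numbers, the astype(float) from the answer alone would suffice.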
Can we detect multiple objects in single image using model trained on caltech101 dataset? -This dataset contains only folders (label-wise) and in each folder, some images label wise. -I have trained model on caltech101 dataset using keras and it predicts single object in image. Results are satisfactory but is it possible to detect multiple objects in single image? -As I know some how regarding this. for detecting multiple objects in single image, we should have dataset containing images and bounding boxes with name of objects in images. -Thanks in advance","The dataset can be used for detecting multiple objects but with below steps to be followed: - -The dataset has to be annotated with bounding boxes on the object present in the image -After the annotations are done, you can use any of the Object detectors to do transfer learning and train on the annotated caltech 101 dataset - -Note: - Without annotations, with just the caltech 101 dataset, detecting multiple objects in a single image is not possible",1.2,True,1,6326 -2019-10-04 13:40:16.797,Data type to save expanding data for data logging in Python,"I am writing a serial data logger in Python and am wondering which data type would be best suited for this. Every few milliseconds a new value is read from the serial interface and is saved into my variable along with the current time. I don't know how long the logger is going to run, so I can't preallocate for a known size. -Intuitively I would use an numpy array for this, but appending / concatenating elements creates a new array each time from what I've read. -So what would be the appropriate data type to use for this? -Also, what would be the proper vocabulary to describe this problem?","Python doesn't have arrays as you think of them in most languages. It has ""lists"", which use the standard array syntax myList[0] but unlike arrays, lists can change size as needed. using myList.append(newItem) you can add more data to the list without any trouble on your part. 
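The append-based logging approach described in the data-logger answer above can be sketched as follows; the sensor readings are simulated:

```python
import time
import numpy as np

log = []  # a Python list grows as needed; no preallocation required

def record(value):
    # store (timestamp, value) pairs; list.append is amortized O(1)
    log.append((time.time(), value))

for reading in [3.3, 3.4, 3.2]:
    record(reading)

# convert once at the end if array operations are needed,
# instead of concatenating a numpy array on every sample
arr = np.array(log)
print(len(log), arr.shape)  # 3 (3, 2)
```

This avoids the repeated reallocation that per-sample `np.append`/`np.concatenate` would cause.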
-Since you asked for proper vocabulary, a useful concept for you would be ""linked lists"", which are a way of implementing array-like structures with varying lengths in other languages.",0.0,False,1,6327 -2019-10-04 20:01:45.247,How do you push in pycharm if the commit was already done?,"Once you commit in PyCharm it takes you to a second window to go through with the push. But if you only hit commit and not commit/push, then how do you bring up the push option? You can't do another commit unless changes are made.",In the upper menu [VCS] -> [Git...] -> [Push],0.6730655149877884,False,1,6328 -2019-10-06 17:33:10.463,ModuleNotFoundError: No module named 'telegram',"Trying to run the python-telegram-bot library through Jupyter Notebook I get this error. I tried many ways to reinstall it, but nothing from the answers on any forums helped me. What could be the mistake, and how do I avoid it while installing?","Do you have a directory with the ""telegram"" name? If you do, rename your directory and try it again to prevent an import conflict. -good luck :)",0.3869120172231254,False,1,6329 -2019-10-07 20:48:55.507,argparse.print_help() ArgumentParser message string,"I am writing a Slack bot, and I am using argparse to parse the arguments sent into the slackbot, but I am trying to figure out how to get the help message string so I can send it back to the user via the Slack bot. -I know that ArgumentParser has a print_help() method, but that prints to the console and I need a way to get that string.",I just found out that there's a method called format_help() that generates that help string,0.3869120172231254,False,1,6330 -2019-10-07 22:25:21.107,"Is it possible to have a c++ dll run a python program in background and have it populate a map of vectors? If so, how?","There will be an unordered_map in a c++ dll containing some 'vectors' mapped to their 'names'. 
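The format_help() method mentioned in the argparse answer above returns the help text as a string instead of printing it; the parser below is a made-up stand-in for the bot's parser:

```python
import argparse

parser = argparse.ArgumentParser(prog="slackbot", description="Demo parser")
parser.add_argument("--channel", help="channel to post to")

help_text = parser.format_help()   # same text print_help() would print
print(type(help_text).__name__)    # str
print("--channel" in help_text)    # True
```

The string can then be sent back through the bot instead of going to stdout; `format_usage()` similarly returns just the usage line.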
For each of these 'names', the python code will keep on collecting data from a web server every 5 seconds and fill the vectors with it. -Is such a dll possible? If so, how to do it?","You can make the Python code into an executable. Run the executable file from the DLL as a separate process and communicate with it via TCP localhost socket - or some other Windows utility that allows to share data between different processes. -That's a slow mess. I agree, but it works. -You can also embed Python interpreter and run the script it on the dll... I suppose.",0.0,False,1,6331 -2019-10-08 00:10:57.677,What is the difference between spline filtering and spline interpolation?,"I'm having trouble connecting the mathematical concept of spline interpolation with the application of a spline filter in python. My very basic understanding of spline interpolation is that it's fitting the data in a piece-wise fashion, and the piece-wise polynomials fitted are called splines. But its applications in image processing involve pre-filtering the image and then performing interpolation, which I'm having trouble understanding. -To give an example, I want to interpolate an image using scipy.ndimage.map_coordinates(input, coordinates, prefilter=True), and the keyword prefilter according to the documentation: - -Determines if the input array is prefiltered with spline_filter before interpolation - -And the documentation for scipy.ndimage.interpolation.spline_filter simply says the input is filtered by a spline filter. So what exactly is a spline filter and how does it alter the input data to allow spline interpolation?","I'm guessing a bit here. In order to calculate a 2nd order spline, you need the 1st derivative of the data. To calculate a 3rd order spline, you need the second derivative. I've not implemented an interpolation motor beyond 3rd order, but I suppose the 4th and 5th order splines will require at least the 3rd and 4th derivatives. 
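The separate-process-plus-TCP idea from the answer above can be sketched on the Python side. The map contents, the port choice, and the JSON framing are invented for illustration; the client at the bottom stands in for the C++ DLL:

```python
import json
import socket
import threading

def serve_vectors(srv, vectors):
    # reply to each connection with the current name -> vector map as JSON;
    # a C++ DLL could connect to 127.0.0.1 and parse the reply
    while True:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(json.dumps(vectors).encode())

vectors = {"prices": [1.0, 2.0], "volumes": [10.0]}

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_vectors, args=(srv, vectors), daemon=True).start()

# client side (stand-in for the C++ DLL): read until the server closes
with socket.create_connection(("127.0.0.1", port)) as c:
    data = b"".join(iter(lambda: c.recv(4096), b""))
received = json.loads(data)
print(received)
```

In the real setup the Python process would refresh `vectors` from the web server every 5 seconds while the DLL polls this socket.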
-Rather than recalculating these derivatives every time you want to perform an interpolation, it is best to calculate them just once. My guess is that spline_filter is doing this pre-calculation of the derivatives which then get used later for the interpolation calculations.",0.3869120172231254,False,1,6332 -2019-10-08 08:59:39.373,How to show a highlighted label when The mouse is on widget,"I need to know how to make a highlighted label(or small box )appears when the mouse is on widget like when you are using browser and put the mouse on (reload/back/etc...) button a small box will appear and tell you what this button do -and i want that for any widget not only widgets on toolbar","As the comment of @ekhumoro says -setToolTip is the solution",1.2,True,1,6333 -2019-10-08 14:18:17.240,xmlsec1 not found on ibm-cloud deployment,"I am having hard time to install a python lib called python3-saml -To narrow down the problem I created a very simple application on ibm-cloud and I can deploy it without any problem, but when I add as a requirement the lib python3-saml -I got an exception saying: -pkgconfig.pkgconfig.PackageNotFoundError: xmlsec1 not found -The above was a deployment on ibm-cloud, but I did try to install the same python lib locally and I got the same error message, locally I can see that I have the xmlsec1 installed. -Any help on how to successfully deploy it on ibm-cloud using python3-saml? -Thanks in advance","I had a similar issue and I had to install the ""xmlsec1-devel"" on my CentOS system before installing the python package.",0.3869120172231254,False,1,6334 -2019-10-10 09:57:25.667,Using a function from a built-in module in your own module - Python,"I'm new with Python and new on Stackoverflow, so please let me know if this question should be posted somewhere else or you need any other info :). But I hope someone can help me out with what seems to be a rather simple mistake... 
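The effect of the prefilter discussed above can be observed directly: with prefiltering, a 3rd-order spline reproduces the original samples exactly at the grid points, while skipping it treats the raw pixels as spline coefficients and smooths the data. A small sketch with random data:

```python
import numpy as np
from scipy import ndimage

img = np.random.default_rng(0).random((5, 5))

# sample the image at its own grid points with a cubic spline
rows, cols = np.mgrid[0:5, 0:5]
coords = np.vstack([rows.ravel(), cols.ravel()])

with_prefilter = ndimage.map_coordinates(img, coords, order=3, prefilter=True)
print(np.allclose(with_prefilter.reshape(5, 5), img))  # True: interpolating

without = ndimage.map_coordinates(img, coords, order=3, prefilter=False)
print(np.allclose(without.reshape(5, 5), img))  # False: smoothing
```

So spline_filter computes the coefficients that make the spline pass through the data, which is exactly the precomputation the answer describes.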
-I'm working with Python in Jupyter Notebook and am trying to create my own module with some self-made functions/loops that I often use. However, when I try to call some of the functions from my module, I get an error related to the import of the built-in module that is used in my own module. -The way I created my own module was by: - -creating different blocks of code in a notebook and downloading it -as a 'Functions.py' file. -saving this Functions.py file in the folder that I'm currently working in (with another notebook file) -in my current notebook file (where I'm doing my analysis), I import my module with 'import Functions'. - -So far, the import of my own module seems to work. However, some of my self-made functions use functions from built-in modules. E.g. my plot_lines() function uses math.ceil() somewhere in the code. Therefore, I imported 'math' in my analysis notebook as well. But when I try to run the function plot_lines() in my notebook, I get the error ""NameError: name 'math' is not defined"". -I tried to solve this error by adding the code 'import math' to the function in my module as well, but this did not resolve the issue. -So my question is: how can I use functions from built-in Python modules in my own modules? -Thanks so much in advance for any help!","If anyone encounters the same issue: -add 'import math' to your own module. -Make sure that you actually reload your adjusted module, e.g. by restarting your kernel!",0.0,False,1,6335 -2019-10-10 14:40:43.443,how to post-process raw images using rawpy to have the same effect with default output like ISP in camera?,"I use the rawpy module in Python to post-process raw images; however, no matter how I set the params, the output is different from the default RGB of the camera ISP. Does anyone know how to handle this? 
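The fix from the answer above (putting import math inside the module, then reloading) can be demonstrated with importlib; Functions.py and plot_lines are the names from the question, and the module body here is a small invented stand-in:

```python
import importlib
import pathlib
import sys
import tempfile

# write a small stand-in for the user's Functions.py
module_dir = tempfile.mkdtemp()
pathlib.Path(module_dir, "Functions.py").write_text(
    "import math  # the import must live inside the module itself\n"
    "def plot_lines(n):\n"
    "    return math.ceil(n)\n"
)

sys.path.insert(0, module_dir)
import Functions

print(Functions.plot_lines(2.3))  # 3

# after editing the file, a plain re-import is a no-op;
# pick up the changes with reload (or by restarting the kernel)
importlib.reload(Functions)
```

Importing math in the notebook does not help because each module resolves names in its own namespace, not the caller's.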
-I have tried the following ways: -Default: -output = raw.postprocess() -Use Camera White balance: -output = raw.postprocess(use_camera_wb=True) -No auto bright: -output = raw.postprocess(use_camera_wb=True, no_auto_bright=True) -None of these could recover the RGB image as the camera ISP output.","The dcraw/libraw/rawpy stack is based on publicly available (reverse-engineered) documentation of the various raw formats, i.e., it's not using any proprietary libraries provided by the camera vendors. As such, it can only make an educated guess at what the original camera ISP would do with any given image. Even if you have a supposedly vendor-neutral DNG file, chances are the camera is not exporting everything there in full detail. -So, in general, you won't be able to get the same output.",0.0,False,1,6336 -2019-10-11 00:23:12.790,How does TF know what object you are finetuning for,"I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs, (detections are random does not actually seem to be detecting rather just randomly placing an inference). I performed 20,000 steps and had a loss of 2.3. -I was wondering how TF knows that what I am training it on with my custom label map -ID:1 -Name: 'boat' -Is the same as what it regards as a boat ( with an ID of 9) in the mscoco label map. -Or whether, by using an ID of 1, I am training the models' idea of what a person looks like to be a boat? -Thank you in advance for any advice.","so I managed to figure out the issue. 
-We created the annotation tool from scratch. The issue that was causing the underfitting whenever we trained, regardless of the number of steps or the various fixes I tried to implement, was that when creating bounding boxes there was no check that the xmin and ymin coordinates were less than xmax and ymax. I did not realize this would be such a large issue, but after creating a very simple check to ensure the coordinates are correct, training ran smoothly.",0.0,False,2,6337 -2019-10-11 00:23:12.790,How does TF know what object you are finetuning for,"I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs, (detections are random does not actually seem to be detecting rather just randomly placing an inference). I performed 20,000 steps and had a loss of 2.3. -I was wondering how TF knows that what I am training it on with my custom label map -ID:1 -Name: 'boat' -Is the same as what it regards as a boat ( with an ID of 9) in the mscoco label map. -Or whether, by using an ID of 1, I am training the models' idea of what a person looks like to be a boat? -Thank you in advance for any advice.","The model works with the category labels (numbers) you give it. The string ""boat"" is only a translation for human convenience in reading the output. -If you have a model that has learned to identify a set of 40 images as class 9, then giving it a very similar image that you insist is class 1 will confuse it. Doing so prompts the model to elevate the importance of differences between the 9 boats and the new 1 boats. If there are no significant differences, then the change in weights will find unintended features that you don't care about. 
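The very simple coordinate check from the annotation-tool answer above might look like this (a sketch; the helper name is made up):

```python
def normalize_bbox(xmin, ymin, xmax, ymax):
    # Annotation tools without validation can emit boxes drawn "backwards";
    # swap coordinates so that min < max always holds on both axes.
    if xmin > xmax:
        xmin, xmax = xmax, xmin
    if ymin > ymax:
        ymin, ymax = ymax, ymin
    if xmin == xmax or ymin == ymax:
        raise ValueError("degenerate bounding box")
    return xmin, ymin, xmax, ymax
```

Running every annotation through a check like this before training catches the silent data corruption described above.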
-The result is a model that is much less effective.",0.0,False,2,6337 -2019-10-11 00:57:45.870,Warehouse routes between each started workorder in production order,"I'm working with odoo11 community version and currently I have a problem. -This is my explanation of the problem: -In the company I have many workcenters, and for each workcenter: -1) I want to create a separate warehouse for each workcenter -or -2) Just 1 warehouse but different storage areas for each workcenter -(currently I chose the second option) and each workcenter has its own operation type: Production -Now my problem starts. There are manufacturing orders, and each manufacturing order has a few workorders. I want to arrange it so that when a workorder is started, the products are moved to this workcenter's warehouse/storage area and stay there until the next workorder, using a different workcenter, starts; then the products are moved to the next workcenter's warehouse/storage area. -I can only set it up so that after creating a new sale order, the production order is sent to the first workcenter's storage area and stays there until all workorders in the production order are finished. I don't know how to trigger move routes between workcenters' storage areas 
for products that are still in the production stage. -Can I do this from the odoo GUI, or do I need to do this somewhere in code?","Ok, I found my answer: to accomplish what I wanted I need to use Manufacturing with a multi-level Bill of Materials. It works in such a way that a 3-step manufacturing order is divided into 3 single-step manufacturing orders, and, for example, orders 2 and 3 (which before were steps 2 and 3) use as components the products finished in the previous step, which is now an individual order.",1.2,True,1,6338 -2019-10-11 03:40:50.427,How to Connect Django with Python based Crawler machine?,"Good day folks -Recently, I made a python based web crawler machine that scrapes some news articles and a django web page that collects search title and url from users. -But I do not know how to connect the python based crawler machine and the django web page together, so I am looking for any good resources that I can reference. -If anyone knows a resource that I can reference, -Could you guys share those? -Thanks","There are numerous ways you could do this. -You could directly integrate them together. Both use Python, so the scraper would just be written as part of Django. -You could have the scraper feed the data to a database and have Django read from that database. -You could build an API from the scraper to your Django implementation. -There are quite a few options for you depending on what you need.",1.2,True,1,6339 -2019-10-11 08:52:05.923,Is it possible to make a mobile app in Django?,"I was wondering if it is possible for me to use Django code I have for my website and somehow use that in a mobile app, in a framework such as, for example, Flutter. -So is it possible to use the Django backend I have right now and use it in a mobile app? -So like the models, views etc...","Yes. There are a couple ways you could do it - -Use the Django Rest Framework to serve as the backend for something like React Native. 
-Build a traditional website for mobile and then run it through a tool like PhoneGap. -Use the standard Android app tools and use Django to serve and process data through API requests.",1.2,True,1,6340 -2019-10-11 09:16:29.590,how to simulate mouse hover in robot framework on a desktop application,"Can anyone please let me know how to simulate a mouse hover event using robot framework on a desktop application. I.e. if I mouse hover on a specific item or an object, the sub menus are listed and I need to select one of the submenu items.","It depends on the automation library that you are using to interact with the Desktop application. -The normal approach is the following: - -Find the element that you want to hover on (By ID or some other unique locator) -Get the position attribute of the element (X,Y) -Move your mouse to that position. - -In this way you don't ""hardcode"" the x,y position, which would make your test case flaky.",0.0,False,1,6341 -2019-10-11 13:46:11.070,I have a network with 3 features and 4 vector outputs. How is MSE and accuracy metric calculated?,I understand how it works when you have one column output but could not understand how it is done for 4 column outputs.,"It's not advised to calculate accuracy for continuous values. For such values you would want to calculate a measure of how close the predicted values are to the true values. This task of predicting continuous values is known as regression, and generally the R-squared value is used to measure the performance of the model. -If the predicted output is of continuous values then mean squared error is the right option -For example: -Predicted o/p vector1-----> [2,4,8] and -Actual o/p vector1 -------> [2,3.5,6] -1. Mean squared error is ((2-2)^2+(4-3.5)^2+(8-6)^2)/3 -2. Mean absolute error, etc. 
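A quick plain-Python sketch of the regression metric just described, using the example vectors from the answer above:

```python
import math


def mean_squared_error(y_true, y_pred):
    # Average of the squared differences; take the square root for RMSE.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)


# Example vectors from above: predicted [2, 4, 8], actual [2, 3.5, 6]
mse = mean_squared_error([2, 3.5, 6], [2, 4, 8])  # (0 + 0.25 + 4) / 3
rmse = math.sqrt(mse)
```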
-(2)if the output is of classes then accuracy is the right metric to decide on model performance -Predicted o/p vector1-----> [0,1,1] -Actual o/p vector1 -------> [1,0,1] -Then accuracy calculation can be done with following: -1.Classification Accuracy -2.Logarithmic Loss -3.Confusion Matrix -4.Area under Curve -5.F1 Score",0.3869120172231254,False,1,6342 -2019-10-11 13:57:41.357,What are the types of Python operators?,"I tried type(+) hoping to know more about how is this operator represented in python but i got SyntaxError: invalid syntax. -My main problem is to cast as string representing an operation :""3+4"" into the real operation to be computed in Python (so to have an int as a return: 7). -I am also trying to avoid easy solutions requiring the os library if possible.","Operators don't really have types, as they aren't values. They are just syntax whose implementation is often defined by a magic method (e.g., + is defined by the appropriate type's __add__ method). -You have to parse your string: - -First, break it down into tokens: ['3', '+', '4'] -Then, parse the token string into an abstract syntax tree (i.e., something at stores the idea of + having 3 and 4 as its operands). -Finally, evaluate the AST by applying functions stored at a node to the values stored in its children.",1.2,True,1,6343 -2019-10-12 16:46:51.550,How to rotate a object trail in vpython?,I want to write a program to simulate 5-axis cnc gcode with vpython and I need to rotate trail of the object that's moving. Any idea how that can be done?,"It's difficult to know exactly what you need, but if instead of using ""make_trail=True"" simply create a curve object to which you append points. 
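The tokenize → parse → evaluate pipeline sketched in the operator answer above can be done with the stdlib ast module, which handles the first two steps for us. A minimal sketch supporting only +, -, *, /:

```python
import ast
import operator

# Map AST binary-operator nodes to their implementing functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}


def evaluate(expr):
    """Safely evaluate a small arithmetic string such as "3+4"."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported syntax")
    return walk(ast.parse(expr, mode="eval"))
```

Unlike eval(), this rejects anything that is not a plain arithmetic expression, which keeps untrusted input safe.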
A curve object named ""c"" can be rotated using the usual way to rotate an object: c.rotate(.....).",0.0,False,1,6344 -2019-10-13 10:37:11.607,How to extract/cut out parts of images classified by the model?,"I am new to deep learning, I was wondering if there is a way to extract parts of images containing the different label and then feed those parts to different model for further processing? -For example,consider the dog vs cat classification. -Suppose the image contains both cat and dog. -We successfully classify that the image contains both, but how can we classify the breed of the dog and cat present? -The approach I thought of was,extracting/cutting out the parts of the image containing dog and cat.And then feed those parts to the respective dog breed classification model and cat breed classification model separately. -But I have no clue on how to do this.","Your thinking is correct, you can have multiple pipelines based on the number of classes. - -Training: -Main model will be an object detection and localization model like Faster RCNN, YOLO, SSD etc trained to classify at a high level like cat and dog. This pipeline provides you bounding box details (left, bottom, right, top) along with the labels. -Sub models will be multiple models trained on a lover level. For example a model that is trained to classify breed. This can be done by using models like vgg, resnet, inception etc. You can utilize transfer learning here. -Inference: Pass the image through Main model, crop out the detection objects using bounding box details (left, bottom, right, top) and based on the label information, feed it appropriate sub model and extract the results.",1.2,True,1,6345 -2019-10-13 14:56:02.397,Creating dask_jobqueue schedulers to launch on a custom HPC,"I'm new to dask and trying to use it in our cluster which uses NC job scheduler (from Runtime Design Automation, similar to LSF). I'm trying to create an NCCluster class similar to LSFCluster to keep things simple. 
-What are the steps involved in creating a job scheduler for custom clusters? -Is there any other way to interface dask to custom clusters without using JobQueueCluster? -I could find info on how to use the LSFCluster/PBSCluster/..., but couldn't find much information on creating one for a different HPC. -Any links to material/examples/docs will help -Thanks","Got it working after going through the source code. -Tips for anyone trying: - -Create a customCluster & customJob class similar to LSFCluster & LSFJob. -Override the following - - -submit_command -cancel_command -config_name (you'll have to define it in the jobqueue.yaml) -Depending on the cluster, you may need to override the _submit_job, _job_id_from_submit_ouput and other functions. - - -Hope this helps.",1.2,True,1,6346 -2019-10-13 23:43:47.973,How to run a python script using an anaconda virtual environment on mac,"I am trying to get some code working on mac and to do that I have been using an anaconda virtual environment. I have all of the dependencies loaded as well as my script, but I don't know how to execute my file in the virtual environment on mac. The python file is on my desktop so please let me know how to configure the path if I need to. Any help?",If you have a terminal open and are in your virtual environment then simply invoking the script should run it in your environment.,1.2,True,1,6347 -2019-10-14 15:59:45.917,Dynamically Injecting User Input Values into Python code on AWS?,"I am trying to deploy a Python webapp on AWS that takes a USERNAME and PASSWORD as input from a user, inputs them into a template Python file, and logs into their Instagram account to manage it automatically. -In Depth Explanation: -I am relatively new to AWS and am really trying to create an elaborate project so I can learn. I was thinking of somehow receiving the user input on a simple web page with two text boxes to input their Instagram account info (username & pass). 
Upon receiving this info, my instinct tells me that I could somehow use Lambda to quickly inject it into specific parts of an already existing template.py file, which will then be taken and combined with the rest of the source files to run the code. These source files could be stored somewhere else on AWS (S3?). I was thinking of running this using Elastic Beanstalk. -I know this is awfully involved, but my main issue is this whole dynamic injection thing. Any ideas would be so greatly appreciated. In the meantime, I will be working on it.","One way in which you could approach this would be have a hosted website on a static s3 bucket. Then, when submitting a request, goes to an API Gateway POST endpoint, This could then trigger a lambda (in any language of choice) passing in the two values. -This would then be passed into the event object of the lambda, you could store these inside secrets manager using the username as the Key name so you can reference it later on. Storing it inside a file inside a lambda is not a good approach to take. -Using this way you'd learn some key services: - -S3 + Static website Hosting -API Gateway -Lambdas -Secrets Manager - -You could also add alias's/versions to the lambda such as dev or production and same concept to API Gateways with stages to emulate doing a deployment. -However there are hundreds of different ways to also design it. And this is only one of them!",0.0,False,1,6348 -2019-10-14 18:35:11.307,how do I locate the btn by class name?,"I have this html code: - -I am trying to locate all the elements that meet this class with phyton, and selenium webdriver library: -likeBtn = driver.find_elements_by_class_name('_2ic5v') -but when I print -likeBtn -it prints -[] -I want to locate all of the buttons that much this div/span class, or aria-label -how do I do that successfully? Thanks in advance -update - when I do copy Xpath from page the print stays the same","Is it button class name dynamic or static? 
-What if you try By.CssSelector instead? -You can find the element by copying its CSS selector from the element in the browser's dev tools",0.0,False,1,6349 -2019-10-15 06:25:59.843,Trying to find text in an article that may contain quotation marks,"I'm using python's findall function with a reg expression that should work but can't get the function to output results with quotation marks in them ('""). -This is what I tried: -Description = findall('

([A-Za-z ,\.\—'"":;0-9]+).

\n', text) -The quotation marks inside the reg expression are creating the hassle and I have no idea how to get around it.",Placing the backslash before the single quote like Sachith Rukshan suggested makes it work,1.2,True,1,6350 -2019-10-16 08:45:58.897,How to design realtime deeplearnig application for robotics using python?,"I have created a machine learning software that detects objects(duh!), processes the objects based on some computer vision parameters and then triggers some hardware that puts the object in the respective bin. The objects are placed on a conveyer belt and a camera is mounted at a point to snap pictures of objects(one object at a time) when they pass beneath the camera. I don't have control over the speed of the belt. -Now, the challenge is that I have to configure a ton of things to make the machine work properly. -The first problem is the time model takes to create segmentation masks, it varies from one object to another. -Another issue is how do I maintain signals that are generated after computer vision processing, send them to actuators in a manner that it won't get misaligned with the computer vision-based inferencing. -My initial design includes creating processes responsible for a specific task and then make them communicate with one other as per the necessity. However, the problem of synchronization still persists. -As of now, I am thinking of treating the software stack as a group of services as we usually do in backend and make them communicate using something like celery and Redis queue. -I am a kind of noob in system design, come from a background of data science. I have explored python's multithreading module and found it unusable for my purpose(all threads run on single core). I am concerned if I used multiprocessing, there could be additional delays in individual processes due to messaging and thus, that would add another uncertainty to the program. 
-Additional Details: - -Programming Frameworks and Library: Tensorflow, OpenCV and python -Camera Resolution: 1920P -Maximum Accutuation Speed: 3 triggers/second -Deep Learning Models: MaskRCNN/UNet - -P.S: You can also comment on the technologies or the keywords I should search for because a vanilla search yields nothing good.","Let me summarize everything first. - -What you want to do - -The ""object"" is on the conveyer belt -The camera will take pictures of the object -MaskRCNN will run to do the analyzing - -Here are some problems you're facing - -""The first problem is the time model takes to create segmentation masks, it varies from one object to another."" - --> if you want to reduce the processing time for each image, then an accelerator (FPGA, Chip, etc) or some acceleration technique is needed. Intel OpenVino and Intel DL stick is a good start. --> if there are too many pictures to process then you'll have 2 choices: 1) put a lot of machines so all the job can be done or 2) select only the important job and discard others. The fact that you set the ""Maximum Accutuation"" to a fixed number (3/sec) made me think that this is the problem you're facing. A background subtractor is a good start for creating images capture triggers. - -""Another issue is how do I maintain signals that are generated after computer vision processing, send them to actuators in a manner that it won't get misaligned with the computer vision-based inferencing."" - --> a ""job distributor"" like Celery is good choice here. If the message is stacked inside the broker (Redis), then some tasks will have to wait. But this can easily by scaling up your computer. - -Just a few advice here: - -a vision system also includes the hardware parts, so a hardware specification is a must. 
-Clarify the requirements -Impossible things do exist, so sometimes you may have to reduce some factors (reliability, cost) of your project.",1.2,True,1,6351 -2019-10-16 13:41:40.667,Is there another way to plot a graph in python without matplotlib?,"As the title says, that's basically it. I have tried to install matplotlib already but: - -I am on Windows and ""sudo"" doesn't work -Every solution and answer on Stack Overflow regarding matplotlib (or some other package) not being able to be installed doesn't work for me... -I get ""Error Code 1"" - -So! Is there any other way to plot a graph in python without matplotlib? If not, can I have help with how to install matplotlib, successfully?",in cmd (command prompt) type pip install matplotlib,-0.3869120172231254,False,1,6352 -2019-10-17 06:41:12.867,File related operations python subprocess vs. native python,"I have a simple task I want to perform over ssh: return all files from a given file list that do not exist. -The way I would go about doing this would be to wrap the following in an ssh session: -for f in $(files); do stat $f > /dev/null ;done -The stdout redirect will ignore all good files and then reading the stderr will give me a list of all non-found files. -I first thought of using this bash code with the ssh part inside a subprocess.run(..., shell=True) but was discouraged to do so. Instead, paramiko was suggested. -I am trying to understand why and when native python is better than subprocessing bash - -Compatibility with different OSes (not an issue for me as the code is pretty tightly tied to Ubuntu) -Error and exception handling - this one I do get and think it's important, though catching an exception or exit code from subprocess is kinda easy too - -The con in my eyes with native python is the need to involve somewhat complicated modules such as paramiko when bash's ssh and stat seem to me more plain and easy to use -Are there any guidelines for when and how to choose bash over python? 
-This question is mainly about using a command over ssh, but is relevant for any other command that bash is doing in a short and easy way and python wraps","There are really three choices here: doing something in-process (like paramiko), running ssh directly (with subprocess), and running ssh with the shell (also with subprocess). As a general rule, avoid running the shell programmatically (as opposed to, say, upon interactive user request). -The reason is that it’s a human-oriented interface (thus the easy separation of words with spaces and shortcuts for $HOME and globbing) that is vastly underpowered as an API. Consider, for example, how your code would detect that ssh was missing: the situation doesn’t arise with paramiko (so long as it is installed), is obvious with subprocess, and is just an (ambiguous) exit code and stderr message from the shell. Also consider how you supply the command to run: it already must be a command suitable for the shell (due to limitations in the SSH protocol), but if you invoke ssh with the shell it must be encoded (sometimes called “doubly escaped”) so as to have the local shell’s interpretation be the desired multi-word command for the remote shell. -So far, paramiko and subprocess are pretty much equivalent. As a more difficult case, consider how a key verification failure would manifest: paramiko would describe the failure as data, whereas the others would attempt to interact with the user (which might or might not be present). paramiko also supports opening multiple channels over one authenticated connection; ssh does so as well but only via a complicated ControlMaster configuration involving Unix socket files (which might not have any good place to exist in some deployments). Speaking of configuration, you may need to pass -F to avoid complications from the user’s .ssh/config if it is not designed with this automated use case in mind. 
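A small sketch of the subprocess route discussed above: building the remote command with shlex.quote sidesteps the double-escaping pitfall, since only one (remote) shell ever interprets the string. The function name is made up, and actually running it requires a reachable host, so only the command construction is shown:

```python
import shlex


def build_missing_files_command(files):
    # One remote shell command that prints each file that does not exist;
    # pass the result to ssh, e.g. subprocess.run(["ssh", host, cmd], ...).
    # shlex.quote protects names containing spaces or shell metacharacters.
    return " ; ".join(
        "test -e {0} || printf '%s\\n' {0}".format(shlex.quote(f)) for f in files
    )
```

Because the command is handed to ssh as a single argv element (no local shell=True), only the remote shell's quoting rules matter.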
-In summary, libraries are designed for use cases like yours, and so it should be no surprise that they work better, especially for edge cases, than assembling your own interface from human-oriented commands (although it is very useful that such manual compositions are possible!). If installing a non-standard dependency like paramiko is a burden, at least use subprocess directly; cutting out the second shell is already a great improvement.",1.2,True,1,6353 -2019-10-17 13:02:33.333,Auto activate virtual environment in Visual Studio Code,"I want VS Code to turn venv on run, but I can't find how to do that. -I already tried to add to settings.json this line: - -""terminal.integrated.shellArgs.windows"": [""source${workspaceFolder}\env\Scripts\activate""] - -But, it throws me an 127 error code. I found what 127 code means. It means, Not found. But how it can be not found, if I see my venv folder in my eyes right now? -I think it's terminal fault. I'm using Win 10 with Git Bash terminal, that comes when you install Git to your machine.","There is a new flag that one can use: ""python.terminal.activateEnvironment"": true",0.573727155831378,False,2,6354 -2019-10-17 13:02:33.333,Auto activate virtual environment in Visual Studio Code,"I want VS Code to turn venv on run, but I can't find how to do that. -I already tried to add to settings.json this line: - -""terminal.integrated.shellArgs.windows"": [""source${workspaceFolder}\env\Scripts\activate""] - -But, it throws me an 127 error code. I found what 127 code means. It means, Not found. But how it can be not found, if I see my venv folder in my eyes right now? -I think it's terminal fault. I'm using Win 10 with Git Bash terminal, that comes when you install Git to your machine.","This is how I did it in 2021: - -Enter Ctrl+Shift+P in your vs code. 
- -Locate your Virtual Environment: -Python: select interpreter > Enter interpreter path > Find - -Once you locate your virtual env select your python version: -your-virtual-env > bin > python3. - -Now in your project you will see .vscode directory created open settings.json inside of it and add: -""python.terminal.activateEnvironment"": true -don't forget to add comma before to separate it with already present key value pair. - -Now restart the terminal. - - -You should see your virtual environment activated automatically.",1.2,True,2,6354 -2019-10-17 14:15:46.367,"Implement 1-ply, 2-ply or 3-ply search td-gammon","I've read some articles and most of them say that 3-ply improves the performance of the self-player train. -But what is this in practice? and how is that implemented?","There is stochasticity in the game because of the dice rolls, so one approach would be evaluate state positions by self play RL, and then while playing do a 2-ply search over all the possible dice combinations. That would be 36 + 6 i.e. 42 possible rolls, and then you have to make different moves that are available which increases the breath of the tree to an insane degree. I tried this and it failed because my Mac could not handle such computation. Instead what we could do is just randomize a few dice rolls and perform a MiniMax tree search with Alpha Beta pruning ( using the AfterState value function). -For a 1 ply search we just use the rolled dice, or if we want to predict the value before we roll the dice then we can simply loop over all the possible combinations. Then we just argmax over the afterstates.",0.0,False,1,6355 -2019-10-17 17:03:00.937,Most efficient way to execute 20+ SQL Files?,"I am currently overhauling a project here at work and need some advice. We currently have a morning checklist that runs daily and executes roughly 30 SQL files with 1 select statement each. This is being done in an excel macro which is very unreliable. 
These statements will be executed against an oracle database. -Basically, if you were re-implementing this project, how would you do it? I have been researching concurrency in python, but have not had any luck. We will need to capture the results and display them, so please keep that in mind.If more information is needed, please feel free to ask. -Thank you.","There are lots of ways depending on how long the queries run, how much data is output, are there input parameters and what is done to the data output. -Consider: -1. Don't worry about concurrency up front -2. Write a small python app to read in every *.sql file in a directory and execute each one. -3. Modify the python app to summarize the data output in the format that it is needed -4. Modify the python app to save the summary back into the database into a daily check table with the date / time the SQL queries were run. Delete all rows from the daily check table before inserting new rows -5. Have the Excel spreadsheet load it's data from that daily check table including the date / time the data was put in the table -6. If run time is slows, optimize the PL/SQL for the longer running queries -7. If it's still slow, split the SQL files into 2 directories and run 2 copies of the python app, one against each directory. -8. Schedule the python app to run at 6 AM in the Windows task manager.",0.6730655149877884,False,1,6356 -2019-10-18 20:56:03.203,How to write in discord with discord.py without receiving a message?,"I need to write some messages in discord with my bot, but I don't know how to do it. It seems that discord.py can't send messages autonomously. -Does anyone know how to do it?","I solved putting a while loop inside the function on_message. 
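Steps 2–3 of the Oracle-checklist answer above (read every *.sql file in a directory, execute each, collect results) can be sketched like this. sqlite3 stands in for Oracle here so the sketch stays self-contained; for the real system you would open cx_Oracle/oracledb connections instead, and the names are illustrative:

```python
import concurrent.futures
import pathlib
import sqlite3


def run_query(sql_path, db_path):
    # Each worker opens its own connection; DB connections are
    # generally not safe to share across threads.
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(pathlib.Path(sql_path).read_text()).fetchall()
    finally:
        conn.close()
    return sql_path, rows


def run_checklist(sql_files, db_path):
    # Run the checklist queries concurrently and map each file to its rows.
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        return dict(pool.map(lambda p: run_query(p, db_path), sql_files))
```

The returned dict (file → rows) is then easy to summarize into the daily-check table or a spreadsheet.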
-So I need to send only a message and then my bot can write as many messages as he wants",0.0,False,1,6357 -2019-10-20 03:58:47.513,How can I cancel an active boto3 s3 file_download?,"I'm using boto3 to download files from an s3 bucket & I need to support canceling an active file transfer in my client UI - but I can't find how to do it. -There is a progress callback that I can use for transfer status, but I can not cancel the transfer from there. -I did find that boto3's s3transfer.TransferManager object has a .shutdown() member, but it is buggy (.shutdown() passes the wrong params to ._shutdown() a few lines below it) & crashes. -Is there another way to safely cancel an active file_download?","Can you kill the process associated with the file? -kill $(ps -ef | grep 'process-name' | awk '{print $2}')",0.2012947653214861,False,1,6358 -2019-10-20 13:24:41.453,How do I limit the number of times a character appears in a string in python?,"I'm a beginner, its been ~2 months since i started learning python. -I've written a code about a function that takes two strings, and outputs the common characters between those 2 strings. The issue with my code is that it returns all common characters that the two inputs have. For example: -input: common, moron -the output is ""oommoon"" when ideally it should be ""omn"". -i've tried using the count() function, and then the replace function, but it ended up completely replacing the letters that were appearing more than once in the output, as it should. -how should i go about this? i mean it's probably an easy solution for most of the ppl here, but what will the simplest approach be such that i, a beginner with okay-ish knowledge of the basics, understand it?","You can try this: -''.join(set(s1).intersection(set(s2)))",0.0,False,1,6359 -2019-10-20 20:53:03.697,How to iterate over a dictionary with tuples?,"So,I need to iterate over a dictionary in python where the keys are a tuple and the values are integers. 
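Building on the set-intersection one-liner in the common-characters answer above: sets lose order, and the asker's desired output for "common"/"moron" is "omn". An order-preserving, duplicate-free variant:

```python
def common_chars(s1, s2):
    # Keep s1's order, drop duplicates (dict preserves insertion order),
    # and keep only characters that also occur in s2.
    s2_chars = set(s2)
    return "".join(dict.fromkeys(c for c in s1 if c in s2_chars))
```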
-I only need to print out the keys and values. -I tried this: -for key,value in dict: -but it didn't work because it assigned the first element of the tuple to the key and the second to the value. -So how should I do it?","Just use -for key in dict -and then access the value with dict[key]",0.1016881243684853,False,1,6360 -2019-10-21 08:36:05.530,how to use 1D-convolutional neural network for non-image data,"I have a dataset that I have loaded as a data frame in Python. It consists of 21392 rows (the data instances, each row is one sample) and 79 columns (the features). The last column, i.e. column 79, has string type labels. I would like to use a CNN to classify the data in this case and predict the target labels using the available features. This is a somewhat unconventional approach though it seems possible. However, I am very confused on how the methodology should be as I could not find any sample code/pseudo code guiding on using a CNN for classifying non-image data, either in Tensorflow or Keras. Any help in this regard will be highly appreciated. Cheers!","You first have to know if it is sensible to use a CNN for your dataset. You could use a sliding 1D-CNN if the features are sequential, e.g. ECG, DNA, audio. However, I doubt that is the case for you. Using a Fully Connected Neural Net would be a better choice.",0.3869120172231254,False,1,6361 -2019-10-21 09:25:49.003,Training in Python and Deploying in Spark,"Is it possible to train an XGboost model in python and use the saved model to predict in spark environment ? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible ? -edit: -Thanks all for the answer , but my question is really this. I see the below issues when I train and predict different bindings of XGBoost. - -During training I would be using XGBoost in python, and when  predicting I would be using XGBoost in mllib. 
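The dict[key] suggestion from the tuple-keys answer above, spelled out; .items() also works as long as the tuple key is unpacked explicitly (example data is made up):

```python
scores = {("alice", "bob"): 3, ("carol", "dave"): 5}

# Option 1: iterate over keys and index in.
for key in scores:
    print(key, scores[key])

# Option 2: unpack the tuple key and the value in one go.
for (first, second), value in scores.items():
    print(first, second, value)
```

The original `for key, value in dict:` failed because iterating a dict yields only the keys, so each tuple key was unpacked into key/value.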
-I have to load the saved model from XGBoost python (Eg: XGBoost.model file) to be predicted in spark, would this model be compatible to be used with the predict function in the mllib -The data input formats of both XGBoost in python and XGBoost in spark mllib are different. Spark takes vector assembled format but with python, we can feed the dataframe as such. So, how do I feed the data when I am trying to predict in spark with a model trained in python. Can I feed the data without vector assembler ? Would XGboost predict function in spark mllib take non-vector assembled data as input ?",You can run your python script on spark using spark-submit command so that can compile your python code on spark and then you can predict the value in spark.,0.0,False,2,6362 -2019-10-21 09:25:49.003,Training in Python and Deploying in Spark,"Is it possible to train an XGboost model in python and use the saved model to predict in spark environment ? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible ? -edit: -Thanks all for the answer , but my question is really this. I see the below issues when I train and predict different bindings of XGBoost. - -During training I would be using XGBoost in python, and when  predicting I would be using XGBoost in mllib. -I have to load the saved model from XGBoost python (Eg: XGBoost.model file) to be predicted in spark, would this model be compatible to be used with the predict function in the mllib -The data input formats of both XGBoost in python and XGBoost in spark mllib are different. Spark takes vector assembled format but with python, we can feed the dataframe as such. So, how do I feed the data when I am trying to predict in spark with a model trained in python. Can I feed the data without vector assembler ? 
Would XGboost predict function in spark mllib take non-vector assembled data as input ?","you can - -load data/ munge data using pyspark sql, -then bring data to local driver using collect/topandas(performance bottleneck) -then train xgboost on local driver -then prepare test data as RDD, -broadcast the xgboost model to each RDD partition, then predict data in parallel - -This all can be in one script, you spark-submit, but to make the things more concise, i will recommend split train/test in two script. -Because step2,3 are happening at driver level, not using any cluster resource, your worker are not doing anything",0.0,False,2,6362 -2019-10-22 07:03:31.320,Converting the endianness type of an already existing binary file,"I have a binary file on my PC that contains data in big-endian. The file contains around 121 MB. -The problem is I would like to convert the data into little-endian with a python script. -What is currently giving me headaches is the fact that I don't know how to convert an entire file. If I would have a short hex string I could simply use struct.pack to convert it into little-endian but if I see this correctly I can't give struct.pack a binary file as input. -Is there an other function/utility that I can use to do that or how should my approach look like?","We need a document or knowledge of the file's exact structure. -Suppose that there is a 4 byte file. If this file has just a int, we need to flip that. But if it is a combination of 4 char, we should leave it as it be. -Above all, you should find the structure. Then we can talk about the translation. 
I think there is no translation tools to support general data, but you need to parse that binary file following the structure.",0.0,False,1,6363 -2019-10-24 17:06:41.350,"How to solve problem related to BigQueryError ""reason"": ""invalid"", ""location"": ""test"", ""debugInfo"": """", ""message"": ""no such field.""","Someone worked before with streaming data into (google) BigQuery using Google Cloud Functions (insert_rows_from_dataframe())? -My problem is it seems like sometimes the table schema is not updated immediately and when you try to load some data into table immediately after creation of a new field in the schema it returns an error: - -BigQueryError: [{""reason"": ""invalid"", ""location"": ""test"", ""debugInfo"": """", ""message"": ""no such field.""}]"" - -However, if I try to load again after few seconds it all works fine, so my question if someone knows the maximum period of time in seconds for this updating (from BigQuery side) and if is possible somehow to avoid this situation?","Because the API operation on BigQuery side is not atomic, you can't avoid this case. -You can only mitigate the impact of this behavior and perform a sleep, a retries, or set a Try-catch to replay the insert_rows_from_dataframe() several times (not infinite, in case of real problem, but 5 times for example) until it pass. -Nothing is magic, if the consistency is not managed on a side, the other side has to handle it!",0.3869120172231254,False,1,6364 -2019-10-24 18:04:12.393,Kivy_deps.glew.whl is not a supported wheel on this version,"I was trying to install kivy_deps.glew(version).whl with - -pip install absolute/path/to/file/kivy_deps.glew - -And I get this error: - -kivy_deps.glew(version).whl is not a supported wheel on this version - -I searched in the web and saw that some people said that the problem is because you shoud have python 2.7 and I have python 3.7. The version is of glew is cp27. 
So if this is the problem how to install python 2.7 and 3.7 in the same time and how to use both of them with pip.(i.e maybe you can use - -pip2.7 install - -For python 2.7 and - -pip install - -For python 3.7 -P.S: My PC doesn't have an internet connection that's why i'm installing it with a wheel file. I have installed all dependecies except glew and sdl2. If there is any unofficial file for these two files for python 3.7 please link them. -I know this question has been asked before in stackoverflow but I didn't get any solution from it(it had only 1 anwser tho) -Update: I uninstalled python 3.7 and installed python 2.7, but pip and python weren't commands in cmd because python 2.7 hadn't pip. So I reinstalled python 3.7",I fixed it. Just changed in the name of the file cp27 to cp37,1.2,True,1,6365 -2019-10-24 18:19:04.017,How to connect ML model which is made in python to react native app,"i made a one ML model in python now i want to use this model in react native app means that frontend will be based on react native and model is made on python,how can i connect both thing with each other",create a REST Api in flask/django to deploy your model on server.create end points for separate functions.Then call those end points in your react native app.Thats how it works.,0.1352210990936997,False,2,6366 -2019-10-24 18:19:04.017,How to connect ML model which is made in python to react native app,"i made a one ML model in python now i want to use this model in react native app means that frontend will be based on react native and model is made on python,how can i connect both thing with each other",You can look into the CoreMl library for react native application if you are developing for IOS platform else creating a restAPI is a good option. 
(Though some developers say that latency is an issue but it also depends on what kind of model and dataset you are using ).,0.0,False,2,6366 -2019-10-25 15:15:17.720,can pandas autocorr handle irregularly sample timeseries data?,"I have a dataframe with datetime index, where the data was sampled irregularly (the datetime index has gaps, and even where there aren't gaps the spacing between samples varies). -If I do: -df['my column'].autocorr(my_lag) -will this work? Does autocorr know how to handle irregularly sampled datetime data?","This is not quite a programming question. -Ideally, your measure of autocorrelation would use data measured at the same frequency/same time interval between observations. Any autocorr function in any programming package will simply measure the correlation between the series and whatever lag you want. It will not correct for irregular frequencies. -You would have to fix this yourself but 1) setting up a series with a regular frequency, 2) mapping the actual values you have to the date structure, 3) interpolating values where you have gaps/NaN, and then 4) running your autocorr. -Long story short, autocorr would not do all this work for you. -If I have misunderstood the problem you are worried about, let me know. It would be helpful to know a little more about the sampling frequencies. I have had to deal with things like this a lot.",0.0,False,1,6367 -2019-10-25 16:46:06.047,Should modules always contain a class?,"I'm writing a module which only contains functions. Is it good practice to put these inside a class, even if there are no class arguments and the __init__ function is pointless? And if so how should I write it?","It is good to build modules that contain a class for better organization and manipulation depending on how big the code is and how it will be used, but yes it is good to get use to building classes with methods in them. 
Can you post your code?",0.0,False,1,6368 -2019-10-26 03:37:29.547,Internet checksum -- Adding hex numbers together for checksum,"I came across the following example of creating an Internet Checksum: - -Take the example IP header 45 00 00 54 41 e0 40 00 40 01 00 00 0a 00 00 04 0a 00 00 05: - -Adding the fields together yields the two’s complement sum 01 1b 3e. -Then, to convert it to one’s complement, the carry-over bits are added to the first 16-bits: 1b 3e + 01 = 1b 3f. -Finally, the one’s complement of the sum is taken, resulting to the checksum value e4c0. - - -I was wondering how the IP header is added together to get 01 1b 3e?","The IP header is added together with carry in hexadecimal numbers of 4 digits. -i.e. the first 3 numbers that are added are 0x4500 + 0x0054 + 0x41e0 +...",0.2012947653214861,False,1,6369 -2019-10-27 18:36:01.290,Access to data in external hdd from jupyter notebook,"I am a python3 beginner, and I've been stuck on how to utilize my data at my scripts. -My data is stored in an external hdd and I am seeking for the way to retrieve the data to use on a program in jupyter notebook somehow. -Does anyone know how to make an access to external hdd?","Hard to say what the issue is without seeing any code. In general make sure your external hard drive is connected to your machine, and when loading your data (depends on what kind of data you want to use) specify the full path to your data.",1.2,True,1,6370 -2019-10-28 17:00:37.963,Scheduling Emails with Django?,"I want to schedule emails using Django. Example ---> I want to send registered users their shopping cart information everyday at 5:00 P.M. -How would I do this using Django? I have read a lot of articles on this problem but none of them have a clear and definite solution. I don't want to implement a workaround. -Whats the proper way of implementing this? Can this be done within my Django project or do I have to use some third-party service? -If possible, please share some code. 
Otherwise, details on how I can implement this will do.","There's no built-in way to do what you're asking. What you could do, though, is write a management command that sends the emails off and then have a crontab entry that calls that command at 5PM (this assumes your users are in the same timezone as your server). -Another alternative is using celery and celery-beat to create scheduled tasks, but that would require more work to set up.",0.3869120172231254,False,1,6371 -2019-10-28 17:46:00.680,Storing multiple values in one column,"I am designing a web application that has users becoming friends with other users. I am storing the users info in a database using sqlite3. -I am brainstorming on how I can keep track on who is friends with whom. -What I am thinking so far is; to make a column in my database called Friendships where I store the various user_ids( integers) from the user's friends. -I would have to store multiple integers in one column...how would I do that? -Is it possible to store a python list in a column? -I am also open to other ideas on how to store the friendship network information in my database.... -The application runs through FLASK","What you are trying to do here is called a ""many-to-many"" relationship. Rather than making a ""Friendships"" column, you can make a ""Friendship"" table with two columns: user1 and user2. Entries in this table indicate that user1 has friended user2.",0.1352210990936997,False,2,6372 -2019-10-28 17:46:00.680,Storing multiple values in one column,"I am designing a web application that has users becoming friends with other users. I am storing the users info in a database using sqlite3. -I am brainstorming on how I can keep track on who is friends with whom. -What I am thinking so far is; to make a column in my database called Friendships where I store the various user_ids( integers) from the user's friends. -I would have to store multiple integers in one column...how would I do that? 
-Is it possible to store a python list in a column? -I am also open to other ideas on how to store the friendship network information in my database.... -The application runs through FLASK","It is possible to store a list as a string into an sql column. -However, you should instead be looking at creating a Friendships table with primary keys being the user and the friend. -So that you can call the friendships table to pull up the list of friends. -Otherwise, I would suggest looking into a Graph Database, which handles this kind of things well too.",0.1352210990936997,False,2,6372 -2019-10-30 12:29:29.603,Display two animations at the same time with Manim,"Manim noobie here. -I am trying to run two animations at the same time, notably, I'm trying to display a dot transitioning from above ending up between two letters. Those two letters should create some space in between in the meantime. -Any advice on how to do so? Warm thanks in advance.","To apply two transformations at the same time, you can do self.play(Transformation1, Transformation2). This way, since the two Transformations are in the same play statement, they will run simultaneously.",1.2,True,1,6373 -2019-10-31 10:58:42.647,Bloomberg API how to get only the latest quote to a given time specified by the user in Python?,"I need to query from the BBG API the nearest quote to 14:00 o'clock for a number of FX currency pairs. I read the developers guide and I can see that reference data request provides you with the latest quote available for a currency however if I run the request at 14.15 it will give me the nearest quote to that time not 14.00. Historical and intraday data output too many values as I need only the latest quote to a given time. -Would you be able to advise me if there is a type of request which will give me what I am looking for.","Further to previous suggestions, you can start subscription to //blp/mktdata service before 14:00 for each instrument to receive stream of real-time ticks. 
Cache the last tick, when hitting 14:00 mark the cache as pre-14:00, then mark the first tick after as post:14, select the nearest to 14:00 from both.",0.0,False,1,6374 -2019-11-01 03:45:32.480,"In a Python bot, how to run a function only once a day?","I have a Python bot running PRAW for Reddit. It is open source and thus users could schedule this bot to run at any frequency (e.g. using cron). It could run every 10 minutes, or every 6 hours. -I have a specific function (let's call it check_logs) in this bot that should not run every execution of this bot, but rather only once a day. The bot does not have a database. -Is there a way to accomplish this in Python without external databases/files?","Generally speaking, it's better (and easier) to use the external database or file. But, if you absolutely need it you could also: - -Modify the script itself, e.g. store the date of the last run in commented out last line of the script. -Store the date of the last update on the web, for example, in your case it could be a Reddit post or google doc or draft email or a site like Pastebin, etc. -Change the ""modified date"" of the script itself and use it as a reference.",0.0,False,1,6375 -2019-11-01 12:00:44.503,how to solve fbs error 'Can not find path ./libshiboken2.abi3.5.13.dylib'?,"I have been able to freeze a Python/PySide2 script with fbs on macOS, and the app seems to work. -However, I got some errors from the freeze process stating: - -Can not find path ./libshiboken2.abi3.5.13.dylib. - -Does anyone know how to fix that?","Try to use the --runtime-tmpdir because while running the generated exe file it needs this file libshiboken2.abi3.5.13.dylib and unable hook that file. -Solution: use --add-data & --runtime-tmpdir to pyinstaller command line. 
-pyinstaller -F --add-data ""path/libshiboken2.abi3.5.13.dylib"":""**PATH"" ---runtime-tmpdir temp_dir_name your_program.py -here PATH = the directory name of that file looking for.-F = one file",0.0,False,1,6376 -2019-11-02 21:15:06.757,How to get to the first 4 numbers of an int number ? and also the 5th and 6th numbers for example,"I have a function that checks if a date ( int number ) that is written in this format: ""YYYYMMDD"" is valid or not. -My question is how do i get to the first 4 numbers for example ( the year )? -the month ( the 5th and 6th number ) and the days. -Thanks","Probably the easiest way would be to convert it to a string and use substrings or regular expressions. If you need performance, use a combination of modulo and division by powers of 10 to extract the desired parts.",0.2012947653214861,False,1,6377 -2019-11-02 21:21:16.990,Can aubio be used to detect rhythm-only segments?,"Does aubio have a way to detect sections of a piece of audio that lack tonal elements -- rhythm only? I tested a piece of music that has 16 seconds of rhythm at the start, but all the aubiopitch and aubionotes algorithms seemed to detect tonality during the rhythmic section. Could it be tuned somehow to distinguish tonal from non-tonal onsets? Or is there a related library that can do this?","Use a spectrum analyser to detect sections with high amplitude. If you program - you could take each section and make an average of the freqencies (and amplitudes) present to give you an idea of the instrument(s) involved in creating that amplitude peak. -Hope that helps - if you're using python I could give you some pointers how to program this!? -Regards -Tony",0.0,False,1,6378 -2019-11-03 23:03:00.293,Project organization with Tensorflow.keras. Should one subclass tf.keras.Model?,"I'm using Tensorflow 1.14 and the tf.keras API to build a number (>10) of differnet neural networks. (I'm also interested in the answers to this question using Tensorflow 2). 
I'm wondering how I should organize my project. -I convert the keras models into estimators using tf.keras.estimator.model_to_estimator and Tensorboard for visualization. I'm also sometimes using model.summary(). Each of my models has a number (>20) of hyperparameters and takes as input one of three types of input data. I sometimes use hyperparameter optimization, such that I often manually delete models and use tf.keras.backend.clear_session() before trying the next set of hyperparameters. -Currently I'm using functions that take hyperparameters as arguments and return the respective compiled keras model to be turned into an estimator. I use three different ""Main_Datatype.py"" scripts to train models for the three different input data types. All data is loaded from .tfrecord files and there is an input function for each data type, which is used by all estimators taking that type of data as input. I switch between models (i.e. functions returning a model) in the Main scripts. I also have some building blocks that are part of more than one model, for which I use helper functions returning them, piecing together the final result using the Keras functional API. -The slight incompatibilities of the different models are begining to confuse me and I've decided to organise the project using classes. I'm planing to make a class for each model that keeps track of hyperparameters and correct naming of each model and its model directory. However, I'm wondering if there are established or recomended ways to do this in Tensorflow. -Question: Should I be subclassing tf.keras.Model instead of using functions to build models or python classes that encapsulate them? Would subclassing keras.Model break (or require much work to enable) any of the functionality that I use with keras estimators and tensorboard? I've seen many issues people have with using custom Model classes and am somewhat reluctant to put in the work only to find that it doesn't work for me. 
Do you have other suggestions how to better organize my project? -Thank you very much in advance.","Subclass only if you absolutely need to. I personally prefer following the following order of implementation. If the complexity of the model you are designing, can not be achieved using the first two options, then of course subclassing is the only option left. - -tf.keras Sequential API -tf.keras Functional API -Subclass tf.keras.Model",1.2,True,1,6379 -2019-11-04 08:48:35.323,How to automate any application variable directly without GUI with Python?,"I need to automate some workflows to control some Mac applications, I have got a way to do this with Pyautogui module,but I don't want to simulate keyboard or mouse actions anymore, I think if I can get the variables under any GUI elements and program with them directly it would be better, how can I do this?","This is not possible unless the application has some kind of api. -For Web GUIs you can use Selenium and directly select the DOM elements.",0.3869120172231254,False,1,6380 -2019-11-04 20:54:43.360,Is it possible to use socketCAN protocol on MacOS,I am looking to connect to a car wirelessly using socketCAN protocol on MacOS using the module python-can on python3. I don't know how to install the socketCAN protocol on MacOS. Pls help.,"SocketCAN is implemented only for the Linux kernel. So it is not available on other operating systems. But as long as your CAN adapter is supported by python-can, you don't need SocketCAN.",0.0,False,1,6381 -2019-11-05 07:32:04.260,Is there any built-in functionality in django to call a method on a specific day of the month?,"Brief intro of the app: - -I'm working on MLM Webapp and want to make payment on every 15th and last day of every month. -Calculation effect for every user when a new user comes into the system. 
- -What I did [ research ] - -using django crontab extension -celery - -Question is: --- Concern about the database insertion/update query: - -on the 15th-day hundreds of row generating with income calculation for users. so is there any better option to do that? -how to observe missed and failed query transaction? - -Please guide me, how to do this with django, Thanks to everyone!","For your 1st question, i don't think there will be any issue if you're using celery and celery beat for scheduling this task. Assuming your production server has 2 cores (so 4 threads hopefully), you can configure your celery worker (not the beat scheduler) to run using 1 worker with 1/2 thread. At the 15th of a month, beat will see that a task is due and will call your celery worker to accomplish this task. While doing this your worker will be using 1 thread and the other threads will be open (so your server won't go down). There are different ways to configure your celery worker depending on your use case (e.g. using gevent rather than regular thread), but the basic config should be fine. -Well I think you should keep a column in your table to track which ones were successfully handled by your code, and which failed. Celery dashboards will only show if total work succeeded or not, and won't give any further insights. -Hope this helps!",1.2,True,1,6382 -2019-11-05 12:45:32.743,Cluster identification with NN,"I have a dataframe containing the coordinates of millions of particles which I want to use to train a Neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation but for my purpose not that relevant). -the challenge is now to build a network which does this clustering after learning from the huge data. there are also a few more features in the dataframe like clustersize, amount of particles in a cluster etc. 
-since this is not a classification problem but more a identification of clusters-challenge what kind of neural network should i use? I have also problems to build this network: for example a CNN which classifies wheather there is a dog or cat in the picture, the output is obviously binary. so also the last layer just consists of two outputs which represent the probability for being 1 or 0. But how can I implement the last layer when I want to identify clusters? -during my research I heard about self organizing maps. would these networks do the job? -thank you","These particles build individual clusters which are already identified - and labeled; meaning that every particle is already assigned to its - correct cluster (this assignment is done by a density estimation but - for my purpose not that relevant). - the challenge is now to build a network which does this clustering - after learning from the huge data. - -Sounds pretty much like a classification problem to me. Images themselves can build clusters in their image space (e.g. a vector space of dimension width * height * RGB). - -since this is not a classification problem but more a identification - of clusters-challenge what kind of neural network should i use? - -You have data of coordinates, you have labels. Start with a simple fully connected single/multi-layer-perceptron i.e. vanilla NN, with as many outputs as number of clusters and softmax-activation function. -There are tons of blogs and tutorials for Deep Learning libraries like keras out there in the internet.",0.0,False,2,6383 -2019-11-05 12:45:32.743,Cluster identification with NN,"I have a dataframe containing the coordinates of millions of particles which I want to use to train a Neural network. These particles build individual clusters which are already identified and labeled; meaning that every particle is already assigned to its correct cluster (this assignment is done by a density estimation but for my purpose not that relevant). 
-the challenge is now to build a network which does this clustering after learning from the huge data. there are also a few more features in the dataframe like clustersize, amount of particles in a cluster etc. -since this is not a classification problem but more a identification of clusters-challenge what kind of neural network should i use? I have also problems to build this network: for example a CNN which classifies wheather there is a dog or cat in the picture, the output is obviously binary. so also the last layer just consists of two outputs which represent the probability for being 1 or 0. But how can I implement the last layer when I want to identify clusters? -during my research I heard about self organizing maps. would these networks do the job? -thank you","If you want to treat clustering as a classification problem, then you can try to train the network to predict whether two points belong to the same clusters or to different clusters. -This does not ultimately solve your problems, though - to cluster the data, this labeling needs to be transitive (which it likely will not be) and you have to label n² pairs, which is expensive. -Furthermore, because your clustering is density-based, your network may need to know about further data points to judge which ones should be connected...",0.2012947653214861,False,2,6383 -2019-11-05 20:27:30.517,Implementing a built in GUI with pymunk and pygame in Python?,"I am looking to make a python program in which I can have a sidebar GUI along with an interactive 2d pymunk workspace to the right of it, which is to be docked within the same frame. -Does anyone know how I might implement this?","My recommendation is to use pygame as your display. If an object is chosen, you can add it to the pymunk space at the same time as using pymunk to get each body's space and draw it onto the display. 
This is how I've written my games.",0.0,False,1,6384 -2019-11-06 17:07:27.240,Set PYTHONPATH for local Jupyter Notebook in VS Code,"I'm using Visual Studio 1.39.2 on Windows 10. I'm very happy that you can run Jupyter Notebook natively through VS Code as of October this year (2019), but one thing I don't get right is how to set my PYTHONPATH prior to booting up a local Jupyter server. -What I want is to be able to import a certain module which is located in another folder (because the module is compiled from C++ code). When I run a normal Python debugging session, I found out that I can set environment variables of the integrated terminal, via the setting terminal.integrated.env.linux. Thus, I set my PYTHNPATH through this option when debugging as normal. But when running a Jupyter Notebook, the local Jupyter server doesn't seem to run in the integrated terminal (at least not from what I can see), so it doesn't have the PYTHONPATH set. -My question is then, how can I automatically have the PYTHONPATH set for my local Jupyter Notebook servers in VS Code?","I'm a developer on this extension. If you have a specific path for module resolution we provide a setting for the Jupyter features called: -Python->Data Science: Run Startup Commands -That setting will run a series of python instructions in any Jupyter session context when starting up. In that setting you could just append that path that you need to sys.path directly and then it will run and add that path every time you start up a notebook or an Interactive Window session.",0.5457054096481145,False,1,6385 -2019-11-07 02:14:50.033,Script that opens cmd from spyder,I am working on a text adventure with python and the issue i am having is getting spyder to open a interactive cmd window. so far i have tried os.systems('cmd / k') to try and open this which it did but i could not get any code to run and kept getting an app could not run this file error. 
my current code runs off a import module that pulls the actual adventure from another source code file. how can i make it to where only one file runs and opens the cmd window to play the text adventure?,"(Spyder maintainer here) Cmd windows are hidden by default because there are some packages that open lot of them while running code (e.g. pyomo). -To change this behavior, you need to go to -Tools > Preferences > IPython console > Advanced settings > Windows adjustments -and deactivate the option called Hide command line output windows generated by the subprocess module.",0.2012947653214861,False,1,6386 -2019-11-07 08:11:10.283,Retrieving information from a POST without forms in Django,"I'm developing something like an API (more like a communications server? Idk what to call it!) to receive data from a POST message from an external app. Basically this other app will encounter an error, then it sends an error ID in a post message to my API, then I send off an email to the affected account. -My question is how do I handle this in Django without any form of UI or forms? I want this to pretty much be done quietly in the background. At most a confirmation screen that the email is sent. -I'm using a LAMP stack with Python/Django instead of PHP.","A Django view doesn't have to use a form. Everything that was POSTed is there in request.POST which you may access directly. (I commonly do this to see which of multiple submit buttons was clicked). -Forms are a good framework for validating the data that was POSTed, but you don't have to use their abilities to generate content for rendering. 
If the data is validated in the front-end, you can use the form validation framework to check against front-end coding errors and malicious POSTs not from your web page, and simply process the cleaned_data if form.is_valid() and do ""Something went wrong"" if it didn't (which you believe to be impossible, modulo front-end bugs or malice).",1.2,True,1,6387 -2019-11-07 08:18:43.410,PyFPDF can't add page while specifying the size,"on pyfpdf documentation it is said that it is possible to specify a format while adding a page (fpdf.add_page(orientation = '', format = '', same = False)) but it gives me an error when specifying a format. -error: - -pdf.add_page(format = (1000,100)) TypeError: add_page() got an - unexpected keyword argument 'format' - -i've installed pyfpdf via pip install and setup.py install but it doesnt work in both ways -how can i solve this?","Your problem is that two packages of pypdf exist, fpdf and fpdf2. They both use from fpdf import FPDF, but only fpdf2 has also a format= keyword in the add_page() method. -So you need to install the fpdf2 package.",0.2012947653214861,False,1,6388 -2019-11-07 11:18:08.470,Sharing variables between Python subprocesses,"I have a Python program named read.py which reads data from serial communication every second, and another python program called calculate.py which has to take the real time values from read.py. -Using subprocess.popen('read.py',shell=True) I am able to run read.py from calculate.py -May I know how to read or use the value from read.py in calculate.py? -Since the value changes every second I am confused how to proceed like, saving value in registers or producer consumer type, etc. 
-for example : from import datetime -when ever strftime %s is used, the second value is given -how to use the same technique to use variable from another script?",I can suggest writing values to a .txt file for later reading,0.3869120172231254,False,1,6389 -2019-11-07 18:11:15.747,Redeploying a Flask app in Google App Engine,"How do I redeploy an updated version of the Flask web app in Google App Engine. -For example, I have running web app and now there are new features added into it and needs redeployment. How can I do that? -Also how to remove the previous version.",Add --no-promote if you want to deploy without routing service to the latest version deployed.,0.0,False,1,6390 -2019-11-08 13:21:30.283,"Select a ""mature"" curve that best matches the slope of a new ""immature"" curve","I have a multitude of mature curves (days are plotted on X axis and data is >= 90 days old so the curve is well developed). -Once a week I get a new set of data that is anywhere between 0 and 14 days old. -All of the data (old and new), when plotted, follows a log curve (in shape) but with different slopes. So some weeks have a higher slope, curve goes higher, some smaller slope, curve is lower. At 90 days all curves flatten. -From the set of ""mature curves"" I need to select the one whose slope matches the best the slope of my newly received date. Also, from the mature curve I then select the Y-value at 90 days and associate it with my ""immature""/new curve. -Any suggestions how to do this? I can seem to find any info. -Thanks much!","This seems more like a mathematical problem than a coding problem, but I do have a solution. -If you want to find how similar two curves are, you can use box-differences or just differences. -You calculate or take the y-values of the two curves for each x value shared by both the curves (or, if they share no x-values because, say, one has even and the other odd values, you can interpolate those values). 
-Then you take the absolute difference of the two y-values for every x-value. -Then you sum up those differences for all x-values. -The resulting number represents how different the two curves are. -Optionally, you can square the differences before summing up, but that depends on what definition of ""likeness"" you are using.",0.0,False,1,6391 -2019-11-09 22:53:51.393,Get N random non-overlapping substrings of length K,"Let's say we have a string R of length 20000 (or another arbitrary length). I want to get 8 random non-overlapping substrings of length k from string R. -I tried to partition string R into 8 equal-length partitions and get the [:k] of each partition, but that's not random enough to be used in my application, and the condition for the method to work cannot easily be met. -I wonder if I could use the built-in random package to accomplish the job, but I can't think of a way to do it. How can I do it?","You could simply run a loop, and inside the loop use the random package to pick a starting index and extract the substring starting at that index. Keep track of the starting indices that you have used so that you can check that each substring is non-overlapping. As long as k isn't too large, this should work quickly and easily. -The reason I mention the size of k is that if it is large enough, it could be possible to select substrings that leave no room for 8 non-overlapping ones. But that only needs to be considered if k is quite large with respect to the length of the original string.",0.0,False,1,6392 -2019-11-10 10:49:15.520,How to run previously created Django Project,"I am new to Django. I am using Python 3.7 with Django 2.2.6. -My Django development environment is as below. - -I am using Microsoft Visual Studio Code on a Windows 8.1 computer -To give the commands I am using 'DOS Command Prompt' & 'Terminal window' in
-Created a virtual environment named myDjango -Created a project in the virtual environment named firstProject -Created an app named firstApp. - -At the first time I could run the project using >python manage.py runserver -Then I had to restart my computer. - -I was able to go inside the previously created virtual environment using -workon myDjango command. - -But my problem is I don't know how to go inside the previously created project 'firstProject' and app 'firstApp' using the 'Command prompt' or using the 'VSCode Terminal window' -Thanks and regards, -Chiranthaka","Simply navigate to the folder containing the app you want to manage using the command prompt. -The first cd should contain your parent folder. -The second cd should contain the folder that has your current project. -The third cd should be the specific app you want to work on. -After that, you can use the python manage.py to keep working on your app.",1.2,True,1,6393 -2019-11-10 15:41:49.253,python baserequesthandler client address is real established ip?,"i have a question for you -I'm using the udp socket server using baserequesthandler on python -I want to protect the server against spoofing - source address changes. -Does client_address is the actual ip address of established to server ? -If not, how do I get the actual address?","Authenticate the packets so that you know that every message in session X from source address Y is from the same client. -By establishing a shared session key which is then used along with a sequence number to produce a hash of the packet keyed by the (sequence, session_key) pair. Which is then included in every packet. This can be done in both directions protecting both the client and server. -When you receive a packet you use its source address and the session number to look up the session, then you compute HMAC((sequence, session_key), packet) and check if the MAC field in the message matches. If it doesn't discard the message. 
-This might not be a correct protocol but it is close enough to demonstrate the principle.",0.0,False,1,6394 -2019-11-10 19:22:04.930,How to use PostgreSQL in Python Pyramid without ORM,"Do I need SQLAlchemy if I want to use PostgreSQL with Python Pyramid, but I do not want to use the ORM? Or can I just use the psycopg2 directly? And how to do that?","Even if you do not want to use ORM, you can still use SQLAlchemy's query -language. -If you do not want to use SQLAlchemy, you can certainly use psycopg2 directly. Look into Pyramid cookbook - MongoDB and Pyramid or CouchDB and Pyramid for inspiration.",0.0,False,1,6395 -2019-11-10 23:16:41.710,Python 3.7.3 Inadvertently Installed on Mac OS 10.15.1 - Included in Xcode Developer Tools 11.2 Now?,"I decided yesterday to do a clean install of Mac OS (as in, erase my entire disk and reinstall the OS). -I am on a Macbook Air 2018. I did a clean install of Mac OS 10.15.1. -I did this clean install due my previous Python environment being very messy. -It was my hope that I could get everything reigned in and installed properly. -I've started reinstalling my old applications, and took care to make sure nothing was installed in a weird location. -However, when I started setting up VS Code, I noticed that my options for Python interpreters showed 4 options. They are as follows: - -Python 2.7.16 64-bit, located in /usr/bin/python -Python 2.7.16 64-bit, located in /System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -Python 3.7.3 64-bit, located in /user/bin/python -Python 3.7.3 64-bit, located in /Library/Developer/CommandLineTools/usr/bin/python3 - -In terminal, if I enter where python python3 -it returns -/usr/bin/python /usr/bin/python3. -How in the world did python3 get there? -My only idea is that it now is included in the Xcode Developer Tools 11.2 package, as I did install that. I cannot find any documentation of this inclusion. -Any ideas how this got here? 
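Returning to the UDP spoofing answer above, the keyed-MAC check can be sketched with Python's standard hmac module (the (session_key, sequence) keying scheme here is an illustrative assumption, not a vetted protocol):

```python
import hashlib
import hmac

def packet_mac(session_key: bytes, sequence: int, payload: bytes) -> bytes:
    """MAC keyed by (session_key, sequence) so replayed or spoofed packets fail."""
    key = session_key + sequence.to_bytes(8, "big")
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_packet(session_key: bytes, sequence: int, payload: bytes, mac: bytes) -> bool:
    # compare_digest performs a constant-time comparison
    return hmac.compare_digest(packet_mac(session_key, sequence, payload), mac)
```

On receipt, the server looks up the session by source address, recomputes the MAC with the stored key and expected sequence number, and discards the packet if verify_packet returns False.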
More importantly, how do I remove it? I want to use Homebrew for all of my installation needs. Also, why does VS Code show 4 options? -Thanks!","The command line tool to run the python 2.7 environment is at /usr/bin/python, but the framework and dependencies for it are in /System. This includes the Python.app bundle, which is just a wrapper for scripts that need to run using the Mac's UI environment. -Although these files are separate executables, it's likely that they point to the same environment. -Every MacOS has these. -Catalina does indeed also include /usr/bin/python3 by default. The first time you run it, the OS will want to download Xcode or the Command line tools to install the 'complete' python3. So these pair are also the same environment. -I don't think you can easily remove these, due to the security restrictions on system files in Catalina. -Interestingly, Big Sur still comes with python2 !",1.2,True,1,6396 -2019-11-11 00:39:28.400,"How to access MySQL database that is on another machine, located in a different locations (NOT LOCAL) with python","I am finished with my project and now I want to put it on my website where people could download it and use it. My project is connected to my MySQL and it works on my machine. On my machine, I can read, and modify my database with python. It obviously, will not work if a person from another country tries to access it. How can I make it so a person from another town, city, or country could access my database and be able to read it? -I tried using SSH but I feel like it only works on a local network. -I have not written a single line of code on this matter because I have no clue how to get started. -I probably know how to read my database on a local network but I have no clue how to access it from anywhere else. -Any help, tips, or solutions would be great and appreciated. -Thank you!","If I'm understanding correctly, you want to run a MySQL server from your home PC and allow others to connect and access data? 
Well, you would need to make sure the correct port is forwarded in your router and firewall, default is TCP 3306. Then simply provide the user with your current IP address (could change). - -Determine the correct MySQL Server port being listened on. -Allow port forwarding on the TCP protocol and the port you determined, default is 3306. -Allow incoming connections on this port from software firewall if any. -Provide the user with your current IP Address, Port, and Database name. -If you set login credentials, make sure the user has this as well. -That's it. The user should be able to connect with the IP Address, Port, Database Name, Username, and Password.",0.0,False,1,6397 -2019-11-12 04:54:01.377,Is there a dynamic scheduling system for better implementation of a subscription based payment system?,"I was making a subscription payment system from scratch in python in Django. I am using celery-beat for a scheduled task with RabbitMQ as a queue broker. django_celery_beat uses DatabaseScheduler which is causing problems. - -Takes a long time to dispatch simple-task to the broker. I was using it to expire users. For some expiration tasks, it took around 60 secs - 150secs. But normally it used to take 100ms to 500ms. -Another problem is that, while I re-schedule some task, while it is being written into the database it blocks the scheduler for some bizarre reason and multiple tasks are missed because of that. - -I have been looking into Apache Airflow because it is marketed as an industry-standard scheduling solution. -But I don't think, it is applicable and feasible for my small project. -If you have worked and played with a subscription payment system, can you advise me how to go forward with this?","I have a long winding solution that I implemented for a similar project. 
- -First I save the schedule as a model in the database -Next I have a cron job that gets the entries that need to be run for that day -Then I schedule those jobs as normal Celery jobs by setting the ETA based on the time set in the schedule model. - -This way Celery just runs off the messages from Redis in my case. Try it if you don't get a direct answer.",0.0,False,1,6398 -2019-11-12 12:52:29.447,How do I customise the flask-user registration and login functions?,"I want to customise the functions that process the results of completing the flask-user registration and login forms. I know how to customise the HTML forms themselves, but I want to change how flask-user performs the registration process. For example, I want to prevent the flask-user login and registration process from creating flash messages, and I want registration to process a referral code. -I understand how to add an _after_registration_hook to perform actions after the registration function has completed, but this doesn't allow me to remove the flash messages that are created in the login and registration processes. -My custom login and registration processes would build on the existing flask-user login and registration functions with functionality added or removed.","You seem to be asking about the flask-user package - however you tagged this with flask-security (which is a different package but offers similar functionality). I can answer for flask-security-too (my fork of the original flask-security) - if you are/want to use flask-user - it might be useful to change your tags. -In a nutshell - for flask-security - you can turn off ALL flashes with a config variable. -For registration - you can easily override the form and add a referral code - and validate/process that as part of form validation.",0.0,False,1,6399 -2019-11-13 13:47:52.050,How can I debug a two-language program?,"I use Python as a high-level wrapper and a loaded C++ kernel in the form of a binary library to perform calculations.
I debug the high-level Python code in the Eclipse IDE in the usual way, but how do I debug the C++ code? -Thank you in advance for your help.","Try using gdb's ""attach <pid>"" command (or the ""gdb -p <pid>"" command-line option) to attach to the Python process that has the C++ kernel library loaded.",0.3869120172231254,False,1,6400 -2019-11-13 08:29:13.933,Why my Python command doesn't work in Windows 10 CMD?,I have added the C:\Users\Admin\Anaconda3\python.exe path to my system environment variables PATH but still when I run the python command it opens the Windows app store! Why does this happen and how can I fix it?,"the PATH variable should contain -C:\Users\Admin\Anaconda3 -not -C:\Users\Admin\Anaconda3\python.exe",1.2,True,1,6401 -2019-11-13 11:59:54.057,Automate downloading of certain csv files from a website,"I am trying to automate downloading of certain csv files from a website. -This is how I manually do it: - -I log in to the website. -Click on the button export as csv. -The file gets downloaded. - -The problem is the button does not have any link to it, so I was not able to automate it using wget or requests.","You can use selenium in Python. There is an option to click using ""link text"" or ""partial link text"". It is quite easy and efficient. -In Python: driver.find_element_by_link_text(""click here"").click() -It kind of looks like this.
-GIT bash: - -$ python --version bash: - /c/Users/.../AppData/Local/Microsoft/WindowsApps/python: Permission - denied - -But if I open up a file of python 3 in this link: - -C:\Users...\AppData\Roaming\Microsoft\Windows\Start - Menu\Programs\Python 3.7 - -My Python works. -I think I have to redirect my main python file from the first link directory to the second, but I have no clue how to do this so that my Git and Sublime would be able to pick up on it.","So, I gave up and just installed the recommended link from the Microsoft Store. So now I possibly have 4 Pythons with 2 different versions in 3 locations, but hey.... it works :) -Regarding a comment below my first question: -When I run $ ls -l which python in GITbash, it gives: --rwxr-xr-x 1 ... 197121 97296 Mar 25 2019 /c/Users/.../AppData/Local/Programs/Python/Python37-32/python* - -/.../ is just my user name - -Yesterday I tried that as well, the start was identical, although I can't really remember the link, if it was the same.",0.0,False,1,6403 -2019-11-13 13:05:07.793,Is there a way to change TCP settings for Django project?,"I have been working on a project built with Django. When I ran the profiler due to slowness of a page in the project, this was a line of the result: - -10 0.503 0.050 0.503 0.050 {method 'recv_into' of '_socket.socket' objects} - -This says almost 99% of the elapsed time was spent in the method recv_into(). After some research, I learned the reason is Nagle's algorithm, which sends packets only when the buffer is full or there are no more packets to transmit. I know I have to disable this algorithm and use TCP_NODELAY, but I don't know how; also, it should only affect this Django project. -Any help would be much appreciated.","Are you using cache settings in the settings.py file?
Please check that maybe you have tcp_nodelay enabled there; if so, remove it, or try clearing the browser cache.",-0.1016881243684853,False,1,6404 -2019-11-13 14:01:57.823,How does this Fibonacci Lambda function work?,"I am a beginner in Python (self-studying) and got introduced to lambda (nameless) functions, but I am unable to deduce the below expression for the Fibonacci series (got it from Google), and no explanation is available online (Google) as to how this is evaluated (step by step). Having a lot of brain power here, I thought somebody could help me with that. Can you help evaluate this step by step and explain? -lambda n: reduce(lambda x, _: x+[x[-1]+x[-2]],range(n-2), [0, 1]) -Thanks in advance. -(Thanks xnkr, for the suggestion on a reduce function explained and yes, I am able to understand that and it was part of the self-training I did, but what I do not understand is how this works for lambda x, _ : x+[x[-1]+x[-2]],range(n-2), [0, 1]. It is not a question just about reduce but about the whole construct - there are two lambdas, one reduce, and I do not know how the expression evaluates. What does the underscore stand for, how does it work, etc.) -Can somebody take the 2 minutes to explain the whole construct here?","Break it down piece by piece: -lambda n: - defines a function that takes 1 argument (n); equivalent to an anonymous version of: def somefunc(n): -reduce() - we'll come back to what it does later; as per the docs, this is a function that operates on another function, an iterable, and optionally some initial value, in that order. These are: - -A) lambda x, _: - again, defines a function. This time, it's a function of two arguments, and the underscore as the identifier is just a convention to signal we're not gonna use it. -B) X + [ ] - append something to the value of the first arg. We already know from the fact we're using reduce that the arg is some list.
-C) The appended element is x[-1] + x[-2] - meaning the value we append to X is, in this case, the sum of the last two items already in X, before we do anything to X in this iteration. -range(n-2) is the iterable we're working on; so, a sequence of n-2 numbers. The -2 is here because the initial value (in 3) already has the first two numbers covered. -Speaking of which, [0, 1] is our predefined pair of starting values for X[-2], X[-1]. -And now we're executing. reduce() takes the function from (1) and keeps applying it to each argument supplied by the range() in (2), appending the values to a list initialized as [0, 1] in (3). So, we call I1: [0, 1] + lambda 0, [0, 1], then I2: I1 + lambda 1, I1, then I3: I2 + lambda 2, I2 and so on.",1.2,True,1,6405 -2019-11-13 16:38:11.820,Printing top few lines of a large JSON file in Python,"I have a JSON file whose size is about 5GB. I neither know how the JSON file is structured nor the names of the roots in the file. I'm not able to load the file on the local machine because of its size, so I'll be working on high computational servers. -I need to load the file in Python and print the first 'N' lines to understand the structure and proceed further in data extraction. Is there a way in which we can load and print the first few lines of JSON in Python?","You can use the command head to display the first N lines of the file, to get a sample of the JSON and see how it is structured. -Then use this sample to work on your data extraction.
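Returning to the Fibonacci lambda above, the whole construct runs as-is; the only change needed in Python 3 is importing reduce from functools:

```python
from functools import reduce

# Each reduce step appends the sum of the last two list items to the list.
fib = lambda n: reduce(lambda x, _: x + [x[-1] + x[-2]], range(n - 2), [0, 1])

print(fib(8))  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

Note that for n <= 2 the range is empty, so reduce just returns the initial value [0, 1] untouched.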
-Best regards",-0.2012947653214861,False,1,6406 -2019-11-14 00:50:56.010,"get json data from host that requires headers={'user-agent', 'cookie', x-xsrf-token'}","There is a server that contains a json dataset that I need -I can manually use chrome to login -to the url and use chrome developer tool to read the request header for said json data -I determined that the minimum required headers that should be sent to the json endpoint are ['cookie', 'x-xsrf-token', 'user-agent'] -I don't know how I can get these values so that I can automate fetching this data. I would like to use request module to get the data -I tried using selenium, to navigate to the webpage that exposes these header values, but cannot get said headers values (not sure if selenium supports this) -Is there a way for me to use request module to inch towards getting these header values...by following the request header ""bread crumbs"" so to speak? -Is there an alternative module that excels at this? -To note, I have used selenium to get the required datapoints successfully, but selenium is resource heavy and prone to crash; -By using the request module with header values greatly simplifies the workflow and makes my script reliable","Based on pguardiario's comment -Sessions cookies and csrf-token are provided by the host when a request is made against the Origin url. These values are needed to make subsequent requests against the endpoint with the JSON payload. By using request.session() against the Origin url, and then updating the header when using request.get(url, header). I was able to access the json data",1.2,True,1,6407 -2019-11-15 04:01:44.763,Regex Match for Non Hyphenated Words - Python,"I am trying to create a regex expression in Python for non-hyphenated words but I am unable to figure out the right syntax. 
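As a pure-Python alternative to the head command suggested above for the large JSON file, itertools.islice reads only the first few lines without loading the whole file into memory (the filename below is hypothetical):

```python
from itertools import islice

def preview(path, n=10):
    """Return the first n lines of a file without reading it all into memory."""
    with open(path, encoding="utf-8") as f:
        return list(islice(f, n))

# preview("huge.json", 10) would return the first ten lines of the 5 GB file.
```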
-The requirements for the regex are: - -It should not contain hyphens AND -It should contain at least 1 number - -The expressions that I tried are: - -^(?!.*-) - - -This matches all non-hyphenated words but I am not able to figure out how to additionally add the second condition. - - -^(?!.*-(?=\d{1,})) - - -I tried using a double lookahead but I am not sure about the syntax to use for it. This matches ID101 but also matches STACKOVERFLOW - -Sample Words Which Should Match: -1DRIVE , ID100 , W1RELESS -Sample Words Which Should Not Match: -Basically any non-numeric string (like STACK , OVERFLOW) or any hyphenated words (Test-11 , 24-hours) -Additional Info: -I am using the library re, compiling the regex patterns, and using re.search for matching. -Any assistance would be very helpful as I am new to regex matching and am stuck on this for quite a few hours.","I came up with - -^[^-]*\d[^-]*$ - -so we need at LEAST one digit (\d) -We need the rest of the string to contain anything BUT a - ([^-]) -We can have an unlimited number of those characters, so [^-]* -but putting them together like [^-]*\d would fail on aaa3- because the - comes after a valid match - let's make sure no dashes can sneak in before our match: ^[^-]*\d$ -Unfortunately that means that aaa555D fails. So we actually need to add the first group again: ^[^-]*\d[^-]*$ -- which says start - any number of chars that aren't dashes - a digit - any number of chars that aren't dashes - end -Depending on style, we could also do ^([^-]*\d)+$ since the order of the digits/numbers doesn't matter, we can have as many of those as we want. -However, finally... this is how I would ACTUALLY solve this particular problem, since regexes may be powerful, but they tend to make the code harder to understand...
-if (""-"" not in text) and re.search(""\d"", text):",0.3869120172231254,False,1,6408 -2019-11-15 07:07:23.633,How do I link to a specific page of a PDF document inside a cell in Excel?,"I am writing a python code which writes a hyperlink into a excel file.This hyperlink should open in a specific page in a pdf document. -I am trying something like -Worksheet.write_url('A1',""C:/Users/...../mypdf#page=3"") but this doesn't work.Please let me know how this can be done.","Are you able to open the pdf file directly to a specific page even without xlsxwriter? I can not. -From Adobe's official site: - -To target an HTML link to a specific page in a PDF file, add - #page=[page number] to the end of the link's URL. -For example, this HTML tag opens page 4 of a PDF file named - myfile.pdf: - -Note: If you use UNC server locations (\servername\folder) in a link, - set the link to open to a set destination using the procedure in the - following section. -If you use URLs containing local hard drive addresses (c:\folder), you cannot link to page numbers or set destinations.",0.3869120172231254,False,1,6409 -2019-11-16 04:57:26.760,Using import in Python,"So I’m a new programmer and starting to use Python 3, and I see some videos of people teaching the language and use “import”. My question is how they know what to import and where you can see all the things you can import. I used import math in one example that I followed along with, but I see other videos of people using import JSON or import random, and I’m curious how they find what they can import and how they know what it will do.","In all programming languages, whenever you actually need a library, you should import it. 
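Circling back to the non-hyphenated-words regex answer above, both the final pattern and the sample words can be sanity-checked in a few lines:

```python
import re

# No hyphens anywhere in the string, and at least one digit.
pattern = re.compile(r"^[^-]*\d[^-]*$")

def matches(text):
    return bool(pattern.search(text))

for word in ["1DRIVE", "ID100", "W1RELESS"]:
    assert matches(word)          # all sample "should match" words pass
for word in ["STACK", "OVERFLOW", "Test-11", "24-hours"]:
    assert not matches(word)      # no digits, or contains a hyphen
```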
For example, if you need to generate a random number, search for this function in your chosen programming language, find the appropriate library, and import it into your code.",0.1352210990936997,False,1,6410 -2019-11-16 05:22:32.853,Spyder Editor - How to Disable Auto-Closing Brackets,"Does anyone know how to make Spyder stop automatically inserting closing brackets? -It often results in complete mess when you have multiple levels of different brackets. I had a look around and could only find posts about auto-closing quotes, but I'm not really interested in these. But those brackets are making me slightly miserable. -I had a look in Preferences but the closest I could find is 'Automatic code completion'. But I certainly don't want all of it off especially when working with classes.","In Spyder 4 and Spyder 5 go to: - -Tools - Preferences - Editor - Source code - -and deselect the following items: - -Automatic insertion of parentheses, braces and brackets - -Automatic insertion of closing quotes (since it's the same nuisance than with brackets)",1.2,True,1,6411 -2019-11-17 00:01:23.980,Get all keys in a hash-table that satisfy certain arithmetic property,"Let's say I have a Hash-table, each key is defined as a tuple with 4 integers (A, B, C, D). where are integers represent a quantity of a certain attribute, and its corresponding value is a tuple of gears that satisfy (A, B, C, D). -I wanted to write a program that do the following: with any given attribute tuple (x, y, z, w), I want to find all the keys satisfying (|A - x| + |B - y| + |C - z| + |D - w|) / 4 <= i where i is a user defined threshold; return the value of these keys if exist and do some further calculation. (|A - x| means the absolute value of A - x) -To my experience, this kind of thing can be better done with Answer set programming, Haskell, Prolog and all this kind of logical programming languages, but I'm forced to use python for this is a python project... 
-I can hard code for a particular ""i"" but I really have no idea how to do this for arbitrary integers. please tell me how I can do this in pure python, Thank you very much!!!!",Just write a function that loops over all values in the table and checks them one by one. The function will take the table and i as arguments.,1.2,True,1,6412 -2019-11-18 00:40:56.233,How to create a different workflow depending on result of last run?,"I am trying to accomplish the following task using Airflow. I have an address and I want to run 3 different tasks taskA, taskB, taskC. Each task returns True if the address was detected. Store the times when each of the functions detected the address. -I want to accomplish the below logic. - -Run all three tasks to start off with. -If any of them return True, store the current time. -Wait for 1 minute and rerun only the tasks that did not return True. -If all have returned True end the job. - -I am not sure how I can accomplish selectively running only those tasks that returned False from the last run. -I have so far looked at the BranchPythonOperator but I still haven't been able to accomplish the desired result.",You can get last run status value from airflow db.,0.0,False,1,6413 -2019-11-18 18:32:02.663,How to allow my computer to download .py files from an email,"I am unable to download a .py file from an email. I get the error ""file not supported."" The file was saved from a Jupyter-notebook script. -I have Python 3.6.6 and Jupyter downloaded on my Windows 10 laptop and tried to access the file through Chrome and through my computer's email app, but this didn't resolve the problem. -Any ideas on how to make the file compatible with my computer? -EDIT: I had to have the .ipynb file sent rather than the .py file.","Generally email providers block any thing which can possibly execute on clients machine. 
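Returning to the hash-table question above, the suggested loop-and-check function might look like this (the gear table below is a made-up example):

```python
def keys_within(table, query, i):
    """Keys whose mean absolute attribute distance to query is at most i."""
    return [key for key in table
            if sum(abs(a - b) for a, b in zip(key, query)) / 4 <= i]

# Hypothetical gear table keyed by (A, B, C, D) attribute tuples.
gears = {(1, 2, 3, 4): ("gear_a",), (10, 10, 10, 10): ("gear_b",)}
print(keys_within(gears, (1, 2, 3, 5), 1))  # → [(1, 2, 3, 4)]
```

The threshold i is passed in as an ordinary argument, so it works for any user-supplied value rather than a hard-coded one.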
-The best option will be to share it via email as .py.txt, -or use a cloud drive.",0.0,False,1,6414 -2019-11-19 12:15:40.287,Use of views in jam.py framework,"For an academic project, I am currently using the Python framework jam.py 5.4.83 to develop a back office for a new company. -I would like to use views instead of tables for reporting, but I can't find how to do it; I can only import data from tables. -So if someone has already used this framework, I would be very thankful. -Regards, -Yoan","The use of database views is not supported in Jam.py. -However, you can import tables as read-only if they are used for reporting. -Then you can build reports as you would. -Good luck.",0.0,False,1,6415 -2019-11-19 15:26:28.493,How to find the intersecting area of two sub images using OpenCv?,Let's say there are two sub images of a large image. I am trying to detect the overlapping area of the two sub images. I know that template matching can help to find the templates. But I'm not sure how to find the intersected area and remove it from either one of the sub images. Please help me out.,"MatchTemplate returns the most probable position of a template inside a picture. You could do the following steps: - -Find the (x,y) origin, width and height of each picture inside the larger one -Save them as rectangles with that data (cv::Rect r1, cv::Rect r2) -Using the & operator, find the overlap area between both rectangles (r1&r2)",0.2012947653214861,False,1,6416 -2019-11-19 19:57:43.280,How to scale numpy matrix in Python?,"I have this numpy matrix: -x = np.random.randn(700,2) -What I want to do is scale the values of the first column to the range 1.5 to 11 and the values of the second column to -0.5 to 5.0. Does anyone have an idea how I could achieve this?
Thanks in advance","subtract each column's minimum from itself -for each column of the result divide by its maximum -for column 0 of that result multiply by 11-1.5 -for column 1 of that result multiply by 5-(-0.5) -add 1.5 to column zero of that result -add -0.5 to column one of that result - -You could probably combine some of those steps.",0.3869120172231254,False,1,6417 -2019-11-20 13:02:15.587,zsh: command not found: import,"I'm using MAC OS Catalina Version 10.15.1 and I'm working on a python project. Every time I use the command ""import OS"" on the command line Version 2.10 (433), I get this message: zsh: command not found: import. I looked up and followed many of the solutions listed for this problem but none of them have worked. The command worked prior to upgrading my MAC OS. Any suggestion on how to fix it?","The file is being interpreted as zsh, not as Python. I suggest you add this as the first line: -#!/usr/bin/env python",0.2012947653214861,False,2,6418 -2019-11-20 13:02:15.587,zsh: command not found: import,"I'm using MAC OS Catalina Version 10.15.1 and I'm working on a python project. Every time I use the command ""import OS"" on the command line Version 2.10 (433), I get this message: zsh: command not found: import. I looked up and followed many of the solutions listed for this problem but none of them have worked. The command worked prior to upgrading my MAC OS. Any suggestion on how to fix it?","Don't capitalize it. -import os",0.0,False,2,6418 -2019-11-20 16:21:07.560,Starting conda prompt from cmd,"I want to start the conda prompt from cmd, because I want to use the prompt as a terminal in Atom.io. -There is no Conda.exe and the path to conda uses cmd to jump into the prompt.
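Returning to the numpy column-scaling answer above: those steps amount to min-max scaling. Here it is on a plain Python list; numpy applies the same arithmetic per column via broadcasting:

```python
def rescale(values, lo, hi):
    """Min-max scale values so they span exactly [lo, hi]."""
    mn, mx = min(values), max(values)
    return [lo + (v - mn) * (hi - lo) / (mx - mn) for v in values]

print(rescale([-2.0, 0.0, 2.0], 1.5, 11.0))  # → [1.5, 6.25, 11.0]
```

For the original matrix, apply it once per column, e.g. rescale the first column to (1.5, 11.0) and the second to (-0.5, 5.0).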
But how do I start it inside of cmd?","I guess what you want is to change to Anaconda shell using cmd, you can find the address for your Anaconda and run the following in your cmd: -%windir%\System32\cmd.exe ""/K"" ""Address""\anaconda3 -Or, you can find your Anaconda prompt shortcut, right click on that, and open its properties window. In the properties window, find Target. Then, copy the whole thing in Target and paste it into your cmd.",0.2012947653214861,False,1,6419 -2019-11-20 19:03:29.667,Prevent internal POST methods to be called by third parties,"I'm worried about the security of my web app, I'm using Django and sometimes I use AJAX to call a Django url that will execute code and then return an HttpResponse with a message according the result, the user never notice this as it's happening in background. -However, if the url I'm calling with AJAX is, for example, ""runcode/"", and the user somehow track this and try to send a request to my domain with that url (something like ""www.example.com/runcode/""), it will not run the code as Django expects the csrf token to be send too, so here goes the question. -It is possible that the user can obtain the csrf token and send the POST?, I feel the answer for that will be ""yes"", so anyone can help me with a hint on how to deny these calls if they are made without the intended objective?","Not only django but this behavior is common in all others, -You can only apply 2 solution, - -Apply CORS and just allow your domain, to block other domain to access data from your API response, but this will not effective if a user direct call your API end-point. -As lain said in comment, If data is sensitive or user's personal, add authentication in API. - -Thanks",0.0,False,1,6420 -2019-11-21 15:43:05.370,Looping through webelements with selenium Python,"I am currently trying to automate a process using Selenium with python, but I have hit a roadblock with it. The list is part of a list which is under a tree. 
I have identified the base of the tree with the following xpath -item = driver.find_element_by_xpath(""//*[@id='filter']/ul/li[1]//ul//li"") -items = item.find_elements_by_tag_name(""li"") -I am trying to loop through the ""items"" section but need to click on anything with an ""input"" tag -for k in items: - WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, ""input"")))).click() -When I execute the above I get the following error: -""TypeError: find_element() argument after * must be an iterable, not WebElement"" -For some reason .click() will not work if I use something like the below. -k.find_element_by_tag_name(""input"").click() -It only works if I use the WebDriverWait. I have had to use the WebDriverWait method anytime I needed to click something on the page. -My question is: -What is the syntax to replicate items = item.find_elements_by_tag_name(""li"") -for WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, ""input"")))).click() -i.e. how do I use a base path and append to it using the find_elements(By.TAG_NAME) methods? -Thanks in advance","I have managed to find a workaround and get Selenium to do what I need. -I had to fall back on JavaScript execution, so instead of trying to get -WebDriverWait(driver, 10).until(EC.element_to_be_clickable((k.find_element(By.TAG_NAME, ""input"")))).click() to work, I just used -driver.execute_script(""arguments[0].click();"", k.find_element_by_tag_name(""input"")) -It's doing exactly what I needed it to do.",1.2,True,1,6421 -2019-11-23 19:16:37.407,How can I update the version of SQLite in my Flask/SQLAlchemy App?,"I wish to use the latest version of SQLite3 (3.30.1) because of its new capability to handle SQL 'ORDER BY ... ASC NULLS LAST' syntax as generated by the SQLAlchemy nullslast() function. -My application folder env\Scripts contains the existing (old) version of sqlite3.dll (3.24), however when I replace it, there is no effect.
In fact, if I rename that DLL, the application still works fine with DB accesses. -So, how do I update the SQLite version for an application? -My environment: -Windows 10, 64-bit (I downloaded a 64-bit SQLite3 DLL version). I am running with PyCharm, using a virtual env.","I have found that the applicable sqlite3.dll is determined first by a Windows OS-defined lookup. It first goes through the PATH variable, finding and choosing the first version it finds in any of those paths. -In this case, probably true for all PyCharm/virtualenv setups, a version found in my user AppData\Local\Programs\Python\Python37\DLLs folder was selected. -When I moved that out of the way, it was able to find the version in my env\Scripts folder, so that the upgraded DLL was used, and the SQLAlchemy nullslast() function did its work.",1.2,True,1,6422 -2019-11-25 12:59:29.660,Python and Telethon: how to handle sw distribution,"I developed a program to interact between Telegram and other 3rd-party software. It's written in Python and I used the Telethon library. -Everything works fine, but since it uses my personal configuration including API ID, API hash, phone number and username, I would like to know how to handle all of this if I wanted to distribute the software to other people. -Of course they can't use my data, so should they log in to the Telegram development page and get all the info? Or, is there a more user-friendly way to do it?","Since the API ID and the API hash in Telegram are supposed to be distributed with your client, all you need to do is prompt the user for their phone number. -You could do this using a GUI library (like PySide2 using QInputDialog) or, if it is a command line application, using input().
Keep in mind that the user will also need a way to enter the code they receive from Telegram, and their 2FA password if set.",1.2,True,1,6423 -2019-11-25 18:14:02.303,Pyarmor Pack Python File Check Restrict Mode Failed,"So I am trying to pack my python script with pyarmor pack; however, when I pack the script it does not work, it throws check restrict mode failed. If I obfuscate the script normally with pyarmor obfuscate instead of pack, the script works fine and is obfuscated fine. This version runs no problem. Wondering how I can get pack to work, as I want my python file in an exe. -I have tried to compile the obfuscated script with pyinstaller, however this does not work either. -Wondering what else I can try?","I had this problem; it was fixed by adding --restrict=0 -For example: pyarmor obfuscate --restrict=0 app.py",0.3869120172231254,False,1,6424 -2019-11-26 14:11:58.723,how to find cosine similarity in a pre-computed matrix with a new vector?,"I have a dataframe with 5000 items(rows) and 2048 features(columns). -The shape of my dataframe is (5000, 2048). -When I calculate the cosine matrix using pairwise distance in sklearn, I get a (5000,5000) matrix. -Here I can compare each item with every other. -But now, if I have a new vector of shape (1,2048), how can I find the cosine similarity of this item with my earlier dataframe, using the (5000,5000) cosine matrix which I have already calculated? -EDIT -PS: I can append this new vector to my dataframe and calculate the cosine similarity again. But for a large amount of data it gets slow. Is there any other fast and accurate distance metric?","The initial (5000,5000) matrix encodes the similarity values of all your 5000 items in pairs (i.e. a symmetric matrix).
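As a concrete reference point, here is a minimal NumPy sketch of rebuilding the similarity matrix after stacking in a new row; the shapes are shrunk from (5000, 2048) for illustration and all names are invented:

```python
import numpy as np

def cosine_matrix(X):
    # pairwise cosine similarity: normalize each row, then take dot products
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

rng = np.random.default_rng(0)
features = rng.random((50, 20))  # stand-in for the (5000, 2048) dataframe
new_item = rng.random((1, 20))   # stand-in for the new (1, 2048) vector

# stack the new row and recompute: a (51, 51) symmetric similarity matrix
sim = cosine_matrix(np.vstack([features, new_item]))
print(sim.shape)  # (51, 51)
```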
-To have the similarities in case of a new item, concatenate to make a (5001, 2048) matrix and then estimate similarity again to get (5001,5001). -In other words, you cannot directly use the (5000,5000) precomputed matrix to get the similarity with the new (1,2048) vector.",0.0,False,2,6425 -2019-11-26 14:11:58.723,how to find cosine similarity in a pre-computed matrix with a new vector?,"I have a dataframe with 5000 items(rows) and 2048 features(columns). -The shape of my dataframe is (5000, 2048). -When I calculate the cosine matrix using pairwise distance in sklearn, I get a (5000,5000) matrix. -Here I can compare each item with every other. -But now, if I have a new vector of shape (1,2048), how can I find the cosine similarity of this item with my earlier dataframe, using the (5000,5000) cosine matrix which I have already calculated? -EDIT -PS: I can append this new vector to my dataframe and calculate the cosine similarity again. But for a large amount of data it gets slow. Is there any other fast and accurate distance metric?","Since cosine similarity is symmetric, you can compute the similarity measure with the old data matrix: the similarity between the new sample (1,2048) and the old matrix (5000,2048) gives you a vector of (5000,1). You can append this vector along the column dimension of the pre-computed cosine matrix, making it (5000,5001). Now, since you know the cosine similarity of the new sample to itself, you can append that similarity back onto the previously computed vector, making it of size (5001,1), and append this vector along the row dimension of the new cosine matrix, which makes it (5001,5001).",0.0,False,2,6425 -2019-11-27 09:52:59.383,SQLITE3 / Python - Database disk image malformed but integrity_check ok,"My actual problem is that the python sqlite3 module throws database disk image malformed. -Now there must be a million possible reasons for that.
However, I can provide a number of clues: - -I am using python multiprocessing to spawn a number of workers that all read (not write) from this DB -The problem definitely has to do with multiple processes accessing the DB, which fails on the remote setup but not on the local one. If I use only one worker on the remote setup, it works -The same 6GB database works perfectly well on my local machine. I copied it with git and later again with scp to the remote. There, the same script with the copy of the original DB gives the error -Now if I do PRAGMA integrity_check on the remote, it returns ok after a while - even after the problem occurred -Here are the versions (both OSes are Ubuntu): - -local: sqlite3.version >>> 2.6.0, sqlite3.sqlite_version >>> 3.22.0 -remote: sqlite3.version >>> 2.6.0, sqlite3.sqlite_version >>> 3.28.0 - - -Do you have any ideas on how to allow for safe ""parallel"" SELECTs?","The problem occurred for the following reason (and it had happened to me before): -When using multiprocessing with sqlite3, make sure to create a separate connection for each worker! -Apparently sharing a connection causes problems with some setups and sometimes doesn't.",0.0,False,1,6426 -2019-11-28 11:25:12.167,Fail build if coverage lowers,"I have GitHub Actions that build and test my Python application. I am also using pytest-cov to generate a code coverage report. This report is being uploaded to codecov.io. -I know that codecov.io can't fail your build if the coverage lowers, so how do I go about getting GitHub Actions to fail the build if the coverage drops? Do I have to check the previous values and compare with the new ""manually"" (having to write a script)?
Or is there an existing solution for this?","There is nothing built-in; instead, if you don't want to write a custom script, you should use one of the many integrations like SonarQube.",0.0,False,1,6427 -2019-12-01 04:16:36.060,what's command to list all the virtual environments in venv?,"I know in conda I can use conda env list to get a list of all conda virtual environments; what's the corresponding command in python venv that can list all the virtual environments on a given machine? Also, is there any way I can print/check the directory of the current venv? Somehow I have many projects that use the same name .venv for their virtual environment and I'd like to find a way to verify which venv I'm in. Thanks","Virtual environments are simply a set of files in a directory on your system. You can find them the same way you would find images or documents with a certain name. For example, if you are using Linux or macOS, you could run find / | grep bin/activate in a terminal. Not too sure about Windows, but I suspect you can search for something similar in Windows Explorer.",0.0,False,2,6428 -2019-12-01 04:16:36.060,what's command to list all the virtual environments in venv?,"I know in conda I can use conda env list to get a list of all conda virtual environments; what's the corresponding command in python venv that can list all the virtual environments on a given machine? Also, is there any way I can print/check the directory of the current venv? Somehow I have many projects that use the same name .venv for their virtual environment and I'd like to find a way to verify which venv I'm in. Thanks","I'm relatively new to python venv as well. The following is what I have found works if you created your virtual environment with python -m venv within a project folder.
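A cross-platform trick worth adding: every environment created by python -m venv carries a pyvenv.cfg file at its root, so you can enumerate venvs by searching for that marker file, and sys.prefix tells you which environment the running interpreter belongs to. A small sketch (the directory name is invented for the demo):

```python
import sys
import venv
from pathlib import Path

# Every venv made by `python -m venv` has a pyvenv.cfg marker at its root.
venv.create("demo-venv", with_pip=False)  # throwaway environment for the demo

# Enumerate venvs under the current directory by searching for that marker:
found = sorted(Path(".").rglob("pyvenv.cfg"))
print(found)

# The directory of the environment the current interpreter runs from:
print(sys.prefix)
```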
-I'm using Windows with cmd. For example, if you have a Dash folder located in C:\Dash and you created a venv called testenv with -python -m venv testenv, -you can activate the virtual environment by just entering -C:\Dash\testenv\Scripts\activate, -and then you can deactivate it by just typing deactivate. -If you want to list the venvs that you have, go to the C:\Dash folder and type -dir -in cmd; it will list the virtual envs you have, similar to conda env list. If you want to delete that virtual env, simply do -rm -rf testenv -You can list the packages installed within that venv by doing -pip freeze. -I hope this helps. Please correct me if I'm wrong.",0.2012947653214861,False,2,6428 -2019-12-02 14:13:17.757,Localization with RPLider (Python),"We are currently messing around with a Slamtec RPLidar A1. We have a robot with the lidar mounted on it. -Our aim is to retrieve an x and y position in the room (it's a closed room that could have more than 4 corners, but the whole room should be recognized; it does not matter where the RPLidar stands). -Anyway, the floormap is not given. -So far we got an x and y position with BreezySLAM, but we noticed that wherever the RPLidar stands, it always sees itself as the center, so we do not really know how to retrieve a correct x and y from this information. -We are new to this topic and maybe someone can give us a good hint or link to find a simple solution. -PS: We are not intending to track the movement of the robot.","Any sensor sees itself in the center of the environment. The idea of recording a map beforehand is a good one; if you can't, you can presume that any one of the corners is your zero point. If your room is not square, you can measure the length of the walls and narrow it down to 2 points.
Unfortunately, if you don't have any additional markers in the environment, or you can't create a map before actual use, I'm afraid there is no chance for the robot to correctly understand where the desired (0,0) point is.",0.0,False,1,6429 -2019-12-02 17:35:56.597,A function possible inputs in PYTHON,"How can I quickly check what the possible inputs to a specific function are? For example, I want to plot a histogram for a data frame: df.hist(). I know I can change the bin size, so I know there is probably a way to give the desired bin size as an input to the hist() function. If instead of bins = 10 I use df.hist(bin = 10), Python obviously gives me an error and says hist does not have the property bin. -I wonder how I can quickly check what the possible inputs to a function are.",Since your question tags contain jupyter notebook I am assuming you are working in it. In Jupyter Notebook 2.0 pressing Shift+Tab will show you the function arguments.,0.296905446847765,False,1,6430 -2019-12-03 02:24:09.220,How to send data from Python script to JavaScript and vice-versa?,"I am trying to make a calculator (with matrix calculation also). I want to make the interface in JavaScript and the calculation stuff in Python. But I don't know how to send parameters from Python to JavaScript and from JavaScript to Python. -Edit: I want to send data via JSON (if possible).","You would have to essentially set both of them up as APIs and access them via endpoints. -For JavaScript, you can use Node to set up your API endpoint, and for Python use Flask.",0.2012947653214861,False,1,6431 -2019-12-03 05:29:29.123,Is there any operator in Python to check and compare the type and value?,"I know that other languages like JavaScript have -a == and a === operator; they also have a != and a !== operator. Does Python also have a === and !== (i.e. a single operator that checks the type and compares the value at the same time, like the === operator), and if not, how can we implement it?","No, and you can't really implement it yourself either.
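A naive attempt (the helper name is invented for illustration) makes the limitation easy to see:

```python
def strict_eq(x, y):
    """Rough stand-in for JavaScript's ===: same type and equal value."""
    return type(x) is type(y) and x == y

print(strict_eq(1, 1))        # True
print(strict_eq(1, 1.0))      # False: int vs float
print(strict_eq([1], [1.0]))  # True: both are lists, and 1 == 1.0 inside them
```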
-You can check the type of an object with type, but if you just write a function that checks type(x) is type(y) and x == y, then you get results like [1] and [1.0] showing up as equivalent. While that may fulfill the requirements you stated, I've never seen a case where this wasn't an oversight in the requirements. -You can try to implement your own deep type-checking comparison, but that requires you to know how to dig into every type you might have to deal with to perform the comparison. That can be done for the built-in container types, but there's no way to make it general. -As an aside, is looks vaguely like what you want if you don't know what is does, but it's actually something entirely different. is checks object identity, not type and value, leading to results like x = 1000; x + 1 is not 1001.",0.0814518047658113,False,1,6432 -2019-12-04 11:32:33.307,Using python and spacy text summarization,"Basically I am trying to do text summarization using spacy and nltk in Python. I want to summarize a normal 6-7 line text and show the summarized text on localhost:xxxx, so whenever I run that Python file it will show on localhost. -Can anyone tell me whether it is possible or not, and if it is possible, how to do this? There would be no database involved.",You have to create a RESTful API using Flask or Django with some UI elements and call your model. You can also use displaCy (spaCy's built-in visualizer) directly on your system.,1.2,True,1,6433 -2019-12-05 06:08:12.167,Find the maximum result after collapsing an array with subtractions,"Given an array of integers, I need to reduce it to a single number by repeatedly replacing any two numbers with their difference, to produce the maximum possible result.
-Example 1 - If I have an array of [0,-1,-1,-1], then performing (0-(-1)), then (1-(-1)), and then (2-(-1)) will give 3 as the maximum possible output -Example 2 - With [3,2,1,1] we can get the maximum output of 5 { first (1-1), then (0-2), then (3-(-2)) } -Can someone tell me how to solve this question?","The other answers are fine, but here's another way to think about it: -If you expand the result into individual terms, you want all the positive numbers to end up as additive terms, and all the negative numbers to end up as subtractive terms. -If you have both signs available, then this is easy: - -Subtract all but one of the positive numbers from a negative number -Subtract all of the negative numbers from the remaining positive number - -If all your numbers have the same sign, then pick the one with the smallest absolute value and treat it as having the opposite sign in the above procedure. That works out to: - -If you have only negative numbers, then subtract them all from the least negative one; or -If you have only positive numbers, then subtract all but one from the smallest, and then subtract the result from the remaining one.",0.0,False,1,6434 -2019-12-05 17:41:59.977,Combining logistic and continuous regression with scikit-learn,"In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns. -I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C. -I would like to train a model on the columns of X to predict the columns of y. However, having tried LinearRegression on X it didn't perform so well (my variables vary over several orders of magnitude and I have to apply suitable transforms to get the logarithms, I won't go into too much detail here). -I think I need to use LogisticRegression on the boolean columns. -What I'd really like to do is combine both LinearRegression on the continuous variables and LogisticRegression on the boolean variables into a single pipeline.
Note that all the columns of y depend on all the columns of X, so I can't simply train the continuous and boolean variables independently. -Is this even possible, and if so how do I do it?","If your target data Y has multiple columns, you need to use a multi-task learning approach. Scikit-learn contains some multi-task learning algorithms for regression, like multi-task elastic-net, but you cannot combine logistic regression with linear regression because these algorithms optimize different loss functions. Also, you may try neural networks for your problem.",0.0,False,2,6435 -2019-12-05 17:41:59.977,Combining logistic and continuous regression with scikit-learn,"In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns. -I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C. -I would like to train a model on the columns of X to predict the columns of y. However, having tried LinearRegression on X it didn't perform so well (my variables vary over several orders of magnitude and I have to apply suitable transforms to get the logarithms, I won't go into too much detail here). -I think I need to use LogisticRegression on the boolean columns. -What I'd really like to do is combine both LinearRegression on the continuous variables and LogisticRegression on the boolean variables into a single pipeline. Note that all the columns of y depend on all the columns of X, so I can't simply train the continuous and boolean variables independently. -Is this even possible, and if so how do I do it?","What I understand you want to do is to train a single model that both predicts a continuous variable and a class. You would need to combine both losses into one single loss to be able to do that, which I don't think is possible in scikit-learn.
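To make "combine both losses into one single loss" concrete, here is a hedged NumPy sketch of such a joint objective (the function and array names are invented): mean-squared error on the continuous targets plus a weighted log-loss on the boolean one.

```python
import numpy as np

def combined_loss(y_cont, pred_cont, y_bool, prob_bool, alpha=1.0):
    """MSE on continuous targets + alpha-weighted log-loss on the boolean target."""
    mse = np.mean((y_cont - pred_cont) ** 2)
    p = np.clip(prob_bool, 1e-12, 1 - 1e-12)  # avoid log(0)
    bce = -np.mean(y_bool * np.log(p) + (1 - y_bool) * np.log(1 - p))
    return mse + alpha * bce

good = combined_loss(np.array([2.0, 3.0]), np.array([2.0, 3.0]),
                     np.array([1.0]), np.array([0.99]))
bad = combined_loss(np.array([2.0, 3.0]), np.array([0.0, 0.0]),
                    np.array([1.0]), np.array([0.01]))
print(good < bad)  # True: better predictions give a lower combined loss
```

A neural-network framework would minimize exactly this kind of sum by gradient descent.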
However, I suggest you use a deep learning framework (TensorFlow, PyTorch, etc.) to implement your own model with the required properties, which would be more flexible. In addition, you can also tinker with solving the above problem using neural networks, which could improve your results.",0.0,False,2,6435 -2019-12-06 14:08:01.030,How to build whl package for pandas?,"Hi, I have set up a Python 2.7 environment on Ubuntu 19.10. -I would like to build a whl package for pandas. -I pip installed pandas but do not know how to pack it into a whl package. -May I ask what I should do to pack it? -Thanks",You cannot pack back an installed wheel. Either you download a ready-made wheel with pip download or build from sources: python setup.py bdist_wheel (need to download the sources first).,1.2,True,1,6436 -2019-12-07 11:29:54.817,Walls logic in Pygame,"I'm making a game with Pygame, and now I'm stuck on how to process collisions between the player and a wall. This is a 2D RPG with cells, where some of them are walls. You look at the world from the top, like in Pacman. -So, I know that I can get a list of collisions with pygame.spritecollide() and it will return me the list of objects the player collides with. I can get the ""collide rectangle"" with player.rect.clip(wall.rect), but how can I get the player back out of the wall? -So, I had many ideas. The first was to push the player back in the opposite direction, but if the player moves, for example, both right and down and collides with a vertical wall right of itself, the player gets stuck, because it should be pushed only left, not up. -The second idea was to implement diagonal movement as one step left and one step down. But in this way we don't know which move to apply first: left or down, and the order becomes the most important factor. -So, I don't know what algorithm I should use.","If you know the location of the centre of the cell and the location of the player, you can calculate the x distance and the y distance from the wall at that point in time.
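Those two distances are enough to decide which velocity component to flip; a hedged sketch in plain Python so it is engine-agnostic (the function name and argument layout are invented):

```python
def resolve_wall_hit(player_x, player_y, vel, wall_cx, wall_cy):
    """Flip the velocity component on the axis where the player is
    furthest from the wall centre; vel is a [vx, vy] list."""
    dx = player_x - wall_cx
    dy = player_y - wall_cy
    if abs(dx) > abs(dy):   # hit a left/right face -> reverse x movement
        vel[0] = -vel[0]
    else:                   # hit a top/bottom face -> reverse y movement
        vel[1] = -vel[1]
    return vel

print(resolve_wall_hit(12, 1, [3, 2], 0, 0))  # [-3, 2]: side hit, x flipped
print(resolve_wall_hit(1, 12, [3, 2], 0, 0))  # [3, -2]: top/bottom hit, y flipped
```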
Would it be possible at that point to take the absolute value of each distance and then take the largest value as the direction to push the player in? -e.g. The player collides with the right of the wall, so the distance from the centre of the wall in the y direction should be less than the distance in x. -Therefore you know that the player collided with the left or the right of the wall and not the top; this means the push should be to the right or the left. -If the player's movement is stored in the form [x, y], then knowing whether to push left or right isn't important, since flipping the direction of movement in the x axis gives the correct result. -The push should therefore be in the x direction in this example, -e.g. player.vel_x = -player.vel_x. -This would leave the movement in the y axis unchanged, so hopefully it wouldn't result in the problem you mentioned. -Does that help?",1.2,True,1,6437 -2019-12-07 17:17:25.127,How to load values related to selected multiple options in Django,"Thank you all for always being willing to help. -I have a Django app with country and state choice fields. However, I have no idea whatsoever on how to load the related states for each country. What I mean here is, if I choose ""Nigeria"" in the list of countries, how can I make all Nigerian states automatically load in the state choice field?","You have to create a many-to-many field to a state table; then you can select multiple states per country. -This feature is available in the django-countries or django-cities packages.",0.0,False,1,6438 -2019-12-08 17:44:54.277,Find how similar a text is - One Class Classifier (NLP),"I have a big dataset containing almost 0.5 billion tweets. I'm doing some research about how firms are engaged in activism and so far I have labelled tweets which can be clustered in an activism category according to the presence of certain hashtags within the tweets.
-Now, let's suppose firms are tweeting about an activism topic without inserting any hashtag in the tweet. My code won't categorize it, and my idea was to run an SVM classifier with only one class. -This leads to the following questions: - -Is this solution data-scientifically feasible? -Do any other one-class classifiers exist? -(Most important of all) Are there any other ways to find out whether a tweet is similar to the ensemble of tweets containing activism hashtags? - -Thanks in advance for your help!","Sam H has a great answer about using your dataset as-is, but I would strongly recommend annotating data so you have a few hundred negative examples, which should take less than an hour. Depending on how broad your definition of ""activism"" is, that should be plenty to make a good classifier using standard methods.",0.2012947653214861,False,1,6439 -2019-12-09 10:34:14.533,Existing Tensorflow model to use GPU,"I made a TensorFlow model without using CUDA, but it is very slow. Fortunately, I gained access to a Linux server (Ubuntu 18.04.3 LTS), which has a GeForce 1060; the necessary components are also installed - I could test it, and the CUDA acceleration is working. -The tensorflow-gpu package is installed (only 1.14.0 is working due to my code) in my virtual environment. -My code does not contain any CUDA-related snippets. I was assuming that if I ran it on a PC with a CUDA-enabled environment, it would automatically use the GPU. -I tried with tf.device('/GPU:0'): and then reorganizing my code below it, but it didn't work. I got a strange error, which said only XLA_CPU, CPU and XLA_GPU are there. I tried it with XLA_GPU but it didn't work. -Is there any guide about how to change existing code to take advantage of CUDA?","Not enough information to give an exact answer. -Have you installed tensorflow-gpu separately? Check using pip list. -Because initially you were using tensorflow (the CPU default). -Once you want to use Nvidia, make sure to install tensorflow-gpu.
-Sometimes I had problems having both installed at the same time. It would always go for the CPU. But once I deleted tensorflow using ""pip uninstall tensorflow"" and kept only the GPU version, it worked for me.",0.0,False,1,6440 -2019-12-09 12:23:35.760,how to select the metric to optimize in sklearn's fit function?,"When using tensorflow to train a neural network I can set the loss function arbitrarily. Is there a way to do the same in sklearn when training an SVM? Let's say I want my classifier to only optimize sensitivity (regardless of the sense of it), how would I do that?","This is not possible with Support Vector Machines, as far as I know. With other models you might either change the loss that is optimized, or change the classification threshold on the predicted probability. -SVMs however minimize the hinge loss, and they do not model the probability of classes but rather their separating hyperplane, so there is not much room for manual adjustments. -If you need to focus on sensitivity or specificity, use a different model that allows maximizing that function directly, or that allows predicting the class probabilities (thinking logistic regressions, tree-based methods, for example)",1.2,True,1,6441 -2019-12-09 16:29:01.433,conda environment: does each new conda environment need a new kernel to work? How can I have specific libraries for all my environments?,"I use ubuntu (through Windows Subsystem for Linux) and I created a new conda environment, activated it, and installed a library in it (opencv). However, I couldn't import opencv in JupyterLab till I created a new kernel that uses the path of my new conda environment. So, my questions are: - -Do I need to create a new kernel every time I create a new conda environment in order for it to work? I read that in general we should use kernels for using different versions of python, but if this is the case, then how can I use a specific conda environment in JupyterLab?
Note that browsing from JupyterLab to my new env folder or using os.chdir to set up the directory didn't work. -Using the new kernel that is connected to the path of my new environment, I couldn't import matplotlib and I had to activate the new env and install matplotlib there again. However, matplotlib could be imported when I was using the default kernel Python3. -Is it possible to have some standard libraries to use with all my conda environments (i.e. install some libraries outside of my conda environments, like matplotlib, and use them in all my environments) and then have specific libraries in each of my environments? I have installed some libraries through the base environment in ubuntu but I can't import these in my new conda environment. - -Thanks in advance!","To my best understanding: -You need ipykernel in each of the environments so that jupyter can import that environment's libraries. -In my case, I have a new environment called TensorFlow; I activate it, install ipykernel, and then add it to the jupyter kernelspec. Finally I can access it in jupyter whether the environment is activated or not.",0.2012947653214861,False,1,6442 -2019-12-11 15:38:28.387,JupyterLab - how to find out which python venv is my session running on?,"I am running a venv-based kernel and I am having trouble getting a proper answer from the which python command from my JupyterLab notebook. When running this command from a terminal where I have my venv activated it works (it returns the proper venv path ~/venvs/my_venv/bin/python), but it does not work in the notebook. -!which python -returns the host path: -/usr/bin/python -I have already tried with os.system() and subprocess, but with no luck. -Does anyone know how to execute this command from the Jupyter notebook?","It sounds like you are starting the virtual environment inside the notebook, so that process's PATH doesn't reflect the modifications made by the venv.
Instead, you want the path of the kernel that's actually running: that's sys.executable.",1.2,True,2,6443 -2019-12-11 15:38:28.387,JupyterLab - how to find out which python venv is my session running on?,"I am running a venv-based kernel and I am having trouble getting a proper answer from the which python command from my JupyterLab notebook. When running this command from a terminal where I have my venv activated it works (it returns the proper venv path ~/venvs/my_venv/bin/python), but it does not work in the notebook. -!which python -returns the host path: -/usr/bin/python -I have already tried with os.system() and subprocess, but with no luck. -Does anyone know how to execute this command from the Jupyter notebook?","Maybe it's because you are trying to run the command outside the venv. -Try source /path/to/venv/bin/activate first and then try which python",-0.2012947653214861,False,2,6443 -2019-12-12 05:29:34.497,Can a db handle be passed from a perl script to a python script?,I've been trying to look for ways to call my python script from my perl script and pass the database handle from there while calling it. I don't want to establish another connection in my python script and just use the db handle which is being used by the perl script. Is it even possible and if yes then how?,"The answer is that almost all databases (Oracle, MySQL, PostgreSQL) will NOT allow you to pass open DB connections between processes (even parent/child). This is a limit of the database connection, which will usually be associated with a lot of state information. -If it were possible to 'share' such a connection, it would be a challenge for the system to know where to ship the results for queries sent to the database (will the result go to the parent, or to the child?). -Even if it were somehow possible to forward a connection between processes, trying to pass a complex object (a database connection is much more than the socket) between Perl (usually DBI) and Python is close to impossible.
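What does work is handing over plain connection parameters and letting the child open its own handle. A hedged sketch of the Python side (sqlite3 is only a stand-in for your real DB-API driver, and the argument layout is invented):

```python
import sqlite3

# Swap sqlite3 for your real DB-API driver (e.g. psycopg2, MySQLdb).
def connect_from_args(argv):
    """Open a fresh connection from parameters passed by the parent process."""
    db_spec = argv[1]  # e.g. what the Perl parent put on the command line
    return sqlite3.connect(db_spec)

# Simulate being launched as: python child.py :memory:
# (a real child script would use sys.argv here)
conn = connect_from_args(["child.py", ":memory:"])
print(conn.execute("SELECT 1").fetchone()[0])  # 1
conn.close()
```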
-The 'proper' solution is to pass the database connection string, username, and password to the Python process, so that it can establish its own connection.",1.2,True,1,6444 -2019-12-12 10:03:34.553,Error when installing Tensorflow - Python 3.8,"I'm new to programming and following a course where I must install Tensorflow. The issue is that I'm using Python 3.8, which I understand isn't supported by Tensorflow. -I've downloaded Python 3.6 but I don't know how to switch this to be my default version of python. -Would it be best to set up a venv using python 3.6 for my program and install Tensorflow in this venv? -Also, I'm using Windows and PowerShell.","It would have been nice if you had shared a screenshot of the error. -Though as far as I understand the case, -TensorFlow works in both 3.8 and 3.6; you just have to check that you have the 64-bit version, not the 32-bit one. -You can access both versions from their respective folders; no need to install a venv.",0.0,False,2,6445 -2019-12-12 10:03:34.553,Error when installing Tensorflow - Python 3.8,"I'm new to programming and following a course where I must install Tensorflow. The issue is that I'm using Python 3.8, which I understand isn't supported by Tensorflow. -I've downloaded Python 3.6 but I don't know how to switch this to be my default version of python. -Would it be best to set up a venv using python 3.6 for my program and install Tensorflow in this venv? -Also, I'm using Windows and PowerShell.","If you don't want to use Anaconda or virtualenv, then actually multiple Python versions can live side by side. I use Python38 as my default and Python35 for TensorFlow until they release it for Python38. If you wish to use the ""non-default"" Python, just invoke it with the full path of the python.exe (or create a shortcut/batch file for it).
Python then will take care of using the correct Python libs for that version.",0.0,False,2,6445 -2019-12-12 14:51:24.720,How to search pattern in big binary files efficiently,"I have several binary files, which are mostly bigger than 10GB. -In these files, I want to find patterns with Python, i.e. data between the pattern 0x01 0x02 0x03 and 0xF1 0xF2 0xF3. -My problem: I know how to handle binary data and how to use search algorithms, but due to the size of the files it is very inefficient to read the file completely first. That's why I thought it would be smart to read the file blockwise and search for the pattern inside a block. -My goal: I would like to have Python determine the positions (start and stop) of a found pattern. Is there a special algorithm or maybe even a Python library that I could use to solve the problem?","The common way when searching for a pattern in a large file is to read the file by chunks into a buffer that has the size of the read chunk + the size of the pattern - 1. -On the first read, you only search for the pattern in the read buffer; then you repeatedly copy size_of_pattern-1 chars from the end of the buffer to the beginning, read a new chunk after that and search the whole buffer. That way, you are sure to find any occurrence of the pattern, even if it starts in one chunk and ends in the next.",0.9950547536867304,False,1,6446 -2019-12-13 09:54:52.117,Add packages manually to PyCharm in Windows,"I'm using PyCharm. I'm trying to install Selenium but I have a problem with a proxy. I'm trying to add packages manually to my project/environment but I don't know how. -I downloaded the files for Selenium.
Could you tell me how to add this package to the project without using pip?","Open PyCharm. -Click on Settings (if you use a Mac, click on Preferences). -Click Project, -then click Project Interpreter. -Click the + button at the bottom of the window; in the window that appears, search for the Selenium package and install it.",0.0,False,1,6447 -2019-12-13 14:48:25.083,Implementing trained-model on camera,"I just trained my model successfully and I have some checkpoints from the training process. Can you explain to me how to use this data to recognize the objects live with the help of a webcam?","Congratulations :) -First of all, you use the model to recognize the objects; the model learned from the data - minor detail. -It really depends on what you are aiming for; as the comment suggests, you should probably provide a bit more information. -The simplest setup would probably be to take an image with your webcam, read the file, pass it to the model and get the predictions. If you want to do it live, you are going to take the stream from the webcam and then pass the images to the model.",0.0,False,1,6448 -2019-12-15 02:32:28.747,8Puzzle game with A* : What structure for the open set?,"I'm developing an 8-Puzzle game solver in Python lately and I need a bit of help. -So far I have finished coding the A* algorithm using Manhattan distance as a heuristic function. -The solver runs and finds ~60% of the solutions in less than 2 seconds. -However, for the other ~40%, my solver can take up to 20-30 minutes, as if it were running without the heuristic. -I started troubleshooting, and it seems that the openset I use is causing some problems: - -My open set is an array -Each iteration, I loop through the openset to find the lowest f(n) (complexity: O(n)) - -I have the feeling that O(n) is way too much to run a decent A* algorithm with that much memory used, so I wanted to know how I should manage to make the openset less of a ""time eater"" -Thank you for your help!
Have a good day -EDIT: FIXED -I solved my problem, which was in fact a double problem. -I tried to use a dictionary instead of an array, in which I stored the nodes by their f(n) value, and that allowed me to run the solver on the ~181000 possibilities of the game in a few seconds. -The second problem (I didn't know about it because of the first) is that I didn't know about the solvability of a puzzle game, and as I randomised the initial node, 50% of the puzzles couldn't be solved. That's why it took so long with the openset as an array.","The open set should be a priority queue. Typically these are implemented using a binary heap, though other implementations exist. -Neither an array-list nor a dictionary would be efficient. - -The closed set should be an efficient set, so usually a hash table or binary search tree, depending on what your language's standard library defaults to. -A dictionary (aka ""map"") would technically work, but it's conceptually the wrong data-structure because you're not mapping to anything. An array-list would not be efficient.",1.2,True,1,6449 -2019-12-15 14:24:50.807,My python scripts using selenium don't work anymore. Chrome driver version problem,"My scripts don't work anymore and I can't figure it out. -It is a Chrome version problem apparently... But I don't know how to switch to another version (not the latest?). Is there another way?
-My terminal indicates : -Traceback (most recent call last): -File ""/Users/.../Documents/SCRIPTS/PYTHON/Scripts/# -- coding: utf-8 --.py"", line 21, in - driver = webdriver.Chrome() -File ""/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/chrome/webdriver.py"", line 81, in init - desired_capabilities=desired_capabilities) -File ""/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py"", line 157, in init - self.start_session(capabilities, browser_profile) -File ""/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py"", line 252, in start_session - response = self.execute(Command.NEW_SESSION, parameters) -File ""/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/webdriver.py"", line 321, in execute - self.error_handler.check_response(response) -File ""/opt/anaconda3/lib/python3.7/site-packages/selenium/webdriver/remote/errorhandler.py"", line 242, in check_response - raise exception_class(message, screen, stacktrace) -selenium.common.exceptions.SessionNotCreatedException: Message: session not created: Chrome version must be between 71 and 75 -(Driver info: chromedriver=2.46.628411 (3324f4c8be9ff2f70a05a30ebc72ffb013e1a71e),platform=Mac OS X 10.14.5 x86_64) -Any idea?","This possibly happens, as your Chrome Browser or Chromium may be updated to newer versions automatically. But you still run your selenium scripts using the old version of the chromedriver. -Check the current version of your Google chrome or Chromium, then download the chromedriver for that specific version. -Then your scripts may work fine!",0.0,False,1,6450 -2019-12-15 19:29:16.370,Add new data to model sklearn: SGD,"I made models with sklearn, something like this: -clf = SGDClassifier(loss=""log"") -clf.fit(X, Y) -And then now I would like to add data to learn for this model, but with more important weight. I tried to use partial_fit with sample_weight bigger but not working. 
Maybe I don't use fit and partial_fit correctly; sorry, I'm a beginner... -If someone knows how to add new data I would be happy to hear it :) -Thanks for the help.","Is there another way to do an initial round of training and then add new data that is more important for the model? Keras? -Thanks guys",0.0,False,1,6451 -2019-12-16 16:28:11.120,RPA : How to do back-end automation using RPA tools?,"I would like to know how back-end automation is possible through RPA. -I'd be interested in solving this scenario relative to an Incident Management Application, in which authentication is required. The app provides: - -An option to download/export the report to a csv file -Sort the csv as per the requirement -Send an email with the updated csv to the team - -Please let me know how this is possible through RPA and what tools are -available in RPA to automate this kind of scenario?","RPA tools are designed to automate mainly front-end activities by mimicking human actions. This can be done easily using any RPA tool. -However, if you are interested in back-end automation, the first question would be whether the specific application has an option to interact in the way you want through the back-end/API. -If yes, in theory you could develop an RPA robot to run a pre-developed back-end script. However, if all you need is to run this script, creating a robot for this case may be redundant.",0.3869120172231254,False,2,6452 -2019-12-16 16:28:11.120,RPA : How to do back-end automation using RPA tools?,"I would like to know how back-end automation is possible through RPA. -I'd be interested in solving this scenario relative to an Incident Management Application, in which authentication is required.
The app provides: - -An option to download/export the report to a csv file -Sort the csv as per the requirement -Send an email with the updated csv to the team - -Please let me know how this is possible through RPA and what tools are -available in RPA to automate this kind of scenario?","There are several ways to do it. It is especially useful when your back-ends are 3rd-party applications where you do not have a lot of control. Many RPA products like Softomotive WinAutomation, Automation Anywhere, UiPath etc. provide file utilities, Excel utilities, DB utilities, the ability to call APIs, OCR capabilities etc., which you can use for back-end automation.",1.2,True,2,6452 -2019-12-16 22:57:12.960,google colab /bin/bash: 'gdrive/My Drive/path/myfile : Permission denied,"I'm trying to run a file (an executable) in Google Colab. I mounted the drive and everything is OK; however, whenever I try to run it using: -! 'gdrive/My Drive/path/myfile' -I get this output in the cell: -/bin/bash: 'gdrive/My Drive/path/myfile : Permission denied -Any ideas how to overcome the permissions?","You first need to make that file/folder executable: - chmod 755 file_name",1.2,True,1,6453 -2019-12-17 00:12:24.427,Accessing SAS(9.04) from Anaconda,"We are doing a POC to see how to access SAS data sets from Anaconda. -All the documentation I find says only SASpy works with SAS 9.4 or higher. -Our SAS version is 9.04.01M3P062415. -Can this be done? If yes, any documentation in this regard will be highly appreciated. -Many thanks in advance!","SAS datasets are ODBC compliant. SASPy is for running SAS code. If the goal is only to read SAS datasets, use ODBC or OleDb. I do not have Python code, but SAS has a lot of documentation on doing this using C#. Install the free SAS ODBC drivers and read the sas7bdat. The drivers are on the SAS website. -Writing is different, but reading should be fine.
You will lose some aspects of the dataset but the data will come through.",0.0,False,1,6454 -2019-12-17 09:45:19.723,How do you write to a file without changing its ctime?,"I was hoping that just using something like -with open(file_name, ""w"") as f: -would not change ctime if the file already existed. Unfortunately it does. -Is there a version which will leave the ctime intact? -Motivation: -I have a file that contains a list of events. I would like to know how old the oldest event is. It seems this should be the file's ctime.","That is because fopen works that way when using 'w' as an option. From the manual: - -""w"" write: Create an empty file for output operations. - If a file with the same name already exists, its contents are discarded and the file is treated as a new empty file. - -If you don't want to create a new file, use a+ to append to the file. This leaves the create date intact.",0.1016881243684853,False,2,6455 -2019-12-17 09:45:19.723,How do you write to a file without changing its ctime?,"I was hoping that just using something like -with open(file_name, ""w"") as f: -would not change ctime if the file already existed. Unfortunately it does. -Is there a version which will leave the ctime intact? -Motivation: -I have a file that contains a list of events. I would like to know how old the oldest event is. It seems this should be the file's ctime.","Beware: ctime is not the creation time but the inode change time. It is updated each time you write to the file, or change its meta-data, for example by renaming it.
So we have: - -atime : access time - updated each time the file is read -mtime : modification time - updated each time the file data is changed (the file is written to) -ctime : change time - updated each time something about the file is changed, either data or meta-data like the name or (hard) links - -I know of no way to reset the ctime field, because even utimes and its variants can only set the atime and mtime (and birthtime for file systems that support it, like BSD UFS2) - except of course changing the system time, with all the involved caveats...",1.2,True,2,6455 -2019-12-17 10:48:24.980,"Can you change the precision globally of a piece of code in Python, as a way of debugging it?","I am solving a system of non-linear equations using the Newton Raphson Method in Python. This involves using the solve(Ax,b) function (spsolve in my case, which is for sparse matrices) iteratively until the error or update reduces below a certain threshold. My specific problem involves calculating functions such as x/(e^x - 1), which are badly calculated for small x by Python, even using np.expm1(). -Despite these difficulties, it seems like my solution converges, because the error becomes of the order of 10^-16. However, the dependent quantities do not behave physically, and I suspect this is due to the precision of these calculations. For example, I am trying to calculate the current due to a small potential difference. When this potential difference becomes really small, this current begins to oscillate, which is wrong, because currents must be conserved. -I would like to globally increase the precision of my code, but I'm not sure if that's a useful thing to do since I am not sure whether this increased precision would be reflected in functions such as spsolve. I feel the same about using the Decimal library, which would also be quite cumbersome. Can someone give me some general advice on how to go about this or point me towards a relevant post? -Thank you!","You can try using mpmath, but YMMV.
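Before reaching for arbitrary precision, though, the troublesome term x/(e^x - 1) can usually be salvaged in double precision; np.expm1 alone is not quite enough because of the removable singularity at x = 0, which can be handled explicitly. A sketch (the function name is mine, not from the question):

```python
import numpy as np

def x_over_expm1(x):
    """Evaluate x / (e^x - 1) stably in double precision.

    np.expm1 avoids the cancellation of exp(x) - 1 for small x,
    and the removable singularity at x = 0 (limit value 1) is
    handled explicitly instead of producing 0/0.
    """
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)        # the x -> 0 limit
    nz = x != 0
    out[nz] = x[nz] / np.expm1(x[nz])
    return out
```

For x around 1e-12 this stays within roughly machine precision of the limit value 1, whereas the naive x / (np.exp(x) - 1) already loses several digits to cancellation.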
Generally, SciPy uses double precision. For the vast majority of cases, analyzing the sources of numerical errors is more productive than just trying to reimplement everything with higher-width floats.",1.2,True,1,6456 -2019-12-17 20:15:22.827,BigQuery - Update Tables With Changed/Deleted Records,"Presently, we send entire files to the Cloud (Google Cloud Storage) to be imported into BigQuery and do a simple drop/replace. However, as the file sizes have grown, our network team doesn't particularly like the bandwidth we are taking while other ETLs are also trying to run. As a result, we are looking into sending up changed/deleted rows only. -Trying to find the path/help docs on how to do this. Scope - I will start with a simple example. We have a large table with 300 million records. Rather than sending 300 million records every night, send over the X million that have changed/been deleted. I then need to incorporate the changed/deleted records into the BigQuery tables. -We presently use Node JS to move from Storage to BigQuery and Python via Composer to schedule native table updates in BigQuery. -Hope to get pointed in the right direction for how to start down this path.","Stream the full row on every update to BigQuery. -Let the table accommodate multiple rows for the same primary entity. -Write a view, e.g. table_last, that picks the most recent row. -This way you have all your queries near-realtime on real data. -You can occasionally deduplicate the table by running a query that rewrites the table with only the latest rows.
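The "latest row wins" logic of such a view, sketched in plain Python for clarity (column names are illustrative; on the BigQuery side this would be expressed as a SQL view over the streamed table):

```python
# Each streamed update is a full row; keep only the newest row per entity,
# which is what the table_last view would do on the BigQuery side.
rows = [
    {"id": 1, "updated_at": 1, "status": "new"},
    {"id": 2, "updated_at": 1, "status": "new"},
    {"id": 1, "updated_at": 2, "status": "closed"},  # later update wins
]

latest = {}
for row in rows:
    current = latest.get(row["id"])
    if current is None or row["updated_at"] > current["updated_at"]:
        latest[row["id"]] = row

print(sorted((r["id"], r["status"]) for r in latest.values()))
# → [(1, 'closed'), (2, 'new')]
```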
-Another approach is to have one final table and one table which you stream into, with a MERGE statement scheduled to run every X minutes that writes the updates from the streamed table to the final table.",0.3869120172231254,False,1,6457 -2019-12-21 00:05:12.757,"In Databricks python notebook, how to import a file1 objects resides in different directory than the file2?","Note: I did research on this over the web but all of the results point to solutions which work on-prem/desktops. This case is on a Databricks notebook; I referred to the Databricks help guide but could not find the solution. -Dear all, -On my local desktop I used to import the objects from other python files by referring to their absolute path such as -""from dir.dira.dir0.file1 import *"" -But in a Databricks python notebook I'm finding it difficult to crack this step. Any help is appreciated. -Below is how my command shows, -from dbfs.Shared.ABC.models.NJ_WrkDir.test_schdl import * -also tried below ways, none of them worked -from dbfs/Shared/ABC/models/NJ_WrkDir/test_schdl import * -from \Shared\ABC\models\NJ_WrkDir\test_schdl import * -from Shared/ABC/models/NJ_WrkDir/test_schdl import * -from Shared.ABC.models.NJ_WrkDir.test_schdl import * -The error messages show: -ModuleNotFoundError: No module named 'Shared -ModuleNotFoundError: No module named 'dbfs -SyntaxError: unexpected character after line continuation character - File """", line 2 - from \Shared\ABC\models\NJ_WrkDir\test_schdl import * - ^ -Thank you!","The solution is to include this command in the child Databricks python notebook: -""%run /path/parentfile"" -(the notebook from which we want to import the objects)",0.0,False,1,6458 -2019-12-21 11:10:31.507,Calculate mean across one specific dimension of a 4D tensor in Pytorch,"I have a PyTorch video feature tensor of shape [66,7,7,1024] and I need to convert it to [1024,66,7,7]. How to rearrange a tensor shape? Also, how to perform mean across dimension=1?
i.e., after performing the mean over the dimension with size 66, I need the tensor to be [1024,1,7,7]. -I have tried to calculate the mean of dimension=1 but I failed to replace it with the mean value. And I could not imagine a 4D tensor in which one dimension is replaced by its mean. -Edit: - I tried torch.mean(my_tensor, dim=1). But this returns me a tensor of shape [1024,7,7]. The 4D tensor is being converted to 3D. But I want it to remain 4D with shape [1024,1,7,7]. -Thank you very much.","The first part of the question has been answered in the comments section: we can use tensor.permute(3, 0, 1, 2) to convert the tensor to the shape [1024,66,7,7] (PyTorch's transpose only swaps two dimensions, so permute is the right call here). -Now the mean over the temporal dimension can be taken by -torch.mean(my_tensor, dim=1) -This will give a 3D tensor of shape [1024,7,7]. -To obtain a tensor of shape [1024,1,7,7], I had to unsqueeze in dimension=1: -tensor = tensor.unsqueeze(1)",1.2,True,1,6459 -2019-12-21 17:16:30.333,False Positive Rate in Confusion Matrix,"I was trying to manually calculate TPR and FPR for the given data. But unfortunately I don't have any false positive cases in my dataset, and not even any true positive cases. -So I am getting a divide-by-zero error in pandas. I have an intuition that fpr=1-tpr. Please let me know if my intuition is correct; if not, let me know how to fix this issue. -Thank you","It is possible to have FPR = 1 with TPR = 1 if your prediction is always positive no matter what your inputs are. -TPR = 1 means we correctly predict all the positives. FPR = 1 is equivalent to always predicting positive when the condition is negative. -As a reminder: - -FPR = 1 - TNR = [False Positives] / [Negatives] -TPR = 1 - FNR = [True Positives] / [Positives]",0.0,False,1,6460 -2019-12-22 01:56:37.603,Import python modules in a completely different directory,"I am writing a script that automates the use of other scripts.
I've set it up to automatically import other modules from .py files stored in a directory called dependencies using importlib.import_module(). -Originally, I had dependencies as a subdirectory of the root of my application, and this worked fine. However, it's my goal to have the dependencies folder stored potentially anywhere a user would like. In my personal example, it's located in my Dropbox folder while my script is run from a different directory entirely. -I cannot for the life of me seem to get the modules to be detected and imported anymore and I'm out of ideas. -Would someone have a better idea of how to achieve this? -This is an example of the path structure: - -E: -|_ Scripts: -| |_ Mokha.py -| -|_ Dropbox: -| |_ Dependencies: -| |_ utils.py - -Here's my code for importing: (I'm reading in a JSON file for the dependency names and looping over every item in the list) - -def importPythonModules(pythonDependencies): - chdir(baseConfig[""dependencies-path""]) - for dependency in pythonDependencies: - try: - module = importlib.import_module(dependency) - modules[dependency] = module - print(""Loaded module: %s"" % (dependency)) - except ModuleNotFoundError as e: - print(e) - raise Exception(""Error importing python dependencies."") - chdir(application_path) - -The error I get is No module named 'utils' -I've tried putting an __init__.py in both the dependencies folder, the root of my Dropbox, and both at the same time, to no avail. -This has got to be possible, right?","UPDATE: I solved it. -sys.path.append(baseConfig['dependencies-path']) -Not super happy with the solution but it'll work for now.",0.0,False,1,6461 -2019-12-22 06:06:18.430,How to put images in a linked list in python,"I have created a class Node having two data members: data and next. -I have created another class LinkedList having one data member: head -Now I want to store an image in the node but I have no idea how to do it.
The syntax for performing this operation would be very helpful.","PIL is the Python Imaging Library, which provides the Python interpreter with image editing capabilities. -Use from PIL import Image after installing. -Windows: Download the appropriate Pillow package according to your Python version. Make sure to download according to the Python version you have. -pip install Pillow for Linux users. -Then you can easily add an image to your linked list by assigning it to a variable.",0.0,False,1,6462 -2019-12-22 23:43:11.057,Why did pip stop working after reinstalling python?,"I had Python 3.7.4 in D:\python3.7.4 before, but for some reason I uninstalled it today, then I changed the folder name to D:\python3.7.5 and installed Python 3.7.5 in it. Then, when I try to use pip in cmd I get a fatal error saying - -Unable to create process using '""d:\python3.7.4\python.exe"" ""D:\Python3.7.5\Scripts\pip.exe""' - -I tried to change all things containing python3.7.4 in the environment variables to python3.7.5 but the same error still exists. Does anyone know how to fix this? -Thanks","Try to create a new folder and run the installation there. -This should work, as I did the same myself to install 2 different versions before.",0.0,False,1,6463 -2019-12-23 08:37:50.350,"How to create a registration form in Django that is divided into two parts, such that one can fill up the second part only after email verification?","I have the logic for email verification, but I am not sure how to make it such that only after clicking the link on the verification email, the user is taken to the second page of the form, and only after filling the second part is the user saved.","I would say that a much better idea is to save the user to the database anyway, but mark them as inactive (a simple boolean field in the model will be enough).
Upon registration, before email confirmation, mark them as inactive; as soon as they confirm the email and fill in the second part of your registration form that you mentioned, change that boolean value to true. If you don't want to keep inactive users' data in your database, you can set up, for example, a cron job that will clean out users that haven't confirmed their email for a few days.",1.2,True,1,6464 -2019-12-23 10:03:28.647,python multiprocess read data from disk,"This has confused me for a long time. -My program has two processes, both reading data from disk; the disk's max read speed is 10 MB/s. -1. If two processes each read 10 MB of data, do the two processes spend the same time as one process reading twice? -2. If two processes each read 5 MB of data, the two processes spend 1 s reading, and one process reading twice also spends 1 s. I know multiprocessing can save time on I/O, but if the time spent in I/O is the same, how does multiprocessing save time?","It's not possible to increase disk read speed by adding more threads. With 2 threads reading you will get at best 1/2 the speed per thread (in practice even less), with 3 threads 1/3 the speed, etc. -With disk I/O it is the difference between sequential and random access speed that is really important. For example, sequential read speed can be 10 MB/s, and random read just 10 KB/s. This is the case even with the latest SSD drives (although the ratio may be less pronounced). -For that reason you should prefer to read from disk sequentially from only one thread at a time. Reading the file in 2 threads in parallel will not only reduce the speed of each read by half, but will reduce it further because of non-sequential (interleaved) disk access.
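A sketch of that single-reader pattern: one process reads the file sequentially in chunks (the chunk size here is an arbitrary choice), and the chunks can then be handed to worker processes, instead of letting several processes seek on the same disk and pay for interleaved access.

```python
def read_chunks(path, chunk_size=1 << 20):
    # One sequential reader: yields the file in order, chunk by chunk,
    # keeping the disk access pattern strictly sequential.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```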
- -Note, however, that 10 MB is really not much; modern OSes will prefetch the entire file into the cache, and any subsequent reads will appear instantaneous.",0.0,False,1,6465 -2019-12-23 20:07:26.647,How to move Python virtualenv to different system (computer) and use packages present in Site-packages,"I am making a Python 3 application (Flask based) and for that I created a virtualenv on my development system, installed all packages via pip, and my app worked fine. -But when I moved that virtualenv to a different system (with Python 3 installed) and ran my application with the absolute path of my virtualenv python (c:/......./myenv/Scripts/python.exe main.py), it threw errors that packages are not installed. -I activated the virtualenv and used pip freeze and no packages were listed as installed. -But under the virtualenv there is 'Site-Packages' (myenv -> lib -> site-packages); all my installed packages were present there. -My question is how to use the packages that are inside 'site-packages' even after moving the virtualenv to a different system in Python 3.",Maybe you can consider using pipenv to manage the virtualenvs on different computers or environments.,0.0,False,2,6466 -2019-12-23 20:07:26.647,How to move Python virtualenv to different system (computer) and use packages present in Site-packages,"I am making a Python 3 application (Flask based) and for that I created a virtualenv on my development system, installed all packages via pip, and my app worked fine. -But when I moved that virtualenv to a different system (with Python 3 installed) and ran my application with the absolute path of my virtualenv python (c:/......./myenv/Scripts/python.exe main.py), it threw errors that packages are not installed. -I activated the virtualenv and used pip freeze and no packages were listed as installed. -But under the virtualenv there is 'Site-Packages' (myenv -> lib -> site-packages); all my installed packages were present there.
-My question is how to use the packages that are inside 'site-packages' even after moving the virtualenv to a different system in Python 3.","You must not copy & paste a venv, even on the same system. -If you install a new package in the copied venv, it will be installed in the original venv, because settings are bound to a specific directory.",0.0,False,2,6466 -2019-12-24 00:29:06.477,How do I allow a file to be accessible from all directories?,"I have a python program which is an interpreter, for a language that I have made. It is called cbc.py, and it is in a certain directory. Now, I want to know how I can call it, along with sys.argv arguments (like python3 cbc.py _FILENAME_TO_RUN_) in any directory. I have done research on the .bashrc file and on the PATH variable, but I can't find anything that really helps me with my problem. Could someone please show me how to resolve my problem?","You need to make your script executable first and then add it to your PATH. -If you have your python script at ~/path/to/your/script/YOUR_SCRIPT_NAME: - -add #!/usr/bin/python3 at the top of your script, -give executable permission to your script using chmod a+x YOUR_SCRIPT_NAME, -edit ~/.bashrc to add your script path, e.g. echo 'export PATH=""$HOME/path/to/your/script:$PATH""' >> ~/.bashrc, -restart or re-login or run source ~/.bashrc, -now you can access your script via YOUR_SCRIPT_NAME anywhere.
-But for Applications, I have no clue. BluePrism has mainly two different App spying modes, which are WIN32 and Active Accessibility. How can I do this type of spying and interacting with an application outside of Blueprism, preferably using an open-source language? - -(only interested in windows-based apps for now) -The aim is of course to create robots able to navigate the apps as a human would do.","There is a free version of Blue Prism now :) Also Blue Prism uses Win32, Active Accessibility and UI Automation, which is a newer form of the older Active Accessibility. -To do this yourself without looking into Blue Prism you would need to know how to use UIA with C#/VB.NET or C++. There are libraries; however, given that Blue Prism now has a free version, I would recommend using that. Anything specific can be developed within a code stage within Blue Prism.",0.0,False,1,6468 -2019-12-25 06:36:41.020,how to use SVM to classify if the shape of features for each sample is matrix? Is it simply to reshape the matrix to long vector?,I have 120 samples and the shape of the features for each sample is a matrix of 15*17. How to use SVM to classify? Is it simply to reshape the matrix to a long vector?,"Yes, that would be the approach I would recommend. It is essentially the same procedure that is used when utilizing images in image classification tasks, since each image can be seen as a matrix. -So what people do is to write the matrix as a long vector, consisting of every column concatenated to one another. -So you can do the same here.",0.0,False,1,6469 -2019-12-25 14:11:55.277,How can I fetch data from a website to my local Django Website?,"I am rather new to Django and I need to fetch some data from a website. For example, I want the top ten posts of the day from Reddit.
I know of a ""request"" module for the same.But I am not sure where and how should I implement it and will it be important to store the data in a model or not.","You can create a helper class named like network.py and implement functions to fetch the data. -If you want to store them in the database you can create appropriate models otherwise you can directly import and call the function and use the data returned from network.py in your view.",0.0,False,1,6470 -2019-12-25 14:33:59.377,How to upload a file to pythonanywhere using react native?,"I am trying to build an app through react-native wherein I need to upload a JSON file to my account folder hosted on pythonanywhere. -Can you please tell me how can I upload a JSON file to the pythonanywhere folder through react-native?",The web framework that you're using will have documentation about how to create a view that can accept filee uploads. Then you can use the fetch API in your javascript to send the file to it.,0.6730655149877884,False,1,6471 -2019-12-26 16:59:19.517,Pycharm can't find python.exe,"No Python at 'C:\Users\Mr_Le\AppData\Local\Programs\Python\Python38-32\python.exe' -Any time I try to run my code it keeps prompting me this ^^^ but I had recently deleted Python 3.8 to downgrade to Python 3.6 and just installed Python 3.6 to run pytorch. -Does anyone know how to fix this?","For other users: just check the ""C:\Users<>\AppData\Local\Programs\Python"" folder on your PC and remove any folders belonging to previous installations of Python. Also check if environmental variables are correct.",0.0,False,2,6472 -2019-12-26 16:59:19.517,Pycharm can't find python.exe,"No Python at 'C:\Users\Mr_Le\AppData\Local\Programs\Python\Python38-32\python.exe' -Any time I try to run my code it keeps prompting me this ^^^ but I had recently deleted Python 3.8 to downgrade to Python 3.6 and just installed Python 3.6 to run pytorch. -Does anyone know how to fix this?","1.In your windows search bar find python 3.9.8. 
-[Searching for Windows][1] -[1]: https://i.stack.imgur.com/vNMxT.png - -Right click on your the app - -Click on App Settings -[Your settings will populate][2] -[2]: https://i.stack.imgur.com/E4yM3.png - -Scroll down on this page -[][3] -[3]: https://i.stack.imgur.com/HFc1J.png - -Hit the Repair box - -Try to run your python script again after restarting all your programs",0.0,False,2,6472 -2019-12-26 20:34:44.420,Access output of intermediate layers in Tensor-flow 2.0 in eager mode,"I have CNN that I have built using on Tensor-flow 2.0. I need to access outputs of the intermediate layers. I was going over other stackoverflow questions that were similar but all had solutions involving Keras sequential model. -I have tried using model.layers[index].output but I get - -Layer conv2d has no inbound nodes. - -I can post my code here (which is super long) but I am sure even without that someone can point to me how it can be done using just Tensorflow 2.0 in eager mode.","The most straightforward solution would go like this: -mid_layer = model.get_layer(""layer_name"") -you can now treat the ""mid_layer"" as a model, and for instance: -mid_layer.predict(X) -Oh, also, to get the name of a hidden layer, you can use this: -model.summary() -this will give you some insights about the layer input/output as well.",0.0,False,1,6473 -2019-12-27 02:38:58.220,"Given a midpoint, gradient and length. How do I plot a line segment of specific length?","I am trying to plot the endpoints of the line segment which is a tangent to a circle in Python. -I know the circle has center of (A, B), and a radius of r. The point at which I want to find the tangent at is (a, b). I want the tangent to be a segment of length c. How do I write a code which allows me to restrict the length of the line? -I have the equation of the tangent to be y = (-(B - b)/(A - a))(x - a) + b. So I know how to plot the two endpoints if the length of the segment did not matter. 
But how would I determine the x-coordinates of the point? Is there some sort of command which allows me to limit the length of a line? -Thank you!!!","I don't know thonny, and it sounds like your implementation will depend a bit on the context of this computation. -That said, it sounds like what you're looking for is the two points of intersection of your tangent line and a (new, conceptual) circle with a given radius centered on (a,b). You should be able to put together the algebraic expression for those points, and simplify it into something tidy. Watch out for special cases though, where the slope of the tangent is undefined (or where it's zero).",0.0,False,1,6474 -2019-12-27 06:26:33.787,How to match duplicates and if match how to remove second one in list in python?,"I have the list of APIs, -Input = [WriteConsoleA, WSAStartup, RegCloseKey, RegCloseKey, RegCloseKey, NtTerminateProces, RegCloseKey] -expected output = [WriteConsoleA, WSAStartup, RegCloseKey, NtTerminateProces, RegCloseKey]",you can simply convert the list to a set, i.e. set(Input), to remove all the duplicates.,0.0,False,1,6475 -2019-12-27 09:10:51.553,TextBlob Naive Bayes classifier for neutral tweets,"I am doing a small project on sentiment analysis using TextBlob. I understand there are 2 ways to check the sentiment of a tweet: - -Tweet polarity: Using it I can tell whether the tweet is positive, negative or neutral -Training a classifier: I am using this method where I am training a TextBlob Naive Bayes classifier on positive and negative tweets and using the classifier to classify a tweet either as 'positive' or 'negative'. - -My question is, using the Naive Bayes classifier, can I also classify the tweet as 'neutral'? In other words, can the 'sentiment polarity' defined in option 1 somehow be used in option 2?","If you have only two classes, Positive and Negative, and you want to predict if a tweet is Neutral, you can do so by predicting class probabilities. 
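As an illustrative sketch of that thresholding idea: the function name and the 0.4/0.6 neutral band below are assumptions, not part of TextBlob, and p_positive would come from whatever classifier you use that exposes class probabilities (e.g. TextBlob's prob_classify).

```python
# Map a positive-class probability to a three-way sentiment label.
# The 0.4/0.6 band for 'neutral' is an illustrative assumption; tune it
# on held-out data for your own tweets.
def label_from_probability(p_positive, low=0.4, high=0.6):
    if p_positive >= high:
        return 'positive'
    if p_positive <= low:
        return 'negative'
    return 'neutral'

print(label_from_probability(0.8))   # confident -> positive
print(label_from_probability(0.5))   # uncertain -> neutral
```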
-For example, a tweet predicted as 80% Positive remains Positive. However, a tweet predicted as 50% Positive could be Neutral instead.",0.0,False,1,6476 -2019-12-27 12:55:42.747,Sentiment Classification using Doc2Vec,"I am confused as to how I can use Doc2Vec(using Gensim) for IMDB sentiment classification dataset. I have got the Doc2Vec embeddings after training on my corpus and built my Logistic Regression model using it. How do I use it to make predictions for new reviews? sklearn TF-IDF has a transform method that can be used on test data after training on training data, what is its equivalent in Gensim Doc2Vec?","To get a vector for an unseen document, use vector = model.infer_vector([""new"", ""document""]) -Then feed vector into your classifier: preds = clf.predict([vector]).",0.2012947653214861,False,1,6477 -2019-12-28 06:27:45.017,How can I embed a python file or code in HTML?,"I am working on an assignment and am stuck with the following problem: -I have to connect to an oracle database in Python to get information about a table, and display this information for each row in an .html-file. Hence, I have created a python file with doctype HTML and many many ""print"" statements, but am unable to embed this to my main html file. In the next step, I have created a jinja2 template, however this passes the html template data (incl. ""{{ to be printed }}"") to python and not the other way round. I want to have the code, which is executed in python, to be implemented on my main .html file. -I can't display my code here since it is an active assignment. I am just interested in general opinions on how to pass my statements from python (or the python file) into an html file. I can't find any information about this, only how to escape html with jinja. -Any ideas how to achieve this? -Many thanks.","You can't find information because that won't work. Browsers cannot run Python, meaning they won't be able to run your code if you embed it into an html file. 
The setup that you need is a backend server that is running python (flask is a good framework for that) that will do some processing depending on the request that is being sent to it. It will then send some data to a template processor (Jinja in this case works well with Flask). This will in turn put the data right into the html page you want to generate. Then this html page will be returned to the client making the request, which is something the browser will understand and will show to the user. If you want to do some computation dynamically on the browser you will need to use JavaScript instead, which is something a browser can run (since it's in a sandbox). -Hope it helps!",0.0,False,2,6478 -2019-12-28 06:27:45.017,How can I embed a python file or code in HTML?,"I am working on an assignment and am stuck with the following problem: -I have to connect to an oracle database in Python to get information about a table, and display this information for each row in an .html-file. Hence, I have created a python file with doctype HTML and many many ""print"" statements, but am unable to embed this to my main html file. In the next step, I have created a jinja2 template, however this passes the html template data (incl. ""{{ to be printed }}"") to python and not the other way round. I want to have the code, which is executed in python, to be implemented on my main .html file. -I can't display my code here since it is an active assignment. I am just interested in general opinions on how to pass my statements from python (or the python file) into an html file. I can't find any information about this, only how to escape html with jinja. -Any ideas how to achieve this? -Many thanks.","Thanks for the suggestions. What I have right now is a perfectly working python file containing jinja2 and the html output I want, but as a python file. 
When executing the corresponding html template, the curly expressions {{name}} are displayed like this, and not as the functions executed within the python file. Hence, I still have to somehow tell my main html file to execute this python script on my webpage, which I have not managed so far. -Unfortunately, it seems that we are not allowed to use flask, only jinja and django.",0.0,False,2,6478 -2019-12-30 03:00:38.240,How to give an AI controls in a video game?,"So I made Pong using PyGame and I want to use genetic algorithms to have an AI learn to play the game. I want it to only know the location of its paddle and the ball and controls. I just don't know how to have the AI move the paddle on its own. I don't want to do like: ""If the ball is above you, go up."" I want it to just try random stuff until it learns what to do. -So my question is, how do I get the AI to try controls and see what works?","So you'd want as the AI input the position of the paddle, and the position of the ball. The AI output is two boolean outputs: whether the AI should press the up or down button on the next simulation step. -I'd also suggest adding another input value, the ball's velocity. Otherwise, you would've likely needed to add another input which is the location of the ball in the previous simulation step, and a much more complicated middle layer for the AI to learn the concept of velocity.",0.0,False,1,6479 -2019-12-30 07:27:06.787,How to get recent data from bigtable?,"I need to get 50 latest data (based on timestamp) from BigTable. -I get the data using read_row and filter using CellsRowLimitFilter(50). But it didn't return the latest data. It seems the data isn't sorted based on timestamp? how to get the latest data? -Thank you for your help.",Turns out the problem was on the schema. It wasn't designed for timeseries data. I should have created the rowkey with id#reverse_timestamp and the data will be sorted from the latest. 
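A minimal sketch of such a reversed-timestamp rowkey (the MAX_TS constant, the zero-padding width, and the id format here are illustrative assumptions, not Bigtable requirements):

```python
# Build a rowkey of the form id#reverse_timestamp so that a plain
# lexicographic scan returns the newest rows first.
MAX_TS = 10**13  # larger than any epoch-millis timestamp we expect

def make_rowkey(row_id, timestamp_ms):
    reverse_ts = MAX_TS - timestamp_ms
    # Zero-pad so string comparison matches numeric order.
    return f'{row_id}#{reverse_ts:013d}'

# Newer timestamps produce lexicographically smaller keys,
# so they sort first in a scan:
newer = make_rowkey('sensor1', 1577700000000)
older = make_rowkey('sensor1', 1577600000000)
print(newer < older)  # True
```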
Now I can use CellsRowLimitFilter(50) and get 50 latest data.,1.2,True,1,6480 -2019-12-30 11:55:40.920,Will pyqt5 connected with MySQL work on other computers without MySQL?,"I am building a GUI software using PyQt5 and want to connect it with MySQL to store the data. -In my computer, it will work fine, but what if I transfer this software to other computer who doesn't have MySQL, and if it has, then it will not have the same password as I will add in my code (using MySQL-connector)a password which I know to be used to connect my software to MySQL on my PC. -My question is, how to handle this problem???","If you want your database to be installed with your application and NOT shared by different users using your application, then using SQLite is a better choice than MySQL. SQLite by default uses a file that you can bundle with your app. That file contains all the database tables including the connection username/password.",1.2,True,1,6481 -2020-01-03 03:03:26.107,"How could I run tensorflow on windows 10? I have the gpu Geforce gtx 1650. Can I run tensorflow on it? if yes, then how?","I want to do some ML on my computer with Python, I'm facing problem with the installation of tensorflow and I found that tensorflow could work with GPU, which is CUDA enabled. I've got a GPU Geforce gtx 1650, will tensorflow work on that. -If yes, then, how could I do so?","Here are the steps for installation of tensorflow: - -Download and install the Visual Studio. -Install CUDA 10.1 -Add lib, include and extras/lib64 directory to the PATH variable. -Install cuDNN -Install tensorflow by pip install tensorflow",0.0,False,1,6482 -2020-01-06 05:04:25.633,I want my python tool to have a mechanism like whenever anyone runs the tool a pop up should come up as New version available please use the latest,"I Have created a python based tool for my teammates, Where we group all the similar JIRA tickets and hence it becomes easier to pick the priority one first. 
But the problem is every time I make some changes I have to ask people to get the latest one from the Perforce server. So I am looking for a mechanism where whenever anyone uses the tool a pop up should come up as ""New version available"" please install. -Can anyone help how to achieve that?","I have an idea: you can use the requests module to fetch your website (put the version number in the page) and get the newest version. -Then get the version on the user's computer and compare it to the official version. If it is lower than the official version, pop up a window to remind the user to update.",0.2655860252697744,False,3,6483 -2020-01-06 05:04:25.633,I want my python tool to have a mechanism like whenever anyone runs the tool a pop up should come up as New version available please use the latest,"I Have created a python based tool for my teammates, Where we group all the similar JIRA tickets and hence it becomes easier to pick the priority one first. But the problem is every time I make some changes I have to ask people to get the latest one from the Perforce server. So I am looking for a mechanism where whenever anyone uses the tool a pop up should come up as ""New version available"" please install. -Can anyone help how to achieve that?","You could maintain the latest version code/tool on your server and have your tool check it periodically against its own version code. If the version code is higher on the server, then your tool needs to be updated and you can tell the user accordingly or raise an appropriate pop-up recommending an update.",0.1352210990936997,False,3,6483 -2020-01-06 05:04:25.633,I want my python tool to have a mechanism like whenever anyone runs the tool a pop up should come up as New version available please use the latest,"I Have created a python based tool for my teammates, Where we group all the similar JIRA tickets and hence it becomes easier to pick the priority one first. 
But the problem is every time I make some changes I have to ask people to get the latest one from the Perforce server. So I am looking for a mechanism where whenever anyone uses the tool a pop up should come up as ""New version available"" please install. -Can anyone help how to achieve that?","On startup, or periodically while running, you could have the tool query your Perforce server and check the latest version. If it doesn't match the version currently running, then you would show the popup, and maybe provide a download link. -I'm not personally familiar with Perforce, but in Git for example you could check the hash of the most recent commit. You could even just include a file with a version number that you manually increment every time you push changes.",0.2655860252697744,False,3,6483 -2020-01-06 13:14:31.660,How to get the Performance Log of another tab from Selenium using Python?,"I'm using Selenium with Python API and Chrome to do the following: - -Collect the Performance Log; -Click some tags to get into other pages; - -For example, I click a href in Page 'A', which makes the browser open a new window to load another URL 'B'. -But when I use driver.get_log('performance') to get the performance log, I can only get the log of Page 'A'. Even though I switch to the window of 'B' as soon as I click the href, some log entries of the page 'B' will be lost. -So how can I get the whole performance log of another page without setting the target of to '_top'?","I had the same problem and I think it is because the driver does not immediately switch to a new window. -I switched to page ""B"" and reloaded this page, then used get_log and it worked.",0.0,False,1,6484 -2020-01-08 06:15:06.680,What is the difference between iterdescendants() and iterchildren() in lxml?,"In LXML python library, how to iterate? and what is the difference between iterdescendants() and iterchildren() in lxml python ?",when you use iterchildren() you iterate over first-level children. 
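In lxml, iterchildren() walks only the direct children of an element while iterdescendants() walks its whole subtree. As a dependency-free sketch of the same distinction, here it is with the standard library's xml.etree.ElementTree, where list(elem) plays the role of iterchildren() and elem.iter() (minus the element itself) plays the role of iterdescendants():

```python
# Direct children vs. all descendants, illustrated with the stdlib's
# xml.etree rather than lxml itself (the behaviour of lxml's
# iterchildren()/iterdescendants() is analogous).
import xml.etree.ElementTree as ET

root = ET.fromstring('<a><b><c/></b><d/></a>')

children = [e.tag for e in list(root)]                       # direct children only
descendants = [e.tag for e in root.iter() if e is not root]  # every level below root

print(children)     # ['b', 'd']
print(descendants)  # ['b', 'c', 'd']
```

Note that unlike iterdescendants(), elem.iter() includes the element itself, hence the `e is not root` filter above.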
When you use iterdescendants() you iterate over children and children of children, i.e. all descendants.",0.0,False,1,6485 -2020-01-08 16:40:23.643,NS3 - python.h file can not be located compilation error,"I have included Python.h in my module header file and it was built successfully. -Somehow when I enabled-examples configuration to compile the example.cc file, which includes the module header file. It reported the Python.h file can not be found - fatal error. -I have no clue at the moment what is being wrong. -Could anyone give a hint? It is for the NS3(Network Simulator 3) framework.","thanks for writing back to me:). -I solved the issue by adding the pyembed feature in the wscript within the same folder as my.cc file. -Thanks again:). -J.",0.0,False,1,6486 -2020-01-09 15:38:35.540,Anaconda prompt launches Visual Studio 2017 when run .py Files,"Traditionally I've used Notepad ++ along with the Anaconda prompt to write and run scripts locally on my Windows PC. -I had my PC upgraded and thought I'd give Virtual Studio Code a chance to see if I liked it. -Now, every time I try to execute a .py file in the Anaconda prompt Visual Studio 2017 launches. I hate this and can't figure out how to stop it. -I've tried the following: - -Uninstalling Virtual Studio Code. -Changing environments in Anaconda. -Reinstalling Anaconda. I did not check the box for the %PATH option. -Reboots at every step. - -On my Windows 10 laptop Visual Studio 2017 doesn't appear in my Apps and Features to uninstall. I've tried Googling and am stuck. -The programs involved are: -Windows 10 Professional -Visual Studio 2017 -Anaconda version 2019.10 Build Channel py37_0 -Can someone help me figure out how to stop this?","How were you running the scripts before? python script.py or only script.py? -If it is the latter, what probably happened is that Windows has associated .py files to Visual Studio. 
Right click on the file, go to Open With, then select Python if you want to run them, or Notepad++ if you want to edit them.",1.2,True,1,6487 -2020-01-09 23:15:20.873,AWS Lambda - Run Lambda multiple times with different environment variables,"I have an AWS Lambda that uses 2 environment variables. I want to run this lambda up to several hundred times a day, however I need to change the environment variables between runs. -Ideally, I would like something where I could list a set of variable pairs and run the lambdas on a schedule -The only way I see of doing this is to have separate lambdas and setting the environment variables for each manually -Any ideas about how to achieve this?","You could use an SQS queue for this. Instead of your scheduler initiating the Lambda function directly, it could simply send a message with the two data values to an SQS queue, and the SQS queue could be configured to trigger the Lambda. When triggered, the Lambda will receive the data from the message. So, the Lambda function does not need to change. -Of course, if you have complete control over the client that generates the two data values then that client could also simply invoke the Lambda function directly, passing the two data values in the payload.",1.2,True,1,6488 -2020-01-10 07:19:16.663,convert float64 to int (excel to pandas),"I have imported excel file into python pandas. but when I display customer numbers I get in float64 format i.e - -7.500505e+09 , 7.503004e+09 - how do I convert the column containing these numbers","int(yourVariable) will cast your float64 to an integer. -Is this what you are looking for?",0.0,False,1,6489 -2020-01-10 09:01:43.203,Camera Calibration basic doubts,"I am starting out with computer vision and opencv. I would like to try camera calibration for the images that I have to see how it works. I have a very basic doubt. 
-Should I use the same camera from which the distorted images were captured, or can I use any camera to perform my camera calibration?","Camera calibration is supposed to be done with the same camera. The purpose of calibrating a camera is to understand how much distortion the image has and to correct it before we use it to take actual pics. Even if you no longer have the original camera, checkerboard images taken from that camera are sufficient. Otherwise, look for a similar camera with features as similar as possible (focal length etc.) to take checkerboard images for calibration and this will somewhat serve your purpose.",0.3869120172231254,False,1,6490 -2020-01-11 08:50:04.607,NLP AI logic - dialogue sequences with multiple parameters per sequence architecture,"I have a dataset of dialogues with various parameters (like if it is a question, an action, what emotion it conveys etc ). I have 4 different ""informations"" per sentence. -let's say A replies to B -A has an additive parameter in a different list for its possible emotions (1.0.0.0) (angry.happy.sad.bored) - and another list for its possible actions (1.0.0.0) (question.answer.inpulse.ending) -I know how to build a regular RNN model (from the tutorials and papers I have seen here and there), but I can't seem to find a ""parameters"" architecture. -Should I train multiple models ? (like sentence A --> emotions, then sentence B --> actions) then train the main RNN separately and predict the result through all models ? -or is there a way to build one single model with all the information stored right at the beginning ? -I apologize for my approximate English, which makes my search for answers even more difficult.","From the way I understand your question, you want to find emotions/actions based on a particular sentence. Sentence A has emotions as labels and Sentence B has actions as labels. Each of the labels has 4 different values with a total of 8 values. 
And you are confused about how to implement labels as input. -Now, you can give all these labels their separate classes. Like emotions will have labels (1.2.3.4) and actions will have labels (5.6.7.8). Then concat both the datasets and run Classification through RNN. -If you need to pass emotions/actions as input, then add them to the vectorized matrix. Suppose you have Sentence A stating ""Today's environment is very good"" with happy emotion. Add the emotion to its matrix row, like this: -Today | Environment | very | good | health -1 | 1 | 1 | 1 | 0 -Now add emotion such that: -Today | Environment | very | good | health | emotion -1 | 1 | 1 | 1 | 0 | 2 (for happy) -I hope this answers your question.",1.2,True,1,6491 -2020-01-11 20:43:56.263,How to identify the message in a delivery notification?,"In pika, I have called channel.confirm_delivery(on_confirm_delivery) in order to be informed when messages are delivered successfully (or fail to be delivered). Then, I call channel.basic_publish to publish the messages. Everything is performed asynchronously. -How, when the on_confirm_delivery callback is called, do I find which message is concerned? In the parameters, the only information that changes in the object passed as a parameter to the callback is delivery_tag, which seems to be an auto-incremented number. However, basic_publish doesn't return any delivery tag. -In other words, if I call basic_publish twice, how do I know, when I receive an acknowledgement, whether it's the first or the second message which is acknowledged?","From the RabbitMQ documentation: - -Delivery tags are monotonically growing positive integers and are presented as such by client libraries. - -So you can keep a growing integer in your code per channel, set it to 0 when the channel is opened, increase it when you publish a message. 
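A pure-Python sketch of that bookkeeping (the class and method names are illustrative assumptions, not pika API): mirror the broker's tag sequence with your own counter and remember each outstanding message under its tag, so an acknowledgement can be matched back to the message it confirms.

```python
# Track published messages by the delivery tag the broker will assign them.
class PublishTracker:
    def __init__(self):
        self.delivery_tag = 0   # reset to 0 when the channel opens
        self.pending = {}       # outstanding messages keyed by tag

    def on_publish(self, message):
        self.delivery_tag += 1  # broker assigns tags 1, 2, 3, ...
        self.pending[self.delivery_tag] = message
        return self.delivery_tag

    def on_ack(self, delivery_tag):
        # Returns the message that this confirmation refers to.
        return self.pending.pop(delivery_tag)

t = PublishTracker()
t.on_publish('first message')   # tag 1
t.on_publish('second message')  # tag 2
print(t.on_ack(2))              # prints: second message
```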
Then this integer will be the same as the delivery_tag.",1.2,True,1,6492 -2020-01-12 11:18:22.443,how to set the format for date time column in jupyter notebook,"11am – 4pm, 7:30pm – 11:30pm (Mon-Sun)------(this is opening and closing time of restaurant) - [i have this kind of format in my TIME column and this is not converting into datetime format...so how to prepare the data so that i can apply linear regression???] - -ValueError: ('Unknown string format:', '11am – 4pm, 7:30pm – 11:30pm (Mon-Sun)')","From my understanding, datetime format requires the 24h format, or - 00:00:00 -So instead of 7:30pm, it would be 19:30:00.",0.0,False,1,6493 -2020-01-12 15:18:09.343,Which data to plot to know what model suits best for the problem?,"I'm sorry, i know that this is a very basic question but since i'm still a beginner in machine learning, determining what model suits best for my problem is still confusing to me, lately i used linear regression model (causing the r2_score is so low) and a user mentioned i could use certain model according to the curve of the plot of my data and when i see another coder use random forest regressor (causing the r2_score 30% better than the linear regression model) and i do not know how the heck he/she knows better model since he/she doesn't mention about it. I mean in most sites that i read, they shoved the data to some models that they think would suit best for the problem (example: for regression problem, the models could be using linear regression or random forest regressor) but in some sites and some people said firstly we need to plot the data so we can predict what exact one of the models that suit the best. I really don't know which part of the data should i plot? I thought using seaborn pairplot would give me insight of the shape of the curve but i doubt that it is the right way, what should i actually plot? only the label itself or the features itself or both? 
and how can i get the insight of the curve to know the possible best model after that?","This question is too general, but I will try to give an overview of how to choose a model. First of all, you should know that there is no general rule for choosing the family of models to use; it is more often chosen by experimenting with different models and seeing which one gives better results. You should also know that in general you have multi-dimensional features, so plotting the data will not give you full insight into the dependence of the target on your features. However, to check whether you want to fit a linear model or not, you can start by plotting the target vs. each dimension of the input and looking for some kind of linear relation. I would recommend that you fit a linear model and check whether it is relevant from a statistical point of view (Student's t-test, Smirnov test, checking the residuals...). Note that in real-life applications it is not likely that linear regression will be the best model, unless you do a lot of feature engineering. So I would recommend using more advanced methods (RandomForests, XGBoost...)",0.2012947653214861,False,2,6494 -2020-01-12 15:18:09.343,Which data to plot to know what model suits best for the problem?,"I'm sorry, i know that this is a very basic question but since i'm still a beginner in machine learning, determining what model suits best for my problem is still confusing to me, lately i used linear regression model (causing the r2_score is so low) and a user mentioned i could use certain model according to the curve of the plot of my data and when i see another coder use random forest regressor (causing the r2_score 30% better than the linear regression model) and i do not know how the heck he/she knows better model since he/she doesn't mention about it. 
I mean in most sites that i read, they shoved the data to some models that they think would suit best for the problem (example: for regression problem, the models could be using linear regression or random forest regressor) but in some sites and some people said firstly we need to plot the data so we can predict what exact one of the models that suit the best. I really don't know which part of the data should i plot? I thought using seaborn pairplot would give me insight of the shape of the curve but i doubt that it is the right way, what should i actually plot? only the label itself or the features itself or both? and how can i get the insight of the curve to know the possible best model after that?","If you are using off-the-shelf packages like sklearn, then many simple models like SVM, RF, etc, are just one-liners, so in practice, we usually try several such models at the same time.",0.0,False,2,6494 -2020-01-12 22:30:19.967,Openshift online - no longer running collectstatic,"I've got 2 Python 3.6 pods currently running. They both used to run collectstatic upon redeployment, but then one wasn't working properly, so I deleted it and made a new 3.6 pod. Everything is working perfectly with it, except it no longer is running collectstatic on redeployment (so I'm doing it manually). Any thoughts on how I can get it running again? -I checked the documentation, and for the 3.11 version of openshift still looks like it has a variable to disable collectstatic (which i haven't done), but the 4.* versions don't seem to have it. Don't know if that has anything to do with it. -Edit: -So it turns out that I had also updated the django version to 2.2.7. -As it happens, the openshift infrastructure on openshift online is happy to collectstatic w/ version 2.1.15 of Django, but not 2.2.7 (or 2.2.9). I'm not quite sure why that is yet. 
Still looking into it.",Currently Openshift Online's python 3.6 module doesn't support Django 2.2.7 or 2.2.9.,1.2,True,1,6495 -2020-01-13 13:47:41.013,How to edit ELF by adding custom sections and symbols,"I want to take an elf file and then based on the content add a section with data and add symbols. Using objcopy --add-section I can add a section with the content that I would like. I cannot figure out how to add a symbol. -Regardless, I would prefer not to run a series of programs in order to do what I want but rather do it natively in c or python. In pyelftools I can view an elf, but I cannot figure out how to edit an elf. -How can I add custom sections and symbols in Python or C?","ELF has nothing to do with the symbols stored in it by programs. It is just a format to encode everything. Symbols are generated normally by compilers, like the C compiler, fortran compiler or an assembler, while sections are fixed by the programming language (e.g. the C compiler only uses a limited number of sections, depending on the kind of data you are using in your programs). Some compilers have extensions to associate a variable to a section, so the linker will consider it special in some way. The compiler/assembler generates a symbol table in order for the linker to be able to use it to resolve dependencies. -If you want to add symbols to your program, the easiest way is to create an assembler module with the sections and symbols you want to add to the executable, then assemble it and link to the final executable. -Read about the ld(1) program (the linker), and how it uses link scripts (special hidden files that direct the linker on how to organize the sections in the different modules at link time) to handle the sections in an object file. ELF is just a format. 
If you use a link script and the help of the assembler, you'll be able to add any section you want or modify the normal memory map that programs usually have.",0.0,False,1,6496 -2020-01-13 17:24:11.667,Google Earth Engine using Python,"How should a beginner start learning Google Earth Engine coding with python using colab? I know python, but how do I come to know about the objects of images and image classification.",I use the geemap package to convert a shapefile to an Earth Engine variable without uploading the file to Assets.,0.0,False,1,6497 -2020-01-13 18:30:12.277,How to predict the player using random forest ML,"I have to predict the winner of the Australian Open 2020. My dataset has these features: Location / Tournament / Date / Series / Court / Surface / Round / Winner / Loser etc. -I trained my model using just these features 'Victory','Series','Court','Surface','WinRank','LoseRank','WPts','LPts','Wsets','Lsets','Weather' and I have a 0.93 accuracy but now I have to predict the name of the winner and I don't have any idea how to do it based on the model that I trained. -Example: If I have Dimitrov G. vs Simion G using random forest the model has to give me one of them as the winner of the match. -I transformed the names of the players in dummy variables but after that, I don't know what to do? -Can anyone give me just an idea of how could I predict the winner? so I can create a Tournament, please?","To address such a problem, I would suggest creating a custom target variable. -Firstly, the transformation of player names into dummy variables seems reasonable (just make sure each unique player is identified by the same first and last name combination, thereby avoiding duplications and giving the correct dummy code for the player name). -Now, to create the target variable ""wins"" - - -Use the two player names - P1, P2 of the match as input features for your model. -Define ""wins"" as 1 if P1 wins and 0 if P2 wins. -Run your model with this setup. 
-When you want to create a tournament and predict the winner, the inputs will be your 2 players and other match features. If ""wins"" is close to 1, it means P1 wins, so output that player's name.",1.2,True,1,6498 -2020-01-14 14:08:12.387,Eikon API - ek.get_data for indices,"I would like to retrieve the following (historical) information while using the -ek.get_data() -function: ISIN, MSNR,MSNP, MSPI, NR, PI, NT -for some equity indices, take "".STOXX"" as an example. How do I do that? I want to specify I am using the get data function instead of the timeseries function because I need daily data and I would not respect the 3k rows limit in get.timeseries. -In general: how do I get to know the right names for the fields that I have to use inside the -ek.get_data() -function? I tried with both the codes that the Excel Eikon program uses and also the names used in the Eikon browser but they differ quite a lot from the example I saw in some sample code on the web (e.g. TR.TotalReturnYTD vs TR.PCTCHG_YTD). How do I get to understand what would be the right name for the data types I need?","Considering the codes in your function (ISIN, MSNR,MSNP, MSPI, NR, PI, NT), I'd guess you are interested in the Datastream dataset. You are probably better off using the DataStream WebServices (DSWS) API instead of the Eikon API. This will also relieve you of your 3k row limit.",0.0,False,1,6499 -2020-01-14 16:05:18.980,Installing cutadapt package in windows,"I'm trying to install a package named cutadapt on a Windows server.
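The ""wins"" target described in the answer above can be sketched in plain Python. The player names and column names below are invented for illustration, not taken from a real dataset:

```python
# Hypothetical sketch: each match becomes one training row with (P1, P2) as
# features and wins = 1 if P1 won the match, 0 if P2 won.
matches = [
    {"p1": "Dimitrov G.", "p2": "Simon G.", "winner": "Dimitrov G."},
    {"p1": "Nadal R.", "p2": "Thiem D.", "winner": "Thiem D."},
]

def make_training_rows(matches):
    rows = []
    for m in matches:
        rows.append({"p1": m["p1"], "p2": m["p2"],
                     "wins": 1 if m["winner"] == m["p1"] else 0})
    return rows

rows = make_training_rows(matches)
print([r["wins"] for r in rows])  # [1, 0]
```

At prediction time you would feed the same (P1, P2, match features) shape to the trained model and read the ""wins"" probability back.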
I'm trying to do it this way: -pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org cutadapt -But every time I try to install it I get this error: Building wheel for cutadapt (PEP 517): finished with status 'error' -Any ideas on how to get past this issue?","It turns out that I had some problems with python 3.5, so I switched to python 3.8 and managed to install the package.",1.2,True,1,6500 -2020-01-14 17:32:47.693,Can I use Node.js for the back end and Python for the AI calculations?,"I am trying to create a website in Node.js. However, I am taking a course on how to use Artificial Intelligence and would like to implement it into my program. Therefore, I was wondering if it is feasible to connect Python Spyder to a Node.js based web application with relative ease.","Yes. That is possible. There are a few ways you can do this. You can use the child_process library, as mentioned above. Or, you can have a Python API that takes care of the AI stuff, which your Node app communicates with. -The latter example is what I prefer, as most of my projects run in containers as microservices on Kubernetes.",0.2012947653214861,False,1,6501 -2020-01-14 21:20:47.013,Python 3.X micro ORM compatible with SQL Server,"My application is database heavy (full of very complex queries and stored procedures), it would be too hard and inefficient to write these queries in a lambda way, so for this reason I'll have to stick with raw SQL.
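The ""Python API that the Node app talks to"" option from the answer above can be sketched with only the standard library. The /predict route and the dummy ""score"" field are invented for illustration; a real service would run the AI code inside do_POST:

```python
# Minimal stdlib-only HTTP endpoint a Node.js front end could POST JSON to.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Real AI code would go here; we just echo a dummy score.
        body = json.dumps({"score": 0.5, "echo": payload}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PredictHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the Node front end with a plain HTTP POST:
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"text": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
reply = json.loads(urllib.request.urlopen(req).read())
print(reply["score"])  # 0.5
server.shutdown()
```

In production you would swap the stdlib server for something sturdier, but the shape of the Node-to-Python contract stays the same.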
-So far I found these 2 'micro' ORMs but none are compatible with MSSQL: -PonyORM -Supports: SQLite, PostgreSQL, MySQL and Oracle -Peewee -Supports: SQLite, PostgreSQL, MySQL and CockroachDB -I know SQLAlchemy supports MSSQL, however it would be too big for what I need.",As of today - Jan 2020 - it seems that using pyodbc is still the way to go for SQL Server + Python if you are not using Django or any other big frameworks.,1.2,True,1,6502 -2020-01-15 06:45:49.180,catboost classifier for class imbalance?,"I am using the catboost classifier for my binary classification model where I have a highly imbalanced dataset of 0 -> 115000 & 1 -> 10000. -Can someone please guide me on how to use the following parameters in catboostclassifier: -1. class_weights -2. scale_pos_weight ? -From the documentation, I am under the impression that I can use the ratio of the sum of the negative class to the sum of the positive class, i.e. 115000/10000=11.5, as the input for scale_pos_weight, but I am not sure. -Please let me know what exact values to use for these two parameters and the method to derive those values. -Thanks","For scale_pos_weight you would use negative class // positive class. In your case it would be 11 (I prefer to use whole numbers). -For class_weights you would provide a tuple of the class imbalance. In your case it would be: class_weights = (1, 11) -class_weights is more flexible, so you could define it for multi-class targets. For example, if you have 4 classes you can set it: class_weights = (0.5,1,5,25) -You need to use only one of the parameters. For a binary classification problem I would stick with scale_pos_weight.",1.2,True,1,6503 -2020-01-16 01:15:11.980,How to write \n without making a newline,"So I'm trying to write this exact string but I don't want \n to make a new line; I want to actually print \n on the screen. Any thoughts on how to go about this? (using python) -Languages:\npython\nc\njava","Adding a backslash will interpret the succeeding backslash character literally.
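The arithmetic behind the catboost answer above can be written out directly from the class counts in the question. The CatBoostClassifier call in the comment is illustrative and assumes the parameters behave as the answer describes:

```python
# Deriving scale_pos_weight and class_weights from the counts 0 -> 115000
# and 1 -> 10000 given in the question.
neg, pos = 115_000, 10_000

scale_pos_weight = neg // pos      # 11 -- whole-number ratio, used alone
class_weights = (1, neg // pos)    # (1, 11) -- the equivalent tuple form

print(scale_pos_weight, class_weights)  # 11 (1, 11)

# A hypothetical model construction would then be one of:
#   CatBoostClassifier(scale_pos_weight=11)
#   CatBoostClassifier(class_weights=[1, 11])
```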
print(""\\n"").",0.0,False,1,6504 -2020-01-20 05:37:33.623,My Dataset is showing a string when it should be a curly bracket set/dictionary,"My dataset has a column where upon printing the dataframe each entry in the column is like so: -{""Wireless Internet"",""Air conditioning"",Kitchen} -There are multiple things wrong with this that I would like to correct - -Upon printing this in the console, python is printing this:'{""Wireless Internet"",""Air conditioning"",Kitchen}' Notice the quotations around the curly brackets, since python is printing a string. -Ideally, I would like to find a way to convert this to a list like: [""Wireless Internet"",""Air conditioning"",""Kitchen""] but I do not know how. Further, notice how some words so not have quotations, such as Kitchen. I do not know how to go about correcting this. - -Thanks","what you have is a set of words, curly brackets are for Dictionary use such as {'Alex,'19',Marry','20'} its linking it as a key and value which in my case it name and age, rather than that you can use to_list command in python maybe it suits your needs.",0.0,False,1,6505 -2020-01-20 11:17:53.697,Get whole row using database package execute function,"I am using databases package in my fastapi app. databases has execute and fetch functions, when I tried to return column values after inserting or updating using execute, it returns only the first value, how to get all the values without using fetch.. -This is my query - -INSERT INTO table (col1, col2, col3, col4) - VALUES ( val1, val2, val3, val4 ) RETURNING col1, col2;","INSERT INTO table (col1, col2, col3, col4) VALUES ( val1, val2, val3, val4 ) RETURNING (col1, col2); - -you can use this query to get all columns",1.2,True,2,6506 -2020-01-20 11:17:53.697,Get whole row using database package execute function,"I am using databases package in my fastapi app. 
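For the curly-bracket amenities string above, one stdlib-only way (not mentioned in the answer, so treat it as a sketch) is to strip the braces and let the csv module split the rest, since csv already understands the quotes around multi-word items like ""Wireless Internet"":

```python
# Parse '{"Wireless Internet","Air conditioning",Kitchen}' into a clean list.
import csv
import io

raw = '{"Wireless Internet","Air conditioning",Kitchen}'

def to_list(s):
    inner = s.strip("{}")                     # drop the surrounding braces
    return next(csv.reader(io.StringIO(inner)))  # csv handles the quoting

print(to_list(raw))  # ['Wireless Internet', 'Air conditioning', 'Kitchen']
```

This also fixes the mixed quoting: unquoted items such as Kitchen come back as plain strings alongside the quoted ones.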
databases has execute and fetch functions. When I tried to return column values after inserting or updating using execute, it returned only the first value. How can I get all the values without using fetch? -This is my query - -INSERT INTO table (col1, col2, col3, col4) - VALUES ( val1, val2, val3, val4 ) RETURNING col1, col2;","I had trouble with this also; this was my query: - -INSERT INTO notes (text, completed) VALUES (:text, :completed) RETURNING notes.id, notes.text, notes.completed - -Using database.execute(...) will only return the first column. -But using database.fetch_one(...) inserts the data and returns all the columns. -Hope this helps",0.0,False,2,6506 -2020-01-20 12:19:47.043,"PyCharm venv issue ""pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available""","I hope someone can help me as I would like to use PyCharm to develop in Python. -I have looked around but do not seem to be able to find any solutions to my issue. -I have Python 3 installed using the Windows msi. -I am using Windows 10 and have downloaded PyCharm version 2019.3.1 (Community Edition). -I create a new project using the Pure Python option. -On trying to pip install any package, I get the error -pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available -If I try this in VSCode using the terminal it works fine. -Can anyone tell me how to resolve this issue? It would appear to be a problem with the virtual environment but I do not know enough to resolve the issue. -Thanks for your time.","Sorry guys, it appears the basic interpreter option was set to Anaconda, which I had installed some time ago and forgotten about, and PyCharm defaulted to it. Changing my basic interpreter option to my Python install (Python.exe) solved the issue.
-Keep on learning",0.6730655149877884,False,1,6507 -2020-01-20 14:24:55.687,Does anyone know how Tesseract - OCR postprocessing / spellchecking works?,"I was using tesseract-ocr (pytesseract) for spanish and it achieves very high accuracy when you set the language to spanish and of course, the text is in spanish. If you do not set language to spanish this does not perform that good. So, I'm assuming that tesseract is using many postprocessing models for spellchecking and improving the performance, I was wondering if anybody knows some of those models (ie edit distance, noisy channel modeling) that tesseract is applying. -Thanks in advance!","Your assumption is wrong: If you do not specify language, tesseract uses English model as default for OCR. That is why you got wrong result for Spanish input text. There is no spellchecking post processing.",0.0,False,1,6508 -2020-01-21 22:38:55.577,Erwin API with Python,I am trying to get clear concept on how to get the Erwin generated DDL objects with python ? I am aware Erwin API needs to be used. What i am looking if what Python Module and what API needs to used and how to use them ? I would be thankful for some example !,"Here is a start: -import win32com.client -ERwin = win32com.client.Dispatch(""erwin9.SCAPI"") -I haven't been able to browse the scapi dll so what I know is from trial and error. Erwin publishes VB code that works, but it is not straightforward to convert.",0.2012947653214861,False,1,6509 -2020-01-22 08:29:49.570,"Venv fails in CentOS, ensurepip missing","Im trying to install a venv in python3 (on CentOS). However i get the following error: - -Error: Command '['/home/cleared/Develop/test/venv/bin/python3', '-Im', - 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit - status 1. - -I guess there is some problem with my ensurepip... 
-Running python3 -m ensurepip results in - -FileNotFoundError: [Errno 2] No such file or directory: - '/usr/lib64/python3.6/ensurepip/_bundled/pip-9.0.3-py2.py3-none-any.whl' - -Looking in the /usr/lib64/python3.6/ensurepip/_bundled/ I find pip-18.1-py2.py3-none-any.whl and setuptools-40.6.2-py2.py3-none-any.whl, however no pip-9.0.3-py2.py3-none-any.whl -Running pip3 --version gives - -pip 20.0.1 from /usr/local/lib/python3.6/site-packages/pip (python - 3.6) - -Why is it looking for pip-9.0.3-py2.py3-none-any.whl when I'm running pip 20.0.1, and why to i have pip-18.1-py2.py3-none-any.whl? And how to I fix this?","These versions are harcoded at the beginning of ./lib/python3.8/ensurepip/__init__.py. You can edit this file with the correct ones. -Regarding the reason of this corruption, I can only guess. I would bet on a problem during the installtion of this interpreter.",1.2,True,2,6510 -2020-01-22 08:29:49.570,"Venv fails in CentOS, ensurepip missing","Im trying to install a venv in python3 (on CentOS). However i get the following error: - -Error: Command '['/home/cleared/Develop/test/venv/bin/python3', '-Im', - 'ensurepip', '--upgrade', '--default-pip']' returned non-zero exit - status 1. - -I guess there is some problem with my ensurepip... -Running python3 -m ensurepip results in - -FileNotFoundError: [Errno 2] No such file or directory: - '/usr/lib64/python3.6/ensurepip/_bundled/pip-9.0.3-py2.py3-none-any.whl' - -Looking in the /usr/lib64/python3.6/ensurepip/_bundled/ I find pip-18.1-py2.py3-none-any.whl and setuptools-40.6.2-py2.py3-none-any.whl, however no pip-9.0.3-py2.py3-none-any.whl -Running pip3 --version gives - -pip 20.0.1 from /usr/local/lib/python3.6/site-packages/pip (python - 3.6) - -Why is it looking for pip-9.0.3-py2.py3-none-any.whl when I'm running pip 20.0.1, and why to i have pip-18.1-py2.py3-none-any.whl? 
And how to I fix this?",I would make a clean reinstall of Python (and maybe some of its dependencies as well) with your operating system's package manager (yum?).,0.0,False,2,6510 -2020-01-22 11:33:05.413,How can I deploy my features in a Machine Learning algorithm?,"I’m way new to ML so I have a really rudimentary question. I would appreciate it if one clarifies it for me. -Suppose I have a set of tweets which labeled as negative and positive. I want to perform some sentiment analysis. -I extracted 3 basic features: - -Emotion icons -Exclamation marks -Intensity words(very, really etc.). - -How should I use these features with SVM or other ML algorithms? -In other words, how should I deploy the extracted features in SVM algorithm? -I'm working with python and already know how should I run SVM or other algorithms, but I don't have any idea about the relation between extracted features and role of them in each algorithm! -Based on the responses of some experts I update my question: -At first, I wanna appreciate your time and worthy explanations. I think my problem is solving… So in line with what you said, each ML algorithm may need some vectorized features and I should find a way to represent my features as vectors. I want to explain what I got from your explanation via a rudimentary example. -Say I have emoticon icons (for example 3 icons) as one feature: -1-Hence, I should represent this feature by a vector with 3 values. -2-The vectorized feature can initial in this way : [0,0,0] (each value represents an icon = :) and :( and :P ). -3-Next I should go through each tweet and check whether the tweet has an icon or not. For example [2,1,0] shows that the tweet has: :) 2 times, and :( 1 time, and :p no time. -4-After I check all the tweets I will have a big vector with the size of n*3 (n is the total number of my tweets). -5-Stages 1-4 should be done for other features. 
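The ensurepip answer above (the hardcoded versions in ensurepip/__init__.py) can be checked without editing anything, using only the standard library. Note that some distro builds strip the _bundled directory, hence the guard:

```python
# Show which pip wheel this interpreter's ensurepip expects, and which wheels
# actually sit in its _bundled directory - a mismatch reproduces the error
# from the question.
import ensurepip
import os

print("expected bundled pip:", ensurepip.version())

bundled = os.path.join(os.path.dirname(ensurepip.__file__), "_bundled")
if os.path.isdir(bundled):
    print("wheels on disk:", sorted(os.listdir(bundled)))
else:
    print("no _bundled directory (distro-stripped build)")
```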
-6-Then I should merge all those features by using m models of SVM (m is the number of my features) and then classify by majority vote or some other method. -Or should create a long vector by concatenating all of the vectors, and feed it to the SVM. -Could you please correct me if there is any misunderstanding? If it is not correct I will delete it otherwise I should let it stay cause It can be practical for any beginners such as me... -Thanks a bunch…","basically, to make things very ""simple"" and ""shallow"", all algorithm takes some sort of a numeric vector represent the features -the real work is to find how to represent the features as vector which yield the best result, this depends by the feature itself and on the algorithm using -for example to use SVM which basically find a separator plane, you need to project the features on some vectors set which yield a good enough separation, so for instance you can treat your features like this: - -Emotion icons - create a vector which represent all the icons present in that tweet, define each icon to an index from 1 to n so tweet represented by [0,0,0,2,1] means the 4th and 5th icons are appearing in his body 2 and 1 times respectively -Exclamation marks - you can simply count the number of occurrences (a better approach will be to represent some more information about it like the place in a sentence and such...) 
-Intensity words - you can use the same approach as the Emotion icons - -basically each feature can be used alone in the SVM model to classify good and bad -you can merge all those features by using 3 models of SVM and then classify by majority vote or some other method -or -you can create a long vector by concatenating all of the vectors, and feed it to the SVM - -this is just a one approach, you might tweak it or use some other one to fit your data, model and goal better",0.999329299739067,False,1,6511 -2020-01-22 12:40:14.960,Search SVN for specific files,"I am trying to write a Python script to search a (very large) SVN repository for specific files (ending with .mat). Usually I would use os.walk() to walk through a directory and then search for the files with a RegEx. Unfortunately I can't use os.walk() for a repository, since it is not a local directory. -Does anyone know how to do that? The repository is too large to download, so I need to search for it ""online"". -Thanks in advance.","Something like -svn ls -R REPO-ROOT | grep PATTERN -will help",0.3869120172231254,False,1,6512 -2020-01-23 01:37:55.337,"How do you create a class, or function in python that allows you to make a sequence type with specific characteristics","1) My goal is to create a sequence that is a list that contains ordered dictionaries. The only problem for me will be described below. - -I want the list to represent a bunch of ""points"" which are for all intents and purposes just an ordered dictionary. However, I notice that when I use OrderedDict class, when I print the dictionary it comes up as OrderedDict([key value pair 1, key value pair 2, ... etc)] For me, I would rather it behave like an ordered dictionary, BUT not having those DOUBLE ""messy/ugly"" ""end marks"" which are the ""[( )]"". I don't mind if the points have ONE, and only one, type of ""end marks"". Also I would also like it if when I print this data type that stuff like OrderedDict() doesn't show up. 
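The per-tweet vectorization described in the answer above can be made concrete with a toy featurizer. The tweets, emoticon set, and intensity-word list below are invented for illustration; the point is only the shape of the concatenated vector:

```python
# One vector per tweet: emoticon counts + exclamation count + intensity-word
# counts, concatenated in a fixed order so every tweet maps to the same shape.
EMOTICONS = [":)", ":(", ":P"]
INTENSITY = ["really", "very"]

def featurize(tweet):
    low = tweet.lower()
    emo = [tweet.count(e) for e in EMOTICONS]          # e.g. [2, 0, 0]
    excl = [tweet.count("!")]                          # e.g. [1]
    inten = [low.split().count(w) for w in INTENSITY]  # e.g. [1, 0]
    return emo + excl + inten                          # one long vector

print(featurize("really loved it :) :) !"))  # [2, 0, 0, 1, 1, 0]
```

A matrix of these vectors (one row per tweet) is exactly what an SVM implementation expects as input, whether you train one model per feature group or one model on the concatenation.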
However, I do not mind if it shows up in return values. Like you know how when you print a list it doesn't show up as list(index0, index1, ... etc) but instead it shows up as [index0, index1, ... etc]. That is what I mean. Inside the point, it would look like this - -point = {'height': 1, 'weight': 3, 'age': 5, etc} <- It could be brackets or braces or parentheses. Just some type of ""end mark"", but I preferably would like it to be in {} and having key value pairs indicated by key: value and have them separated by commas. -what_i_am_looking_for = [point0, point1, point2, point3, ... etc]","In Python 3.6, the ordinary dict implementation was re-written and maintains key insertion order like OrderedDict, but was considered an implementation detail. Python 3.7 made this feature an official part of the language spec, so if you use Python 3.6+ just use dict instead of OrderedDict if you don't care about backward-compatibility with Python 3.5 or earlier.",0.0,False,1,6513 -2020-01-23 04:51:14.783,Scrape and compare and Web page data,I have a web page with data in different tables. I want to extract a particular table and compare with an excel sheet and see whether there are any differences. Note the web page is in a internal domain. I tried with requests and beautifulsoup but I got 401 error. Could anyone help how I can achieve this?,"401 is an Unauthorized Error - which suggests your username and password may be getting rejected, or their format not accepted. Review your credentials and the exact format / data names expected by the page to ensure you're correctly trying to connect.",0.0,False,1,6514 -2020-01-25 09:25:18.143,USB Device/PyUSB on Windows and LInux behaving differently,"I have a device with USB interface which I can connect to both my Ubuntu 18.04 machine and my Windows 10 machine. 
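The dict answer above is easy to verify: on Python 3.7+ a plain dict keeps insertion order and prints with exactly the single pair of braces the question asks for, with no OrderedDict(...) wrapper:

```python
# A "point" as an ordinary dict (Python 3.7+ guarantees insertion order).
point = {'height': 1, 'weight': 3, 'age': 5}
what_i_am_looking_for = [point, {'height': 2, 'weight': 4, 'age': 6}]

print(point)                 # {'height': 1, 'weight': 3, 'age': 5}
print(list(point))           # ['height', 'weight', 'age'] - insertion order
print(what_i_am_looking_for[1]['age'])  # 6
```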
On Windows 10 I have to install the CP210x driver and manually attach it to the device (otherwise Windows tries to find the device manufacturer's driver - it's a CP210x serial chip), and in Linux write the vendorID and productID to the cp210x driver to allow it to attach to ttyUSB0. This works fine. -The Windows driver is from SiliconLabs - the manufacturer of the UART-USB chip in the device. -So on Windows it is attached to COM5 and Linux to ttyUSB0 (Ubuntu, Raspbian) -Using Wireshark I can snoop the usb bus successfully on both operating systems. -The USB device sends data regularly over the USB bus and on Windows using Wireshark I can see this communication as ""URB_INTERRUPT in"" messages with the final few bytes actually containing the data I require. -On Linux it seems that the device connects but using Wireshark this time I can only see URB_BULK packets. Examining the endpoints using pyusb I see that there is no URB_Interrupt endpoint only the URB_Bulk. -Using the pyusb libraries on Linux it appears that the only endpoints available are URB_BULK. -Question mainly is how do I tell Linux to get the device to send via the Interrupt transfer mechanism as Windows seems to do. I don't see a method in pyusb's set_configuration to do this (as no Interrupt transfer endpoints appear) and haven't found anything in the manufacturer's specification. -Failing that, of course, I could snoop the configuration messages on Windows, but there has to be something I'm missing here?","Disregard this, the answer was simple in the end: Windows was reassigning the device address on the bus to a different device.",0.0,False,1,6515 -2020-01-25 21:41:28.487,How can I define an absolute path saved in one exe file?,"I'm writing a software in python for windows which should be connected to a database. Using py2exe i want to make an executable file so that I don't have to install python in the machines the software is running. 
The problem is that I want the user to define where the database is located the very first time the software starts, but I don't know how to store this information so that the user doesn't have to tell everytime where is the database. I have no idea how to deal with it. (the code cannot be changed because it's just a .exe file). How would you do that?","I can think of some solutions: - -You can assume the DB is in a fixed location - bad idea, might move or change name and then your program stop working -You can assume the DB is in the same folder as the .exe file and guide the user to run it in the same folder - better but still not perfect -Ask the user for the DB location and save the path in a configuration file. If the file doesn't exist or path doesn't lead to the file, the user should tell the program where is the DB, otherwise, read it from the config file - I think this is the best option.",0.0,False,1,6516 -2020-01-25 23:13:58.847,How to install python module local to a single project,"I've been going around but was not able to find a definitive answer... -So here's my question.. -I come from javascript background. I'm trying to pickup python now. -In javascript, the basic practice would be to npm install (or use yarn) -This would install some required module in a specific project. -Now, for python, I've figured out that pip install is the module manager. -I can't seem to figure out how to install this specific to a project (like how javascript does it) -Instead, it's all global.. I've found --user flag, but that's not really I'm looking for. -I've come to conclusion that this is just a complete different schema and I shouldn't try to approach as I have when using javascript. -However, I can't really find a good document why this method was favored. -It may be just my problem but I just can't not think about how I'm consistently bloating my pip global folder with modules that I'm only ever gonna use once for some single project. -Thanks.","A.) 
Anaconda (the simplest) Just download “Anaconda” that contains a lots of python modules pre installed just use them and it also has code editors. You can creat multiple module collections with the GUI. -B.) Venv = virtual environments (if you need something light and specific that contains specific packages for every project -macOS terminal commands: - -Install venv -pip install virtualenv -Setup Venve (INSIDE BASE Project folder) -python3 -m venv thenameofyourvirtualenvironment -Start Venve -source thenameofyourvirtualenvironment/bin/activate -Stop Venve -deactivate -while it is activated you can install specific packages ex.: -pip -q install bcrypt - -C.) Use “Docker” it is great if you want to go in depth and have a solide experience, but it can get complicated.",1.2,True,1,6517 -2020-01-27 10:22:09.743,How to stop Anaconda Navigator and Spyder from dropping libraries into User folder,"For reference, I'm trying to re-learn programming and python basics after years away. -I recently downloaded Anaconda as part of an online Python Course. However, every time I open Spyder or the Navigator they instantly create folders for what I assume are all the relevant libraries in C:Users/Myself. These include .conda, .anaconda, .ipython, .matplotlib, .config and .spyder-py3. -My goal is to figure out how change where these files are placed so I can clean things up and have more control. However, I am not entirely sure why this occurs. My assumption is it's due to that being the default location for the Working Directory, thought the solutions I've seen to that are currently above me. I'm hoping this is a separate issue with a simpler solution, and any light that can be shed on this would be appreciated.","They are automatically created to store configuration changes for those related tools. They are created in %USERPROFILE% under Windows. -The following is NOT recommended: -You can change this either via the setx command or by opening the Start Menu search for variables. 
-- This opens the System Properties menu on the Advanced tab -- Click on Environmental Variables -- Under the user section, add a new variable called USERPROFILE and set the value to a location of your choice.",0.0,False,2,6518 -2020-01-27 10:22:09.743,How to stop Anaconda Navigator and Spyder from dropping libraries into User folder,"For reference, I'm trying to re-learn programming and python basics after years away. -I recently downloaded Anaconda as part of an online Python Course. However, every time I open Spyder or the Navigator they instantly create folders for what I assume are all the relevant libraries in C:Users/Myself. These include .conda, .anaconda, .ipython, .matplotlib, .config and .spyder-py3. -My goal is to figure out how change where these files are placed so I can clean things up and have more control. However, I am not entirely sure why this occurs. My assumption is it's due to that being the default location for the Working Directory, thought the solutions I've seen to that are currently above me. I'm hoping this is a separate issue with a simpler solution, and any light that can be shed on this would be appreciated.","Go to: -~\anaconda3\Lib\site-packages\jupyter_core\paths.py -in def get_home_dir(): -You can specify your preferred path directly. -Other anaconda applications can be mortified by this way but you have to find out in which scripts you can change the homedir, and sometimes it has different names.",0.0,False,2,6518 -2020-01-28 07:32:40.877,"Is there an effective way to install 'pip', 'modules' and 'dependencies' in an offline environment?","The computer on which I want to install pip and modules is a secure offline environment. -Only Python 2.7 is installed on this computers(centos and ubuntu). -To run the source code I coded, I need another module. -But neither pip nor module is installed. -It looks like i need pip to install all of dependency files. -But I don't know how to install pip offline. 
-and I have no idea how to install the module offline without pip. -The only network connection is PyPI, from my nexus3 repository. -Is there a good way? -Would it be better to install pip and then install modules? -Would it be better to just install the module without installing pip?","Using pip it is easier to install the packages, as it manages certain things on its own. You can also install modules manually by downloading their source code and compiling it yourself. The choice is up to you.",0.0,False,1,6519 -2020-01-29 15:00:21.157,Kubernetes log not showing output of python print method,"I have a python application in which I'm using the print() method to show text to a user. When I interact with this application manually using the kubectl exec ... command, I can see the output of the prints. -However, when the script is executed automatically on container startup with CMD python3 /src/my_app.py (last entry in the Dockerfile), the prints are gone (not shown in kubectl logs). Any suggestions on how to fix it?","It turned out to be a problem with the Python environment. Setting these two environment variables, PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8, fixed the issue.",0.5457054096481145,False,1,6520 -2020-01-30 15:17:27.757,Spotfire: Using Multiple Markings in a Data Function Without Needing Something Marked in Each,"In Spotfire I have a dashboard that uses both filtering (only one filtering scheme) and multiple markings to show the resulting data in a table. -I have created a data function which takes a column and outputs the data in the column after the active filtering scheme and markings are applied. -However, this output column is only calculated if I have something marked in every marking. -I want the output column to be calculated no matter how many of the markings are being used. Is there a way to do this?
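The Kubernetes logging fix above comes down to disabling Python's output buffering. A sketch of the two equivalent forms (the Dockerfile lines are shown as comments and assume the CMD from the question):

```shell
# In the Dockerfile, either set the env var the answer used:
#   ENV PYTHONUNBUFFERED=1 PYTHONIOENCODING=UTF-8
# or pass python's -u flag in the CMD:
#   CMD ["python3", "-u", "/src/my_app.py"]
# The -u flag forces unbuffered stdout/stderr, demonstrated here:
python3 -u -c 'print("this line reaches kubectl logs immediately")'
```

Without one of these, stdout is block-buffered when not attached to a TTY, which is exactly why kubectl exec showed output while kubectl logs did not.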
-I was thinking I could use an IronPython script to edit the data function parameters for my input column to only check the boxes for markings that are actively being used. However, I can't find how to access those parameters with IronPython. -Thanks!","I think it would be a combination of visuals being set to OR instead of AND for markings (if you have a set of markings that are being set from others). -Also, are all the input parameters set as required parameters? Perhaps unchecking that option would still run the script. In the R script you may want to replace null values as well. -Not too sure without some example.",1.2,True,1,6521 -2020-01-30 17:22:41.407,Program to print all folder and subfolder names in specific folder,"I should do it with only import os. -My problem is that I don't know how to make the program, after checking the specific folder for folders, do the same for the folders inside those folders, and so on.",You can use os.walk(directory),0.0,False,1,6522 -2020-01-30 21:18:10.660,Cannot import module from linux-ubuntu terminal,"I installed the keyboard module for python with pip3, and after I ran my code the terminal showed me this message: ""ImportError: You must be root to use this library on linux."" Can anybody help me run it properly? I tried switching to ""su -"" and running it there as well.","Can you please post your script? -If you are starting the program without a shebang, it probably will not run and will throw an ImportError. -Try adding a shebang (#!) at the first line of your script. -A shebang is used in Unix to select the interpreter you want to run your script.
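The os.walk answer above handles the recursion the question is worried about by itself, since it visits every nested folder at every depth. A minimal version using only import os (the tempfile tree is just for the demo):

```python
# Print every folder and subfolder under root, recursively, with os only.
import os

def list_dirs(root):
    found = []
    for dirpath, dirnames, _filenames in os.walk(root):
        for name in dirnames:
            found.append(os.path.join(dirpath, name))
    return found

# Demo against a throwaway nested tree:
import tempfile
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
print([os.path.basename(p) for p in list_dirs(root)])  # ['a', 'b']
```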
-Write this in the first line: #!/usr/bin/env python3 -If this doesn't help, try running it from the terminal using a preceding dot like this: -python3 ./{file's_name}.py",1.2,True,1,6523 -2020-01-31 02:27:00.850,Easiest way to put python text game in html?,"I am trying to help someone put a python text game to be displayed with the inputs and output on his html website. What's the easiest way to do this, regarding the many outputs and inputs? Would it be to make it a flask app? I don't really know how else to describe the situation. Answers would be much appreciated.","I am developing a website with python3.8 and Sanic. It has been pretty nice to use async, await and the := operator.",0.0,False,1,6524 -2020-02-01 21:35:48.343,is django overkill for a contact us website?,"I'm a complete beginner and a relative of mine asked me to build a simple 'contact us' website for them. It should include some information about his company and a form in which people that visit the website are able to send mails to my relative. I have been playing around with vue.js in order to build the frontend. I now want to know how to put the form to send mails and I read it has to be done with backend, so I thought I could use django as I have played with it in the past and I am confident using python. Is it too much for the work that I have to do? Should I use something simpler? I accept any suggestions please, Thanks.","You should probably use something ready-made like Wix or Wordpress if you want to do it fast. If you prefer to learn in the process, you can do it with Django and Vue, but that is indeed a little bit overkill.",0.0,False,2,6525 -2020-02-01 21:35:48.343,is django overkill for a contact us website?,"I'm a complete beginner and a relative of mine asked me to build a simple 'contact us' website for them. It should include some information about his company and a form in which people that visit the website are able to send mails to my relative.
I have been playing around with vue.js in order to build the frontend. I now want to know how to put the form to send mails and I read it has to be done with backend, so I thought I could use django as I have played with it in the past and I am confident using python. Is it too much for the work that I have to do? Should I use something simpler? I accept any suggestions please, Thanks.","Flask -You can use Flask. it is simpler than Django and easy to learn. you can build a simple website like the one you want in less than 50 line. - -Wordpress -If you want you can use Wordpress. it's easy to install and many hosting services support it already. Wordpress has so many plugins and templates to build contact us website in 10 minutes. - -Wix -wix is easy, drag-n-drop website builder with many pre-build templates that you can use, check them out and you will find what you need.",0.0,False,2,6525 -2020-02-02 23:55:02.930,mitmproxy: shortcut for undoing edit,"new user of mitmproxy here. I've figured out how to edit a request and replay it, and I'm wondering how to undo my edit. -More specifically, I go to a request's flow, hit 'e', then '8' to edit the request headers. Then I press 'd' to delete one of the headers. What do I press to undo this change? 'u' doesn't work.","It's possible to revoke changes to a flow, but not while editing. In your case, 'e' -> '8' -> 'd' headers, now press 'q' to go back to the flow -> press 'V' to revoke changes to the flow.",0.0,False,1,6526 -2020-02-03 11:14:48.003,"Tktable module installation problem. _tkinter.TclError: invalid command name ""table""","This problem has been reported earlier but I couldn't find the exact solution for it. I installed ActiveTCL and downloaded tktable.py by ""Guilherme Polo "" to my site-packages, also added Tktable.dll, pkgindex.tcl, and tktable.tcl from ActiveTCL\lib\Tktable2.11 to my python38-32\tcl and dlls . 
I also tried setting the env variable for TCL_LIBRARY and TK_LIBRARY to tcl8.6 and tk8.6 respectively. But I am still getting invalid command name ""table"". -What is that I am missing? Those who made tktable work on windows 10 and python 3 , how did you do it? I am out of ideas and would be grateful for some tips on it.","Seems like there was problem running the Tktable dlls in python38-32 bit version. It worked in 64 bit version. -Thanks @Donal Fellows for your input.",0.0,False,1,6527 -2020-02-03 12:22:32.540,"Getting a ""Future Warning"" when importing for Yahoo with Pandas-Datareader","I am currently , successfully, importing stock information from Yahoo using pandas-datareader. However, before the extracted data, I always get the following message: - -FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. - -Would anyone have an idea of what it means and how to fix it?","You may find the 'util.testing' code in pandas_datareader, which is separate from pandas.",-0.0679224682270276,False,3,6528 -2020-02-03 12:22:32.540,"Getting a ""Future Warning"" when importing for Yahoo with Pandas-Datareader","I am currently , successfully, importing stock information from Yahoo using pandas-datareader. However, before the extracted data, I always get the following message: - -FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. 
- -Would anyone have an idea of what it means and how to fix it?","For mac OS open /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas_datareader/compat/__init__.py -change: from pandas.util.testing import assert_frame_equal -to: from pandas.testing import assert_frame_equal",-0.1352210990936997,False,3,6528 -2020-02-03 12:22:32.540,"Getting a ""Future Warning"" when importing for Yahoo with Pandas-Datareader","I am currently , successfully, importing stock information from Yahoo using pandas-datareader. However, before the extracted data, I always get the following message: - -FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. - -Would anyone have an idea of what it means and how to fix it?","Cause: The cause of this warning is that, basically, the pandas_datareader is importing a module from the pandas library that will be deprecated. Specifically, it is importing pandas.util.testing whereas the new preferred module will be pandas.testing. -Solution: First off this is a warning, and not an outright error, so it won't necessarily break your program. So depending on your exact use case, you may be able to ignore it for now. -That being said, there are a few options you can consider: - -Option 1: Change the code yourself -- Go into the pandas_datareader module and modify the line of code in compat_init.py that currently says from pandas.util.testing import assert_frame_equal simply to from pandas.testing import assert_frame_equal. This will import the same function from the correct module. -Option 2: Wait for pandas-datareader to update --You can also wait for the library to be upgraded to import correctly and then run pip3 install --upgrade pandas-datareader. You can go to the Github repo for pandas-datareader and raise an issue. 
-Option 3: Ignore it -- Just ignore the warning for now since it doesn't break your program.",0.0,False,3,6528 -2020-02-04 16:27:59.627,Multiple versions of Python in PATH,"I've installed Python 3.7, and since installed python 3.8. -I've added both their folders and script folders to PATH, and made sure 3.8 is first as I'd like that to be default. -I see that the Python scripts folder has pip, pip3 and pip3.8 and the python 3.7 folder has the same (but with pip3.7 of course), so in cmd typing pip or pip3 will default to version 3.8 as I have that first in PATH. -This is great, as I can explicitly decide which pip version to run. However I don't know how to do to the same for Python. ie. run Python3.7 from cmd. -And things like Jupyter Notebooks only see a ""Python 3"" kernel and don't have an option for both. -How can I configure the PATH variables so I can specify which version of python3 to run?","What OS are you running? If you are running linux and used the system package panager to install python 3.8 you should be able to invoke python 3.8 by typing python3.8. Having multiple binaries named python3 in your PATH is problematic, and having python3 in your PATH point to python 3.8 instead of the system version (which is likely a lower version for your OS) will break your system's package manager. It is advisable to keep python3 in your PATH pointing to whatever the system defaults to, and use python3.8 to invoke python 3.8. -The python version that Jupyter sees will be the version from which you installed it. If you want to be able to use Jupyter with multiple python versions, create a virtual environment with your desired python version and install Jupyter in that environment. 
Once you activate that specific virtual env you will be sure that the jupyter command that you invoke will activate the correct python runtime.",0.2012947653214861,False,1,6529 -2020-02-04 21:42:07.673,How does the pymssql library fall back on the named pipe port when port 1433 is closed?,"I'm trying to remove pymssql and migrate to pyodbc on a python 3.6 project that I'm currently on. The network topology involves two machines that are both on the same LAN and same subnet. The client is an ARM debian based machine and the server is a windows box. Port 1433 is closed on the MSSQL box but port 32001 is open and pymssql is still able to remotely connect to the server as it somehow falls back to using the named pipe port (32001). -My question is how is pymssql able to fall back onto this other port and communicate with the server? pyodbc is unable to do this: if I try using port 1433 it fails and doesn't try to locate the named pipe port. I've tried digging through the pymssql source code to see how it works but all I see is a call to dbopen which ends up in freetds library land. Also just to clarify, tsql -LH returns the named pipe information and open port which falls in line with what I've seen using netstat and nmap. I'm 100% sure pymssql falls back to using the named pipe port as the connection to the named pipe port is established after connecting with pymssql. -Any insight or guidance as to how pymssql can do this but pyodbc can't would be greatly appreciated.",Removing the PORT= parameter and using the SERVER=ip\instance in the connection string uses the named pipes to do the connection instead of port 1433.
I'm still not sure how the driver itself knows to do this but it works and resolved my problem.,0.3869120172231254,False,1,6530 -2020-02-04 22:07:19.503,PayPal Adaptive Payments ConvertCurrency Request (deprecated API) in Python,"I can't find any example of how to make a convertcurrency request using the paypal API in python, can you give me some examples for this simple request?","Is this an existing integration for which you have an Adaptive APP ID? If not, the Adaptive Payments APIs are very old and deprecated, so you would not have permissions to use this, regardless of whether you can find ready-made code samples for Python.",0.0,False,1,6531 -2020-02-04 22:26:47.110,Python was not found but can be installed,"I have just installed python3.8 and sublime text editor. I am attempting to run the python build on sublime text but I am met with a ""Python was not found but can be installed"" error. -Both python and sublime are installed on E:\ -When opening the cmd prompt I can change dir and am able to run py from there without an issue. -I'm assuming that my sublime is not pointing to the correct dir but don't know how to resolve this issue.","I had the same problem, so I went to the Microsoft Store (Windows 10) and simply installed ""python 3.9"" and the problem was gone!",-0.3869120172231254,False,1,6532 -2020-02-05 14:18:27.763,How to use logger with one basic config across app in Python,"I want to improve my understanding of how to use logging correctly in Python. I want to use an .ini file to configure it and what I want to do: - -define basic logger config through .fileConfig(...)
in some .py file -import logger, call logger = logging.getLogger(__name__) across the app and be sure that it uses my config file that I was loaded recently in different .py file - -I read few resources over Internet ofc but they are describing tricks of how to configure it etc, but want I to understand is that .fileConfig works across all app or works only for file/module where it was declared. -Looks like I missed some small tip or smth like that.","It works across the whole app. Be sure to configure the correct loggers in the config. logger = logging.getLogger(__name__) works well if you know how to handle having a different logger in every module, otherwise you might be happier just calling logger = logging.getLogger(""mylogger"") which always gives you the same logger. If you only configure the root logger you might even skip that and simply use logging.info(""message"") directly.",1.2,True,1,6533 -2020-02-05 17:03:07.580,Is there any way to remove BatchToSpaceND from tf.layers.conv1d?,"As I get, tf.layers.conv1d uses pipeline like this: BatchToSpaceND -> conv1d -> SpaceToBatchND. So the question is how to remove (or disable) BatchToSpaceND and SpaceToBatchND from the pipeline?","As I've investigated it's impossible to remove BatchToSpaceND and SpaceToBatchND from tf.layers.conv1d without changing and rebuilding tensorflow source code. One of the solutions is to replace layers to tf.nn.conv1d, which is low-level representation of convolutional layers (in fact tf.layers.conv1d is a wrapper around tf.nn.conv1d). These low-level implementations doesn't include BatchToSpaceND and SpaceToBatchND.",1.2,True,1,6534 -2020-02-07 09:05:30.220,ConnectionRefusedError: [WinError 10061][WinError 10061] No connection could be made because the target machine actively refused it,"What exactly does this error mean and how can i fix it, am running server on port 8000 of local host. 
-ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it","Is firewall running on the server? If so, that may be blocking connections. You could disable firewall or add an exception on the server side to allow connections on port 8000.",0.0,False,1,6535 -2020-02-07 17:33:20.807,"Is there a way to get one, unified, mapping from all indices under one alias?","For example if I have 10 indices with similar names and they all alias to test-index, how would I get test-index-1 test-index-2 test-index-3 test-index-4, test-index-5, test-index-6, test-index-7, test-index-8, test-index-9, and test-index-10 to all point to the mapping in use currently when you to a GET /test-index/_mapping?",Not sure what you define as 'unified' mapping - but you can always use wildcards in mapping request. For example : /test-inde*/_mapping would give mapping of all indices in that pattern .,0.0,False,1,6536 -2020-02-08 13:56:49.837,How do I add a mobile geolocation marker into a Folium webmap that automatically updates position?,"I have a webmap which is made in python using Folium. I am adding various geojson layers from an underlying database. I would like to do spatial analysis based on the user's location and their position relative to the various map overlays. As part of this I want to display a marker on the map which indicates the user's current position, and which updates regularly as they move around. -I know how to add markers to the map from within python, using Folium. -I know how to get a constantly updating latitude / longitude of the user using JS -navigator.geolocation.watchPosition(showPosition) -which then passes a position variable to the function showPosition. -I am currently just displaying this as text on the website for now. 
-What I have not been able to do is to add a marker to the Folium map from inside the webpage, using JS/Leaflet (as Folium is just a wrapper for Leaflet, i think I should be able to do this). -The Folium map object seems to be assigned a new variable name every time the webpage is loaded, and I don't know how to ""get"" the map element and add a marker using the Leaflet syntax -L.marker([lat, lon]).addTo(name_of_map_variable_which_keeps_changing) -Alternatively there might be a way to ""send"" the constantly changing lat/lon variables from the webpage back to the python script so that I can just use folium to add the marker. -But I have been unable to figure this out or find the right assistance online and would appreciate any help.","OK, I have figured out a main part of the question - how to add a user location marker to the Folium map. It is actually very simple: -https://python-visualization.github.io/folium/plugins.html#folium.plugins.LocateControl -I am still unable to pass the user's lat/lon through to my python script so that I can perform spatial queries using that location. So am looking forward to anyone being able to answer that part. Though I may have to post that as a separate question perhaps...",0.0,False,1,6537 -2020-02-09 11:13:26.383,Convert String into the format that readUTF() expects,I created a Client with Java and a Server with Python. The Java client receive data using readUTF() of the class DataInputStream. My problem is that the function readUTF() expects a modified version of 'utf-8' that I don't know how to generate in the (Python) server side.,"I got it!. Using the function read() of the class DataInputStream do work. The problem was that I initialized the destination buffer like this: byte[] ans = {}, instead of allocating some bytes. Thanks for everyone!",0.0,False,1,6538 -2020-02-09 20:57:38.340,Change default version of python in ubuntu 18.04,"I just installed ubuntu 18.04 and I really don't know how does everything work yet. 
I use the latest version of python on my windows system (3.8.1) and would like to use that version as well in ubuntu, but the ""pre-installed"" version of python is 2.7. Is there a way to uninstall that old version of python instead of changing the alias of the python command to match the version I want to use? Can you do that or does ubuntu need to have that version? If you could help me or explain this to me I would appreciate it.","Some services and applications in Ubuntu use Python 2.x to run. It is not advisable to remove it. Rather, virtual environments may be a good practice. There, you can work on Python 3.x, as per your needs, without messing with the system's dependencies.",0.0,False,1,6539 -2020-02-10 10:36:23.260,How to convert a 3d model into an array of points,"I'm building my own 3d engine, I need to import 3d models into it, but I don't know how to do it. -I wonder if it is possible to convert a 3d model into an array of points; if it is possible, how do you do it?","This isn't something I've done before; but the premise is interesting so I thought I'd share my idea as I have worked with grids (pretty much an array) in 3D space during my time at university. -If you consider 3D space, you could represent that space as a three dimensional array quite simply, with each dimension representing an axis. You could then treat each element in that array as a point in space and populate that with a value (say a Boolean of true/false, 1/0) to identify the points of your model within that three dimensional space. -All you'd need is the Height, Width and Depth of your model, with each one of these being the dimensions in your array. Populate the values with 0/false if the model does not have a point in that space, or 1/true if it does.
This would then give you a representation of your model as a 3D array.",0.0,False,1,6540 -2020-02-10 11:09:57.583,Discord.py: Adding someone to a discord server with just the discord ID,"I'm trying to add someone to a specific server and then DM said person with just the discord ID. -The way it works is that someone is logging himself in using discord OAuth2 on a website and after he is logged in he should be added to a specific server and then the bot should DM saying something like Welcome to the server! -Has anyone an idea how to do that? -Thanks for any help",It is not possible to leave or join servers with OAuth2. Nor is it possible to DM a user on Discord with a bot unless they share a mutual server.,0.0,False,1,6541 -2020-02-10 21:41:21.090,Run Python Script on AWS and transfer 5GB of files to EC2,"I am an absolute beginner in AWS: I have created a key and an instance, the python script I want to run in the EC2 environment needs to loop through around 80,000 filings, tokenize the sentences in them, and use these sentences for some unsupervised learning. -This might be a duplicate; but I can't find a way to copy these filings to the EC2 environment and run the python script in EC2, I am also not very sure as to how I can use boto3. I am using Mac OS. I am just looking for any way to speed things up. Thank you so so much! 
I am forever grateful!!!","Here's one way that might help: - -create a simple IAM role that allows S3 access to the bucket holding your files -apply that IAM role to the running EC2 instance (or launch a new instance with the IAM role) -install the awscli on the EC2 instance -SSH to the instance and sync the S3 files to the EC2 instance using aws s3 sync -run your app - -I'm assuming you've launched EC2 with enough diskspace to hold the files.",0.0,False,2,6542 -2020-02-10 21:41:21.090,Run Python Script on AWS and transfer 5GB of files to EC2,"I am an absolute beginner in AWS: I have created a key and an instance, the python script I want to run in the EC2 environment needs to loop through around 80,000 filings, tokenize the sentences in them, and use these sentences for some unsupervised learning. -This might be a duplicate; but I can't find a way to copy these filings to the EC2 environment and run the python script in EC2, I am also not very sure as to how I can use boto3. I am using Mac OS. I am just looking for any way to speed things up. Thank you so so much! I am forever grateful!!!","Here's what I tried recently: - -Create the bucket and keep the bucket accessible for public. -Create the role and add HTTP option. -Upload all the files and make sure the files are public accessible. -Get the HTTP link of the S3 file. -Connect the instance through putty. -wget copies the file into EC2 -instance. 
- -If your files are in zip format, a one-time copy is enough to move all the files into the instance.",1.2,True,2,6542 -2020-02-11 04:04:16.040,pyzk how to get the result of live capture,": 1495 : 2020-02-11 11:55:00 (1, 0) -Here is my sample result, but when I try to split it I get this error: -Process terminate : 'Attendance' object has no attribute 'split' -In the documentation it says -print (attendance) # Attendance object -How to access it?","found the solution -I checked the github repository of pyzk, looked for the attendance class, and found all the objects being returned by live_capture. Thank you :)",1.2,True,1,6543 -2020-02-11 08:29:03.657,which python vs PYTHONPATH,"If I type in which python I get: /home/USER/anaconda3/bin/python -If I type in echo $PYTHONPATH I get: /home/USER/terrain_planning/devel/lib/python2.7/dist-packages:/opt/ros/melodic/lib/python2.7/dist-packages -Should that not be the same? And is it not better to set it: usr/lib/python/ -How would I do that? Add it to the PYTHONPATH or set the PYTHONPATH to that? But how to set which python?","You're mixing 2 environment variables: - -PATH is where which looks up executables when they're accessed by name only. This variable is a list (colon/semi-colon separated depending on the platform) of directories containing executables. Not python specific. which python just looks in this variable and prints the full path -PYTHONPATH is a python-specific list of directories (colon/semi-colon separated like PATH) where python looks for packages that aren't installed directly in the python distribution.
The name & format is very close to the system/shell PATH variable on purpose, but it's not used by the operating system at all, just by python.",1.2,True,1,6544 -2020-02-11 08:37:01.607,Add a new column to multiple .csv and populate with filename,"I am new to python and I have a folder with 15 excel files and I am trying to rename a specific column in each file to a standard name. For instance, I have columns named ""name, and server"" in different files but they contain the same information, so I need to rename them to a standard name like "" server name"" and I don't know how to start","If the position of the columns is the same across all excel files, you can iterate over all 15 excel files, locate the position of the column and replace the text directly. -Alternatively, you can iterate over all the files via read_xls (or read_csv depending on your context), reading them as dataframes, replacing the necessary column name, and overwriting the file. Below is a reference syntax. -df.rename(columns={ df.columns[1]: ""your value"" }, inplace = True)",1.2,True,1,6545 -2020-02-11 13:48:39.953,Setting global JsonEncoder in Python,"Basically, I'm fighting with the age-old problem that Python's default json encoder does not support datetime. However all the solutions I can find call json.dumps and manually pass the ""proper"" encoder on each invocation. And honestly, that can't be the best way to do it. Especially if you want to use a wrapper like jsonify to set up your response object properly, where you can't even specify these parameters. -So: long story short: how do I override the global default encoder in Python's JSON implementation to a custom one that actually does support the features I want? -EDIT: ok so I figured out how to do this for my specific use case (inside Flask). You can do app.json_encoder = MyCustomJSONEncoder there.
However, how to do this outside of flask would still be an interesting question.","Unfortunately, I could not find a way to set default encoders or decoders for the json module. -So the best way is to do what flask does, that is, wrap the calls to dump or dumps and provide a default in that wrapper.",0.0,False,1,6546 -2020-02-11 14:46:10.263,How can I use radish with Pycharm to have behave step autocomplete,"Note: radish is a ""Gherkin-plus"" framework: it adds Scenario Loops and Preconditions to the standard Gherkin language, which makes it more friendly to programmers. -So how can I use it, or use another method, to get Gherkin step autocomplete with Pycharm? -Thanks","I have solved this problem by buying the professional version of PyCharm; autocomplete is not available for the Community version :(",1.2,True,1,6547 -2020-02-11 18:40:45.797,Estimating Dataframe memory usage from file sizes,"If I have a list of files in a directory, is it possible to estimate the memory that would be taken up by reading or concatenating the files using pd.read_csv(file) or pd.concat([df1, df2])? -I would like to break these files up into concatenation 'batches' where each batch will not exceed a certain memory usage so I do not run into local memory errors. -Using os.path.getsize() will allow me to obtain the file sizes and df.memory_usage() will tell me how much memory the dataframe will use once it's already read in, but is there a way to estimate this with just the files themselves?","You could open each CSV, read only the first 1000 lines into a DataFrame, and then check memory usage. Then scale the estimated memory usage by the number of lines in the file. -Note that memory_usage() isn't accurate with default arguments, because it won't count strings' memory usage. You need memory_usage(deep=True), although that might overestimate memory usage in some cases.
But better to overestimate than underestimate.",0.0,False,1,6548 -2020-02-12 07:09:03.673,How do I find correlation between time events and time series data in python?,"I have two different excel files. One of them includes time series data (268943 accident time rows) as below -The other file contains values for 14 workers measured daily from 8 to 17 over 4 months (all data merged in one file) -I am trying to understand the correlation between accident times and values (hourly from 8 to 17, daily from Monday to Friday, and monthly) -Which statistical method fits (normalized auto- or cross-correlation) and how can I do that? -Generally, in other questions, correlation analysis is performed between two time-series-based values, but I think this is a little bit different. Also, here the times are different. -Thanks in advance.","I think the accident times and the blood sugar levels are not coming from the same source, and so I think it is not possible to draw a correlation between these two separate datasets. If you would like to assume that the blood sugar levels of all 14 workers reflect those of the workers in the accident dataset, that is a different story. But what if those who had accidents had a significantly different blood sugar level profile than the rest, and what if your tiny dataset of 14 workers does not comprise such examples? I think the best you may do is to graph the blood sugar levels of your 14 worker dataset and also similarly analyze the accident dataset separately, and try to see visually whether there is any correlation here.",0.6730655149877884,False,1,6549 -2020-02-12 13:25:20.013,How to get full path for any (including local) function in python?,"f""{f.__module__}.{f.__name__}"" doesn't work because function f can be local, eg inside another function. We need to add some kind of marker (..) in the path to specify that this function is local.
But how do we determine when we need to add this marker?",Use f.__qualname__ instead of __name__.,1.2,True,1,6550 -2020-02-12 14:33:31.370,How to use R models in Python,"I have been working on an algorithm trading project where I used R to fit a random forest using historical data while the real-time trading system is in Python. -I have fitted a model I'd like to use in R and am now wondering how I can use this model for prediction purposes in the Python system. -Thanks.","There are several options: -(1) Random Forest is a well researched algorithm and is available in Python through scikit-learn. Consider implementing it natively in Python if that is the end goal. -(2) If that is not an option, you can call R from within Python using the Rpy2 library. There is plenty of online help available for this library, so just do a google search for it. -Hope this helps.",0.3869120172231254,False,1,6551 -2020-02-12 21:27:39.690,"How many users can SQLite handle, Django","I have a Django application, which I hosted on pythonanywhere. For the database, I have used SQLite(default). -So I want to know how many users my application can handle? -And what if two users submit the registration form or make a post at the same time, will my application crash?","SQLite supports multiple users, however it locks the database while a write operation is being executed. -In other words, concurrent writes cannot be handled by this database, so it is not recommended. -You can use PostgreSQL or MySQL as an alternative.",0.0,False,1,6552 -2020-02-13 03:37:40.343,How can I cycle through items in a DynamoDB table?,"How can I cycle through items in a DynamoDB table? -That is, if I have a table containing [A,B,C], how can I efficiently get item A with my first call, item B with my second call, item C with my third call and item A again with my fourth call, repeat? -This table could in the future expand to include D, E, F etc and I would like to incorporate the new elements into the cycle.
-The current way I am doing it is giving each item an attribute ""seen"". We scan the whole table, find an element that's not ""seen"" and put it back as ""seen"". When everything has been ""seen"", make all elements not ""seen"" again. This is very expensive.","I think the simplest option is probably: - -use scan with Limit=1 and do not supply ExclusiveStartKey, this will get the first item -if an item was returned and LastEvaluatedKey is present in the response, then re-run scan with ExclusiveStartKey set to the LastEvaluatedKey of the prior response and again Limit=1, repeat step 2 until no item returned or LastEvaluatedKey is absent -when you get zero items returned, you've hit the end of the table, goto step 1 - -This is an unusual pattern and probably not super-efficient, so if you can share any more about what you're actually trying to do here, then we might be able to propose better options.",1.2,True,2,6553 -2020-02-13 03:37:40.343,How can I cycle through items in a DynamoDB table?,"How can I cycle through items in a DynamoDB table? -That is, if I have a table containing [A,B,C], how can I efficiently get item A with my first call, item B with my second call, item C with my third call and item A again with my fourth call, repeat? -This table could in the future expand to include D, E, F etc and I would like to incorporate the new elements into the cycle. -The current way I am doing it is giving each item an attribute ""seen"". We scan the whole table, find an element that's not ""seen"" and put it back as ""seen"". When everything has been ""seen"", make all elements not ""seen"" again. This is very expensive.","The efficient way to return items that haven't been seen would be to have an attribute of seen=no included when inserted. Then you could have a global secondary index over that attribute which you could then Query(). -There isn't an efficient way to reset all the seen=yes attributes back to no. 
Scan() and Query() would both end up returning the entire table and you'd end up updating records one by one. That will not be fast nor cheap with a large table. -EDIT -Once all the records have seen=""yes"" and you want to reset them back to seen=""no"" A query on the GSI suggested above will work exactly like a scan...every record will have to be read and then updated. -If you have 1M records, each about 1K, and you want to reset them...you're going to need -250K reads (since you can read 4 records with a single 4KB RCU) -1M writes",0.0,False,2,6553 -2020-02-13 12:00:17.947,Python: os.getcwd() randomly fails in mounted network drive,"I'm on Debian using python3.7. I have a network drive that I typically mount to /media/N_drive with dir_mode=0777 and file_mode=0777. I generally have no issues with reading/writing files in this network drive. -Occasionally, especially soon after mounting the drive, if I try to run any Python script with os.getcwd() (including any imported libraries like pandas) I get the error FileNotFoundError: [Errno 2] No such file or directory. If I cd up to the local drive (cd /media/) the script runs fine. -Doing some reading, it sounds like this error indicates that the working directory has been deleted. Yet I can still navigate to the directory, create files, etc. when I'm in the shell. It only seems to be Python's os.getcwd() that has problems. -What is more strange is that this behavior is not predictable. Typically if I wait ~1 hour after mounting the drive the same script will run just fine. -I suspect this has something to do with the way the drive is mounted maybe? Any ideas how to troubleshoot it?","To me, it seems a problem with the mount, e.g. the network disk will be disconnected, and reconnected. So your cwd is not more valid. Note: cwd is pointing to a disk+inode, it is not a name (which you will see). So /media/a is different to /media/a after a reconnection. 
-If you are looking for how to solve the mounting, you are in the wrong place. Try the Unix & Linux sister site, or Serverfault (also a sister site). -If you are looking for how to solve it programmatically: save the cwd at the beginning of the script and use os.path.join() at every path access, so that you force absolute paths instead of relative paths; then you should end up at the correct location. This is not safe, though, if you happen to read a file during a disconnection.",0.3869120172231254,False,1,6554 -2020-02-13 14:20:37.003,Best practice for getting data from Django view into JS to execute on page?,"I have been told it is 'bad practice' to return data from a Django view and use those returned items in Javascript that is loaded on the page. -For example: if I was writing an app that needed some extra data to load/display a javascript based graph, I was told it's wrong to pass that data directly into the javascript on the page from a template variable passed from the Django view. -My first thought: - -Just get the data the graph needs in the django view and return it in a context variable to be used in the template. Then just reference that context variable directly in the javascript in the template. - -It should load the data fine - but I was told that is the wrong way. -So how is it best achieved? -My second thought: - -Spin up Django Rest Framework and create an endpoint where you pass any required data to and make an AJAX request when the page loads - then load the data and do the JS stuff needed. - -This works, except for one thing: how do I get the variables required for the AJAX request into the AJAX request itself? -I'd have to get them either from the context (which is the 'wrong way') or get the parameters from the URL. Is there any easy way to parse the data out of the URL in JS? It seems like a pain in the neck just to get around not utilizing the view for the data needed and accessing those variables directly in the JS.
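The "save the cwd at the beginning and force absolute paths" advice from the os.getcwd() answer can be sketched in a few lines; `BASE_DIR` and `resolve` are illustrative names, not part of any library.

```python
import os

# Capture the working directory once, at import time, before the
# network mount has a chance to drop and reconnect.
BASE_DIR = os.getcwd()

def resolve(relative_path, base=BASE_DIR):
    """Return an absolute path anchored at the saved base directory,
    instead of relying on the process's (possibly stale) cwd later."""
    return os.path.join(base, relative_path)
```

Every file access then goes through `resolve("subdir/file.txt")`, so a later reconnection of the mount cannot silently change which directory relative paths are interpreted against.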
-So, is it really 'bad practice' to pass data from the Django view and use it directly in the Javascript? -Are both methods acceptable? -What is the Django-appropriate way to get data like that into the Javascript on a given page/template?","Passing data directly is not always the wrong way to go. JS is there so you can execute code when everything else is ready. So when they tell you it's the wrong way to pass data directly, it's because there is no point in making the page and data heavier than they should be before JS kicks in. -BUT it's okay to pass the essential data so your JS code knows what it has to do. To make it more clear, let's look into your case: -You want to render a graph. Graphs are sometimes heavy to render and can make the first render slow. And most of the time, graphs are not very useful without the extra context that your page provides. So in order to make your web page load faster, you let JS load your graph after your webpage has been rendered. And if you're going to wait anyway, then there is no point in passing the extra data up front, because it makes the page heavier, slows down the initial render, and takes time to parse and convert that data to JSON objects. -By removing the data and letting JS load it in the background, you make your page smaller and faster to render. So while a user is reading the context needed for your graph, JS will fetch the data needed and render the graph. This gives your web page a faster initial render. -So in general: -When to pass data directly: - -When the initial data is necessary for JS to do what it has to (configs, defaults, etc). -When the time difference matters a lot and you can't wait too long for an extra request to complete the render. -When the data is very small. - -When not to pass data directly: - -When rendering the extra data takes time anyway, so why not get the data later too? -When the data size is big. -When you need to render something as fast as possible.
-When there are some heavy processes needed for that data. -When JS can make your data size smaller (decide what kind of data should be passed exactly, using options that are only accessible by JS.)",1.2,True,1,6555 -2020-02-13 23:49:41.500,Interpreter won't show in Python 3.8.1,"I recently downloaded Python for the first time, and when I load into PyCharm to create a new project and it asks to select an interpreter, Python doesn't show up, even when I click the plus sign and search through all my files. It doesn't show even though I have the latest Python version installed, and I am on Windows 10. I tried deleting both programs and redownloading them, but that doesn't seem to work either. The answer may be obvious (sorry, I'm a beginner), and looking at videos didn't help either.","You have to navigate to the folder where Python is downloaded and just select it there. -Try the following path: C:\Users\YourName\AppData\Local\Programs\Python\Python38-32\python.exe",1.2,True,1,6556 -2020-02-14 01:43:31.803,How did scipy ver 0.18 scipy.interpolate.UnivariateSpline deal with values not strictly increasing?,"I have a program written in python 2.7.5 / scipy 0.18.1 that is able to run scipy.interpolate.UnivariateSpline with arrays that are non-sequential. When I try to run the same program in python 2.7.14 / scipy 1.0.0 I get the following error: -File ""/usr/local/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py"", line 176, in init - raise ValueError('x must be strictly increasing') -Usually I would just fix the arrays to remove the non-sequential values. But in this case I need to reproduce the exact same solution produced by the earlier version of python/scipy. Can anyone tell me how the earlier code dealt with the situation where the values were not sequential?",IIRC this was whatever FITPACK (the fortran library the univariatespline class wraps) was doing.
So the first stop would be to remove the check from your local scipy install and see if this does the trick,1.2,True,1,6557 -2020-02-14 09:01:36.233,Send mail via Python logging in with Windows Authentication,"I've a conceptual doubt, I don't know if it's even possible. -Assume I log on to a Windows machine with an account (let's call it AccountA from UserA). However, this account has access to the mail account (Outlook) of UserA and of another fictional user (UserX, without any password; you log in thanks to Windows authentication), shared by UserA, UserB and UserC. -Can I send a mail from UserA using the account of UserX via Python? If so, how shall I do the log in? -Thanks in advance","An interesting feature of Windows Authentication is that it uses the well-known Kerberos protocol under the hood. In a private environment, that means if a server trusts the Active Directory domain, you can pass the authentication of a client machine to that server provided the service is Kerberized, even if the server is a Linux or Unix box and is not a domain member. -It is mainly used for web servers in corporate environments, but could be used for any Kerberized service. Postfix, for example, is known to accept this kind of authentication. - -If you want to access an external mail server, you will have to store the credential in plain text on the client machine, which is bad. An acceptable way would be to use a file only readable by the current user (live protection) in an encrypted folder (at-rest protection).",1.2,True,1,6558 -2020-02-14 17:19:50.673,How to switch two words around in file document in python,"I was wondering how to switch two words around in a document file in Python. Example: I want to switch the word motorcycle to car, and car to motorcycle. -The way I'm doing it, all the words motorcycle change to car, and because car is then being switched to motorcycle, it gets switched back to car.
Hopefully that makes sense.","First, replace every motorcycle with a placeholder word that occurs nowhere in the text and does not contain either target word (note that carholder contains car, so it would itself be clobbered by step two). -Second, replace every car with motorcycle. -Third, replace every placeholder with car. -That's it.",0.6730655149877884,False,1,6559 -2020-02-15 11:54:26.760,umqtt.robust on Wemos,"I am trying to install micropython-umqtt.robust on my Wemos D1 mini. -The way I tried this is as follows. -I use the Thonny editor. - -I have connected the Wemos to the internet. -In the REPL I type: -import upip -upip.install('micropython-umqtt.simple') -I get the following error: Installing to: /lib/ -Error installing 'micropython-umqtt.simple': Package not found, packages may be partially installed -upip.install('micropython-umqtt.robust') -I get the following error: Error installing 'micropython-umqtt.robust': Package not found, packages may be partially installed - -Can umqtt be installed on the Wemos D1 mini? If yes, how do I do this?","I think the MicroPython build available from micropython.org already bundles MQTT, so there is no need to install it with upip. Try this directly from the REPL: -from umqtt.robust import MQTTClient -or -from umqtt.simple import MQTTClient -and start using it from there: -mqtt = MQTTClient(id, server, user, password)",1.2,True,2,6560 -2020-02-15 11:54:26.760,umqtt.robust on Wemos,"I am trying to install micropython-umqtt.robust on my Wemos D1 mini. -The way I tried this is as follows. -I use the Thonny editor. - -I have connected the Wemos to the internet. -In the REPL I type: -import upip -upip.install('micropython-umqtt.simple') -I get the following error: Installing to: /lib/ -Error installing 'micropython-umqtt.simple': Package not found, packages may be partially installed -upip.install('micropython-umqtt.robust') -I get the following error: Error installing 'micropython-umqtt.robust': Package not found, packages may be partially installed - -Can umqtt be installed on the Wemos D1 mini? If yes, how do I do this?","Thanks for your help Reilly, -The way I solved it is as follows.
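The three-step placeholder swap described in the word-switching answer can be written as a small helper. `swap_words` is an illustrative name, and the `"\x00"` placeholder is one arbitrary choice of a string that will not occur in normal text and contains neither target word.

```python
def swap_words(text, a, b, placeholder="\x00"):
    """Swap every occurrence of `a` and `b` using the three-step trick:
    a -> placeholder, b -> a, placeholder -> b.

    Caveat: str.replace works on substrings, so e.g. swapping "car"
    would also touch "carpet"; word-boundary regexes fix that if needed.
    """
    return (text.replace(a, placeholder)
                .replace(b, a)
                .replace(placeholder, b))
```

For example, `swap_words("my car, my motorcycle", "motorcycle", "car")` gives `"my motorcycle, my car"`.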
With a bit more understanding of MQTT and MicroPython, I found that the only thing that happens when you try to install umqtt.simple and umqtt.robust is that a new directory umqtt is created in the lib directory of your Wemos. Inside this directory it installs two files, robust.py and simple.py. While trying to install them I kept getting error messages, but I found a GitHub page hosting these two files, so I copied them, made the umqtt directory within the lib directory, and pasted the two copied files into it. Now I can use MQTT on my Wemos.",0.3869120172231254,False,2,6560 -2020-02-16 04:54:39.390,how to add python in xilinx vitis,"I have implemented a Zynq ZCU102 board in Vivado and I want to use the final "".XSA"" file in Vitis, but after creating a new platform, its languages are C and C++, while the documentation says that Vitis supports Python. -My question is: how can I add Python to my Vitis platform? -Thank you",Running Python on an FPGA needs an operating system. I had to run a Linux OS on my FPGA using PetaLinux and then run Python code on it.,1.2,True,1,6561 -2020-02-16 14:44:02.093,"How to create ""add to favorites"" functionality using Django Rest Framework","I just can't find any information about implementing an add-to-favorites system for registered users. -The project has a Post model. It has a couple of fields of type String, the author field, which indicates which user made the POST request, etc. -But how do I make it so that a user can add a Post to his “favorites”, so that later you can get a JSON response with all the posts he has added? And, correspondingly, so that posts can be removed from favorites. -Are there any ideas?",You can add a favorite_posts field (many-to-many) in your Author model.,-0.3869120172231254,False,1,6562 -2020-02-17 04:26:06.590,"how to customize hr,month,year in python date time module?","How can I customize the hours, days, and months of the datetime module in Python?
-A day of 5 hours only, a month of 20 days only, and a year of 10 months only, using the datetime module.","I agree with @TimPeters. This just doesn't fit what datetime does. -For your needs, I would be inclined to start my own class from scratch, as that is pretty far from datetime. -That said...you could look into monkeypatching datetime...but I would recommend against it. It's a pretty complex beast, and changing something as fundamental as the number of hours in a day will blow away unknown assumptions within the code, and would certainly turn its unit tests upside down. -Building your own from scratch is my advice.",0.0,False,1,6563 -2020-02-17 06:51:16.120,Bad interpreter file not found error when running flask commands,"Whenever I run a flask command in my project, I get an error of the form zsh: (correct file path)/venv/bin/flask: bad interpreter: (incorrect, old file path)/venv/bin/python3. I believe the error is due to the file paths not matching, and the second file path no longer existing. I changed the name of the directory for my project when I changed the name of the project, but I don't know how to change the path that flask searches for the interpreter in. -Thanks in advance. -Edit: I just tried going into the flask file at (correct file path)/venv/bin. I saw that it still had #!(incorrect, old file path)/venv/bin/python3 at the top. I tried changing this to #!(correct file path)/venv/bin/python3, but the same error as before persisted, as well as the flask app not being able to find the flask_login module, which it was not having issues with before.","Ok, I figured out how to fix it. I had to go into my (correct file path)/venv/bin/flask file and change the file path after the #! to the correct file path. I had to do the same for pip, pip3, and pip3.7, which were all in the same location as the flask file. Then I had to reinstall the flask_login package.
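A from-scratch class of the kind the custom-calendar answer recommends could look like the sketch below (5-hour days, 20-day months, 10-month years). The class name `CustomDateTime` and its API are invented for illustration; internally it just counts hours, which keeps the arithmetic trivial.

```python
HOURS_PER_DAY = 5
DAYS_PER_MONTH = 20
MONTHS_PER_YEAR = 10
HOURS_PER_MONTH = HOURS_PER_DAY * DAYS_PER_MONTH    # 100
HOURS_PER_YEAR = HOURS_PER_MONTH * MONTHS_PER_YEAR  # 1000

class CustomDateTime:
    """Calendar with 5-hour days, 20-day months and 10-month years,
    stored internally as a count of hours since 'year 0'."""

    def __init__(self, year=0, month=0, day=0, hour=0):
        self.total_hours = (year * HOURS_PER_YEAR + month * HOURS_PER_MONTH
                            + day * HOURS_PER_DAY + hour)

    def add_hours(self, n):
        # Arithmetic is just integer addition on the hour count.
        return CustomDateTime(hour=self.total_hours + n)

    @property
    def parts(self):
        """Decompose the hour count back into (year, month, day, hour)."""
        h = self.total_hours
        year, h = divmod(h, HOURS_PER_YEAR)
        month, h = divmod(h, HOURS_PER_MONTH)
        day, hour = divmod(h, HOURS_PER_DAY)
        return (year, month, day, hour)
```

So `CustomDateTime(year=1, month=2, day=3, hour=4).parts` round-trips to `(1, 2, 3, 4)`, and adding one hour to a day's last hour rolls over to the next day automatically.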
This fixed everything.",0.0,False,1,6564 -2020-02-17 13:37:36.463,Implementing saved python regression model to react expo application,"I have a Python regression model that predicts one's level of happiness based on user-input data; I have trained and tested it using Python. -But I'm using React Native to create my mobile application. -My mobile application will take in the user-input data needed and will output a prediction of their level of happiness. Does anyone have an idea on how to implement this? Any advice would be appreciated! I lack the experience, but have an interest in this area. I'm still learning, so please help me out :)",You need to create a Python API and call it from the mobile application by passing the input features. The Python API will return the forecasted value. This API will load the regression model and make a forecast on the given input features. I hope it will help.,0.3869120172231254,False,1,6565 -2020-02-18 14:37:41.817,I have set up a small flask webpage but it only runs on localhost while I would like to make it run on my local network python3.7,"I have set up a small flask webpage but it only runs on localhost while I would like to make it run on my local network, how do I do that?","Just my 2 cents on this, I just did some research, there are many suggestions online... -Add a parameter to your app.run(): by default it runs on localhost, so change it to app.run(host='0.0.0.0') to run on your machine's IP address. -Another thing you could do is to use the flask executable to start up your local server, and then you can use flask run --host=0.0.0.0 to change the default IP, which is 127.0.0.1, and open it up to non-local connections. -The simplest option is the app.run() method above.
-Hope it helps a little, if not good luck :)",1.2,True,1,6566 -2020-02-18 23:36:54.463,Do .py Python files contain metadata?,".doc files, .pdf files, and some image formats all contain metadata about the file, such as the author. -Is a .py file just a plain text file whose contents are all visible once opened with a code editor like Sublime, or does it also contain metadata? If so, how does one access this metadata?","On Linux and most Unixes, .py's are just text (sometimes unicode text). -On Windows and Mac, there are cubbyholes where you can stash data, but I doubt Python uses them. -.pyc's, on the other hand, have at least a little metadata stuff in them - or so I've heard. Specifically: there's supposed to be a timestamp in them, so that if you copy a filesystem hierarchy, python won't automatically recreate all the .pyc's on import. There may or may not be more.",1.2,True,1,6567 -2020-02-19 12:51:11.683,Errors such as: 'Solving environment: failed with initial frozen solve. Retrying with flexible solve' & 'unittest' tab,"I am working with spyder - python. I want to test my codes. I have followed the pip install spyder-unittest and pip install pytest. I have restarted the kernel and restarted my MAC as well. Yet, Unit Testing tab does not appear. Even when I drop down Run cannot find the Run Unit test. Does someone know how to do this?","So, I solved the issue by running the command: -conda config --set channel_priority false. -And then proceeded with the unittest download with the command run: -conda install -c spyder-ide spyder-unittest. -The first command run conda config --set channel_priority false may solve other issues such as: -Solving environment: failed with initial frozen solve. Retrying with flexible solve",1.2,True,1,6568 -2020-02-19 17:27:39.387,JupyterLab - python open() function results in FileNotFoundError,"I am trying to open an existing file in a subfolder of the current working directory. 
This is my command: -fyle = open('/SPAdes/default/{}'.format(file), 'r') -The file variable contains the correct filename, the folder structure is correct (working on macOS), and the file exists. -This command, however, results in this error message: -FileNotFoundError: [Errno 2] No such file or directory: [filename] -Does it have anything to do with the way JupyterLab works? How am I supposed to specify the folder structure on Jupyter? I am able to create a new file in the current folder, but I am not able to create one in a subfolder of the current one (results in the same error message). -The folder structure is recognized in the same Jupyter notebook by bash commands, but I am somehow not able to access subfolders using Python code. -Any idea as to what is wrong with the way I specified the folder structure? -Thanks a lot in advance.","There shouldn’t be a forward slash in front of SPAdes. -Paths starting with a slash exist high up in the file hierarchy. You said this is a sub-directory of your current working directory.",0.6730655149877884,False,1,6569 -2020-02-19 18:16:29.400,What's a good way to save all data related to a training run in Keras?,"I know how to do a few things already: - -Summarise a model with model.summary(). But this actually doesn't print everything about the model, just the coarse details. -Save a model with model.save() and load it with keras.models.load_model() -Get weights with model.get_weights() -Get the training history from model.fit() - -But none of these seem to give me a catch-all solution for saving everything from end to end so that I can 100% reproduce a model architecture, training setup, and results. -Any help filling in the gaps would be appreciated.","model.to_json() can be used to convert the model config into JSON format and save it as a .json file. -You can recreate the model from the JSON using model_from_json, found in keras.models. -Weights can be saved separately using model.save_weights. -Useful in checkpointing your model.
Note that model.save saves both of these together. Saving only the weights and loading them back is useful when you need to work with the variables used in defining the model. In that case, create the model using the code and do model.load_weights.",0.0,False,1,6570 -2020-02-20 07:22:20.357,continuous log file processing and extract required data using python,"I have to analyze a log file which is generated continuously 24*7. So, the data will be huge. I will have credentials for where the log file is generated. But how can I get that streaming data (I mean with any free tools or processes) so that I can use it in my Python code to extract some required information from that log stream? I will then have to prepare a real-time dashboard with that data. Please suggest some possibilities to achieve the above task.","Just a suggestion -You could try ELK: -ELK, short for Elasticsearch (ES), Logstash, and Kibana, is the most popular open source log aggregation stack. ES is a NoSQL database. Logstash is a log pipeline system that can ingest data, transform it, and load it into a store like Elasticsearch. Kibana is a visualization layer on top of Elasticsearch. -or -you could use MongoDB to handle such a huge amount of data: -MongoDB is an open-source document database and a leading NoSQL database. MongoDB stores data in a JSON format. Process the logs, store them in a JSON format, and retrieve them for any further use. -Basically it's not a simple question to answer; it depends on the scenario.","I'm trying to create a function on Azure Function Apps that is given back a PDF and uses the python tika library to parse it. -This setup works fine locally, and I have the python function set up in Azure as well, however I cannot figure out how to include Java in the environment? -At the moment, when I try to run the code on the server I get the error message - -Unable to run java; is it installed?
- Failed to receive startup confirmation from startServer.","So this isn't possible at this time. To solve it, I abstracted the tika code out into a Java Function app and used that instead.",1.2,True,1,6572 -2020-02-20 14:06:16.640,When python is referred to as single threaded why does it not have the same pitfalls in processing as something like node.js?,"I've been doing Node programming for a while and one thing I'm just very tired of is having to worry about blocking the event loop with anything that requires lots of cpu time. I'd also like to expand my language skills to something more focused on machine learning, so python seemed like a good choice based on what I've read. -However, I keep seeing that python is also single threaded, but I get the feeling this wording is being used in a different way than how it's usually used in node. Python is the go-to language for a lot of heavy data manipulation so I can't imagine it blocks the same way node does. Can someone with more familiarity with python (and some with node) explain how their processing of concurrent requests differs when 1 request is cpu intensive?","First of all, Python is not single-threaded, and its standard library contains everything required to manage threads. It works fine for IO-bound tasks, but not for CPU-bound tasks because of the Global Interpreter Lock (GIL), which prevents more than one thread from executing Python code at the same time. -For data processing tasks, several modules exist that add low-level (C code level) processing and internally release the GIL to be able to use multi-core processing. The most used modules here are scipy and numpy (scientific and numeric processing) and pandas, an efficient data frame processing tool using numpy arrays as its underlying containers. -Long story short: for IO-bound tasks, Python is great. If your problem is vectorizable through numpy or pandas, Python is great.
If your problem is CPU intensive and neither numpy nor pandas will be used, Python is not at its best.",0.3869120172231254,False,1,6573 -2020-02-23 01:42:23.757,subprocess.check_call command called not using threads,"I'm running the following command using subprocess.check_call -['/home/user/anaconda3/envs/hum2/bin/bowtie2-build', '-f', '/media/user/extra/tmp/subhm/sub_humann2_temp/sub_custom_chocophlan_database.ffn', '/media/user/extra/tmp/subhm/sub_bowtie2_index', ' --threads 8'] -But for some reason, it ignores the --threads argument and runs on one thread only. I've checked outside of Python that with the same command the threads are launched. This only happens when calling from subprocess; any idea on how to fix this? -thanks","You are passing '--threads 8' and not '--threads', '8'. Alternatively it could be '--threads=8', but I don't know the command.",1.2,True,1,6574 -2020-02-23 14:42:25.190,How to change the name of a mp4 video using python,I just want to know how I can change the name of an mp4 video using Python. I tried looking on the internet but could not find it. I am a beginner in Python.,"You can use the os module to rename it as follows... - -import os -os.rename('full_file_path_old', 'new_file_name_path')",1.2,True,1,6575 -2020-02-23 16:13:40.300,Converting depth map to pointcloud on Raspberry PI for realtime application,"I am developing a robot based on StereoPI. I have successfully calibrated the cameras and obtained a fairly accurate depth map. However, I am unable to convert my depth map to a point cloud so that I can obtain the actual distance of an object. I have been trying to use cv2.reprojectImageTo3D, but with no success. May I ask if there is a tutorial or guide which teaches how to convert a disparity map to a point cloud? -I am trying very hard to learn and find reliable sources, but to no avail. So, thank you very much in advance.","By calibrating your cameras you compute their interior orientation parameters (IOP - or intrinsic parameters).
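The argv mistake in the subprocess question above (passing `' --threads 8'` as one list element) can be demonstrated without running bowtie2 at all; `shlex.split` from the standard library does the shell-style splitting for you. The list contents here are shortened stand-ins for the question's real paths.

```python
import shlex

# Wrong: the flag and its value arrive in the program's argv as the
# single string ' --threads 8', which bowtie2-build does not recognize.
wrong = ['bowtie2-build', ' --threads 8']

# Right: each flag and value is its own argv element. shlex.split
# turns a shell-style string into exactly that.
right = ['bowtie2-build'] + shlex.split('--threads 8')
```

`right` is `['bowtie2-build', '--threads', '8']`, which is the shape `subprocess.check_call` expects when no shell is involved.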
To compute the XYZ coordinates from the disparity you also need the exterior orientation parameters (EOP). -If you want your point cloud relative to the robot position, the EOP can be simplified; otherwise, you need to take into account the robot's position and rotation, which can be retrieved with a GNSS receiver and an inertial measurement unit (IMU). Note that it is very likely that such data need to be processed with a Kalman filter. -Then, assuming you have both (i) the IOP and EOP of your cameras, and (ii) the disparity map, you can generate the point cloud by intersection. There are several ways to accomplish this; I suggest using the collinearity equations.",0.0,False,1,6576 -2020-02-25 03:32:31.883,What is the best way to implement Django 3 Modal forms?,"I would appreciate it if somebody could give the main idea of how to handle form submission/retrieval in Bootstrap modals. I saw many examples on Google but it is still ambiguous to me. Why is it required to have a separate HTML file for the modal-form template? Where will SQL commands be written? What is the flow in submission/retrieval forms (I mean the steps)? What is the best practice to implement this kind of form? I'm fairly new to Django, please be nice and helpful.","There is no need for a separate file for the modal form. Django's MVT structure is followed whenever forms are used, which makes interaction with the template straightforward. Moreover, if you go through the Django documentation, you will pick this up easily. -For submission, set the form's action URL; submitting will call that view, which validates the Django form.",0.0,False,1,6577 -2020-02-25 03:53:05.757,How do we calculate the accuracy of a multi-class classifier using neural network,"When the outputs (predictions) are the probabilities coming from a Softmax function, and the training target is one-hot encoded, how do we compare those two different kinds of data to calculate the accuracy?
-(the number of training data classified correctly) / (the number of total training data) * 100%","Usually, we assign the class label with the highest probability in the output of the softmax function as the prediction, and count it as correct when it matches the index of the 1 in the one-hot target.",1.2,True,1,6578 -2020-02-25 16:58:00.407,"When switching to zsh shell on mac terminal from bash, how do you update the base python version?","Mac has recently updated its terminal shell to Zsh from bash. As a Python programmer, I'd like to have consistency in Python versions across all the systems, including terminals & IDE. -On a bash shell, to update the Python version in the terminal to 3.8.1, I had followed the below process -nano ~/.bash_profile -alias python=python3 -ctrl + x -y -enter -This enabled me to update the Python version from 2.7.6 to 3.8.1. However, repeating the same steps for the zsh shell didn't work out. I tried a tweak of the above process, and am somehow stuck with 3.7.3. -Steps followed: -which python3 #Location of the python3.8.1 terminal command file is found. Installed it. -python --version #returned python 3.7.3 -PS: I am an absolute beginner in Python, so please consider that in your response. I hope I am not wasting your time.","It is actually not recommended to update the default Python executable system-wide, because some applications depend on it. -However, you can use venv (a virtual environment), or, to use another version of Python within your zsh, you can put an alias like python='python3' in your ~/.zshrc and source it. -Hope that helps. -Greetings",0.3869120172231254,False,1,6579 -2020-02-25 19:43:50.643,How can I darken/lighten a RGB color,"So I'm trying to make a color gradient, from a color to completely black, as well as from a color to completely white. -So say I have (175, 250, 255) and I want to darken that color exactly 10 times to end at (0, 0, 0), how could I do this?
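The accuracy comparison described in the multi-class question above (softmax probabilities vs one-hot targets) boils down to comparing argmax indices; `accuracy` below is an illustrative name, written in plain Python so it works on lists of lists just as well as on framework tensors converted to lists.

```python
def accuracy(probs, one_hot_targets):
    """Percentage of samples whose highest-probability class matches
    the position of the 1 in the one-hot target vector."""
    correct = 0
    for p, t in zip(probs, one_hot_targets):
        # argmax of the softmax output vs index of the 1 in the target
        if p.index(max(p)) == t.index(1):
            correct += 1
    return correct / len(probs) * 100
```

For three samples where two argmaxes match their targets, this returns 66.67 (i.e. 2/3 * 100), matching the formula quoted in the question.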
-I'd also like to brighten the color, so I'd like to brighten it exactly 10 times and end at (255, 255, 255).","Many ways to solve this one. One idea would be to find the difference between your current value and the target value and divide that by 10. -So from (175, 250, 255) to (0, 0, 0) the difference is (175, 250, 255); divide that by ten to get the amount to subtract at each of the ten steps. So subtract (17.5, 25, 25.5) every step, rounding when needed.",0.0,False,1,6580 -2020-02-26 11:00:22.870,Django Queryset - Can I query at specific positions in a string for a field?,"I have a table field with entries such as e.g. 02-65-04-12-88-55. -Each position (separated by -) represents something. (There is no '-' in the database, that's just how it's displayed to the user). -Users would like to search by the entry's specific position. I am trying to create a queryset to do this but cannot figure it out. I could handle startswith and endswith, but the rest - I have no idea. -Another thought would be to split the string at '-' and then query each specific part of the field (if this is possible). -How can a user search the field's entry at, say, positions 0-1, 6-7, 10-11 and have the rest wildcarded and returned? -Is this possible? I may be approaching this wrong? Thoughts?","You could use a something__like='__-__-__-__-88-__' query, but it's likely not to be very efficient (since the database will have to scan through all rows to find a match). -If you need to do lots of these queries, it'd be better to split these out into actual fields (something_1, something_2, etc.)",0.0,False,1,6581 -2020-02-28 11:26:45.567,Python Script to compare du and df console outputs,"As part of a larger project, I'm currently writing a Python script that runs Linux commands in a vApp. -I'm currently facing an issue where, after working with a mounted iso, it may or may not unmount as expected.
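The difference-divided-by-steps idea from the RGB answer can be sketched as a small function; `gradient` is an illustrative name. Because each step adds exactly one tenth of the difference, the tenth step lands precisely on the target color.

```python
def gradient(color, target, steps=10):
    """Linear gradient from `color` toward `target`, returning the
    `steps` intermediate colors and ending exactly at `target`."""
    # Per-channel amount to move at each step (may be fractional).
    deltas = [(t - c) / steps for c, t in zip(color, target)]
    return [tuple(round(c + d * i) for c, d in zip(color, deltas))
            for i in range(1, steps + 1)]
```

Darkening is `gradient((175, 250, 255), (0, 0, 0))` and brightening is `gradient((175, 250, 255), (255, 255, 255))`; both lists have ten entries and end on the requested extreme.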
-To check the mount status, I want to run the df -hk /directory and du -sch /directory commands respectively, and compare the outputs. -If the iso is not unmounted, the df command should return a larger value than the du command, as df includes the mount size in its result while du does not. -I'm just wondering how I can compare these values, or if there is a better way for me to run this check in the first place.","Why don't you use /proc/mounts? -The first column is your block device, the second is the mountpoint. -If your mountpoint is not in /proc/mounts, nothing is mounted there.",0.3869120172231254,False,1,6582 -2020-02-28 16:42:15.147,Why no need to load Python formatter (black) and linter (pylint) and vs code?,"I am learning how to use VS code and in the process, I learnt about linting and formatting with ""pylint"" and ""black"" respectively. -Importantly, I have Anaconda installed as I often use conda environments for my different projects. I have therefore installed ""pylint"" and ""black"" into my conda environment. -My questions are as follows: - -If ""pylint"" and ""black"" are Python packages, why do they not need to be imported into your script when you use them? (i.e. ""import pylint"" and ""import black"" at the top of a Python script you want to run). I am very new to VS code, linting and formatting, so maybe I'm missing something obvious, but how does VS code know what to do when I select ""Run Linting"" or ""Format document"" in the command palette? Or is this nothing to do with VS code? - -I guess I am just surprised at the fact we don't need to import these packages to use them. In contrast you would always be using import for other packages (sys, os, or any other). - -I'm assuming if I used a different conda environment, I then need to install pylint and black again in it right?","Yes, black and pylint are only available in the conda environment you installed them in.
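The /proc/mounts check suggested in the du/df answer is easy to script; `is_mounted` is an illustrative name, and the optional `mounts_text` argument is an assumption added here so the parsing can be exercised without a real /proc (on a real Linux box you would omit it).

```python
def is_mounted(mountpoint, mounts_text=None):
    """Return True if `mountpoint` appears as the second column of
    /proc/mounts. Pass `mounts_text` to supply the file's content
    directly (handy for testing); by default the real file is read.

    Caveat: /proc/mounts escapes spaces in paths as \\040."""
    if mounts_text is None:
        with open('/proc/mounts') as f:
            mounts_text = f.read()
    # Each line: <device> <mountpoint> <fstype> <options> <dump> <pass>
    return any(line.split()[1] == mountpoint
               for line in mounts_text.splitlines() if line.strip())
```

So after attempting the unmount, `is_mounted('/directory')` gives a yes/no answer directly, with no need to compare df and du sizes at all.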
You can find them in the ""Scripts""-folder of your environment. -VS Code knows where to look for those scripts, I guess you can set which package is used for ""Run Linting"" or ""Format document"". -You only need to import python modules or functions that you want to use inside your python module. But that's not what you do.",0.0,False,1,6583 -2020-02-29 17:31:05.917,Dask progress during task,"With dask dataframe using -df = dask.dataframe.from_pandas(df, npartitions=5) -series = df.apply(func) -future = client.compute(series) -progress(future) -In a jupyter notebook I can see progress bar for how many apply() calls completed per partition (e.g 2/5). -Is there a way for dask to report progress inside each partition? -Something like tqdm progress_apply() for pandas.","If you mean, how complete each call of func() is, then no, there is no way for Dask to know that. Dask calls python functions which run in their own python thread (python threads cannot be interrupted by another thread), and Dask only knows whether the call is done or not. -You could perhaps conceive of calling a function which has some internal callbacks or other reporting system, but I don't think I've seen anything like that.",0.0,False,1,6584 -2020-03-02 02:33:41.653,Using a Decision Tree to build a Recommendations Application,"First of all, my apologies if I am not following some of the best practices of this site, as you will see, my home is mostly MSE (math stack exchange). -I am currently working on a project where I build a vacation recommendation system. The initial idea was somewhat akin to 20 questions: We ask the user certain questions, such as ""Do you like museums?"", ""Do you like architecture"", ""Do you like nightlife"" etc., and then based on these answers decide for the user their best vacation destination. 
We answer these questions based on keywords scraped from websites, and the decision tree we would implement would allow us to effectively determine the next question to ask a user. However, we are having some difficulties with the implementation. Some examples of our difficulties are as follows: -There are issues with granularity of questions. For example, to say that a city is good for ""nature-lovers"" is great, but this does not mean much. Nature could involve say, hot, sunny and wet vacations for some, whereas for others, nature could involve a brisk hike in cool woods. Fortunately, the API we are currently using provides us with a list of attractions in a city, down to a fairly granular level (for example, it distinguishes between different watersport activities such as jet skiing, or white water rafting). My question is: do we need to create some sort of hiearchy like: - -nature-> (Ocean,Mountain,Plains) (Mountain->Hiking,Skiing,...) - -or would it be best to simply include the bottom level results (the activities themselves) and just ask questions regarding those? I only ask because I am unfamiliar with exactly how the classification is done and the final output produced. Is there a better sort of structure that should be used? -Thank you very much for your help.","I think using a decision tree is a great idea for this problem. It might be an idea to group your granular activities, and for the ""nature lovers"" category list a number of different climate types: Dry and sunny, coastal, forests, etc and have subcategories within them. -For the activities, you could make a category called watersports, sightseeing, etc. It sounds like your dataset is more granular than you want your decision tree to be, but you can just keep dividing that granularity down into more categories on the tree until you reach a level you're happy with. It might be an idea to include images too, of each place and activity. 
Maybe even without descriptive text.",0.0,False,2,6585 -2020-03-02 02:33:41.653,Using a Decision Tree to build a Recommendations Application,"First of all, my apologies if I am not following some of the best practices of this site, as you will see, my home is mostly MSE (math stack exchange). -I am currently working on a project where I build a vacation recommendation system. The initial idea was somewhat akin to 20 questions: We ask the user certain questions, such as ""Do you like museums?"", ""Do you like architecture"", ""Do you like nightlife"" etc., and then based on these answers decide for the user their best vacation destination. We answer these questions based on keywords scraped from websites, and the decision tree we would implement would allow us to effectively determine the next question to ask a user. However, we are having some difficulties with the implementation. Some examples of our difficulties are as follows: -There are issues with granularity of questions. For example, to say that a city is good for ""nature-lovers"" is great, but this does not mean much. Nature could involve say, hot, sunny and wet vacations for some, whereas for others, nature could involve a brisk hike in cool woods. Fortunately, the API we are currently using provides us with a list of attractions in a city, down to a fairly granular level (for example, it distinguishes between different watersport activities such as jet skiing, or white water rafting). My question is: do we need to create some sort of hiearchy like: - -nature-> (Ocean,Mountain,Plains) (Mountain->Hiking,Skiing,...) - -or would it be best to simply include the bottom level results (the activities themselves) and just ask questions regarding those? I only ask because I am unfamiliar with exactly how the classification is done and the final output produced. Is there a better sort of structure that should be used? 
-Thank you very much for your help.","Bins and sub bins are a good idea, as is the nature, ocean_nature thing. -I was thinking more about your problem last night, TripAdvisor would be a good idea. What I would do is, take the top 10 items in trip advisor and categorize them by type. -Or, maybe your tree narrows it down to 10 cities. You would rank those cities according to popularity or distance from the user. -I’m not sure how to decide which city would be best for watersports, etc. You could even have cities pay to be top of the list.",0.0,False,2,6585 -2020-03-02 05:24:30.433,How to use an exported model from google colab in Pycharm,"I have a LSTM Keras Tensorflow model trained and exported in .h5 (HDF5) format. -My local machine does not support keras tensorflow. I have tried installing. But does not work. -Therefore, i used google colabs and exported the model. -I would like to know, how i can use the exported model in pycharm -Edit : I just now installed tensorflow on my machine -Thanks in Advance",You still need keras and tensorflow to use the model.,0.0,False,1,6586 -2020-03-02 08:41:18.990,How to make PyQt5 program starts like pycharm,"As the title says i want to know how to make PyQt5 program starts like pycharm/spyder/photoshop/etc so when i open the program an image shows with progress bar(or without) like spyder,etc",Sounds like you want a splash screen. QSplashScreen will probably be your friend.,1.2,True,1,6587 -2020-03-02 12:31:11.530,What is the point of using sys.exit (or raising SystemExit)?,"This question is not about how to use sys.exit (or raising SystemExit directly), but rather about why you would want to use it. - -If a program terminates successfully, I see no point in explicitly exiting at the end. -If a program terminates with an error, just raise that error. Why would you need to explicitly exit the program or why would you need an exit code?","Letting the program exit with an Exception is not user friendly. 
More exactly, it is perfectly fine when the user is a Python programmer, but if you provide a program to end users, they will expect nice error messages instead of a Python stacktrace which they will not understand. -In addition, if you use a GUI application (through tkinter or pyQt for example), the backtrace is likely to be lost, especially on Windows systems. In that case, you will set up error processing which will provide the user with the relevant information and then terminate the application from inside the error processing routine. sys.exit is appropriate in that use case.",1.2,True,1,6588 -2020-03-02 23:01:44.213,"VS Code Azure Functions: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available","Trying to deploy Azure Functions written in Python and it looks like the only option to do that is through VS Code. -I have the Python and Azure Functions extensions, and normally use PyCharm with the Anaconda interpreter. -I also have azure-functions-core-tools installed and calling ""func"" in PS works. -In VS Code I create a virtual environment as it suggests. But when trying to debug any Azure Function (using one of their templates for now) I get the error above. -As far as I understand it tries to install the ""azure-functions"" module as specified in the ""requirements.txt"" file and tries to do that with pip. pip works normally if I use it through the Anaconda prompt or with my global env python, but I have to use the virtual environment created by VS Code for this one. -Any suggestions on how to get through this? Thanks in advance.","Just solved the problem after wasting my whole valuable afternoon. The problem lies on the side of Anaconda. -As you described in your question, pip works normally (only) in your Anaconda prompt. Which means it doesn't work anywhere outside, whether in CMD or PowerShell (although pip and conda seem to work outside of the prompt, SSL requests somehow always get refused).
However, VS Code, when you simply press F5 instead of using func start command, uses an external PowerShell to call pip. No wonder it'll fail. -The problem can be solved, when you install Anaconda on Windows 10, by choosing to add Anaconda's root folder to PATH. This being said, Anaconda's installer strongly doesn't recommend choosing this option (conflicts with other apps blabla)... And if you try to install Anaconda through some package manager such as scoop, it'll install it without asking you for this detail, which is logical. -The ""fun"" part is philosophically Anaconda itself doesn't suggest using conda or pip command outside Anaconda Prompt, while other apps want and may have to do it the other way. Very very confusing and annoying.",0.3869120172231254,False,1,6589 -2020-03-03 11:36:54.743,wxPython wx.CallAfter(),I work with wxpython and threads in my project. I think that I didn't understand well how to use wx.CallAfter and when to us it. I read few thing but I still didn't got the point. Someone can explain it to me?,"In a nutshell, wx.CallAfter simply takes a callable and the parameters that should be passed to it, bundles that up into a custom event, and then posts that event to the application's pending event queue. When that event is dispatched the handler calls the given callable, passing the given parameters to it. -Originally wx.CallAfter was added in order to have an easy way to invoke code after the current and any other pending events have been processed. Since the event is always processed in the main UI thread, then it turns out that wx.CallAfter is also a convenient and safe way for a worker thread to cause some code to be run in the UI thread.",1.2,True,1,6590 -2020-03-03 18:37:00.133,How to use huggingface T5 model to test translation task?,"I see there exits two configs of the T5model - T5Model and TFT5WithLMHeadModel. I want to test this for translation tasks (eg. en-de) as they have shown in the google's original repo. 
Is there a way I can use this model from hugging face to test out translation tasks? I did not see any examples related to this on the documentation side and was wondering how to provide the input and get the results. -Any help appreciated","T5 is a pre-trained model, which can be fine-tuned on downstream tasks such as Machine Translation. So it is expected that we get gibberish when asking it to translate -- it hasn't learned how to do that yet.",0.2012947653214861,False,1,6591 -2020-03-03 19:05:01.693,Python: Create List Containing 10 Successive Integers Starting with a Number,"I want to know how to create a list called ""my_list"" in Python starting with a value in a variable ""begin"" and containing 10 successive integers starting with ""begin"". -For example, if begin = 2, I want my_list = [2,3,4,5,6,7,8,9,10,11]","You can simply use the list's extend method and the range function. Note that range(start, start + 10) yields exactly 10 integers. -start = 5 -my_list = [] -my_list.extend(range(start, start + 10)) -print(my_list)",0.0,False,1,6592 -2020-03-04 04:29:00.517,How to change resetting password of django,"I am learning to use django and my question is if it is possible to change the system to reset the users' password, the default system of sending a link by mail I do not want to use it, my idea is to send a code to reset the password, but I don't know how it should be done and if possible, I would also need to know if it's safe. -What I want is for the user who wants to recover his password to go to the recovery section, fill in his email and press send, which enables a field to put in the code that was sent to the mail. -I don't know how I should do it or is there a package for this? -Thank you very much people, greetings.","You can do this: when the user clicks on reset password, ask for the user's email id and verify that the email id provided is the same as what you have in the DB. If the email id matches you can generate an OTP, save it in the DB (for a specific time duration like 3 mins) and send it to the user's email id. Now the user enters the OTP.
If the OTP provided by the user matches the one you have in the DB, open the page where the user can enter a new password.",0.0,False,1,6593 -2020-03-04 15:31:36.817,How are threads different from processes in terms of how they are executed at the hardware level?,"I was wondering how threads are executed at the hardware level, like a process would run on a single processing core and make a context switch on the processor and the MMU in order to switch between processes. How do threads switch? Secondly when we create/spawn a new thread will it be seen as a new process would for the processor and be scheduled as a process would? -Also when should one use threads and when a new process? -I know I probably am sounding dumb right now, that's because I have massive gaps in my knowledge that I would like to fill. Thanks in advance for taking the time and explaining things to me. :)","Think of it this way: ""a thread is part of a process."" -A ""process"" owns resources such as memory, open file-handles and network ports, and so on. All of these resources are then available to every ""thread"" which the process owns. (By definition, every ""process"" always contains at least one (""main"") ""thread."") -CPUs and cores, then, execute these ""threads,"" in the context of the ""process"" which they belong to. -On a multi-CPU/multi-core system, it is therefore possible that more than one thread belonging to a particular process really is executing in parallel. Although you can never be sure. -Also: in the context of an interpreter-based programming language system like Python, the actual situation is a little bit more complicated ""behind the scenes,"" because the Python interpreter context does exist and will be seen by all of the Python threads.
This does add a slight amount of additional overhead so that it all ""just works.""",0.1352210990936997,False,2,6594 -2020-03-04 15:31:36.817,How are threads different from processes in terms of how they are executed at the hardware level?,"I was wondering how threads are executed at the hardware level, like a process would run on a single processing core and make a context switch on the processor and the MMU in order to switch between processes. How do threads switch? Secondly when we create/spawn a new thread will it be seen as a new process would for the processor and be scheduled as a process would? -Also when should one use threads and when a new process? -I know I probably am sounding dumb right now, that's because I have massive gaps in my knowledge that I would like to fill. Thanks in advance for taking the time and explaining things to me. :)","There are a few different methods for concurrency. The threading module creates threads within the same Python process and switches between them; this means they're not really running at the same time. The same happens with the Asyncio module; however, this has the additional feature of setting when a thread can be switched. -Then there is the multiprocessing module which creates a separate Python process per thread. This means that the threads will not have access to shared memory but can mean that the processes run on different CPU cores and therefore can provide a performance improvement for CPU bound tasks. -Regarding when to use new threads a good rule of thumb would be: - -For I/O bound problems, use threading or async I/O. This is because you're waiting on responses from something external, like a database or browser, and this waiting time can instead be filled by another thread running its task. -For CPU bound problems use multiprocessing. This can run multiple Python processes on separate cores at the same time.
- -Disclaimer: Threading is not always a solution and you should first determine whether it is necessary and then look to implement the solution.",1.2,True,2,6594 -2020-03-05 07:24:31.123,"How can I replace an EXE's icon with the ""default"" icon?","I converted a python script to an exe using pyinstaller. I want to know how I can change the icon it gave me to the default icon. In case you don't know what I mean, look at C:\Windows\System32\alg.exe. There are many more files with that icon, but that is one of them. Sorry if this is the wrong place to ask this, and let me know if you have any questions","I would suggest using the auto-py-to-exe module for converting a python script to an exe. First install it with the command pip install auto-py-to-exe, then run it from the command line just by typing auto-py-to-exe; you'll get a window where you'll find the icon option. -Please vote if you find your solution.",-0.1352210990936997,False,2,6595 -2020-03-05 07:24:31.123,"How can I replace an EXE's icon with the ""default"" icon?","I converted a python script to an exe using pyinstaller. I want to know how I can change the icon it gave me to the default icon. In case you don't know what I mean, look at C:\Windows\System32\alg.exe. There are many more files with that icon, but that is one of them. Sorry if this is the wrong place to ask this, and let me know if you have any questions","You'll need to extract the icon from the exe, and set that as the icon file with pyinstaller -i extracted.ico myscript.py. You can extract the icon with tools available online or you can use pywin32 to extract the icons.",0.0,False,2,6595 -2020-03-05 22:26:04.187,How to update python script/application remotely,"I'm trying to develop a windows gui app with python and I will distribute it later. I don't know how to set the app up for future update releases or bug fixes from a server/remotely. How can I handle this problem? Can I add some auto-update feature to the app?
What should I write for that in my code and what framework or library should I use? -Do pyinstaller / inno setup have some features for this? -Thanks for your help.","How about this approach: - -You can use a version control service like github to version control your code. -Then checkout the repository on your windows machine. -Write a batch/bash script to checkout the latest version of your code and restart the app. -Then use the Windows task scheduler to periodically run this script.",1.2,True,1,6596 -2020-03-06 01:25:48.877,"While using Word2vec, how can I get a result from unseen words corpus?","I am using the Word2vec model to extract similar words, but I want to know if it is possible to get words while using unseen words for input. -For example, I have a model trained with a corpus [melon, vehicle, giraffe, apple, frog, banana]. ""orange"" is an unseen word in this corpus, but when I put it as input, I want [melon, apple, banana] as a result. -Is this a possible situation?","The original word2vec algorithm can offer nothing for words that weren't in its training data. -Facebook's 'FastText' descendant of the word2vec algorithm can offer better-than-random vectors for unseen words – but it builds such vectors from word fragments (character n-gram vectors), so it does best where shared word roots exist, or where the out-of-vocabulary word is just a typo of a trained word. -That is, it won't help in your example, if no other words morphologically similar to 'orange' (like 'orangey', 'orangade', 'orangish', etc) were present. -The only way to learn or guess a vector for 'orange' is to have some training examples with it or related words. (If all else failed, you could scrape some examples from other large corpora or the web to mix with your other training data.)",0.6730655149877884,False,1,6597 -2020-03-06 05:12:48.407,import xmltodict module into visual studio code,"I am having a little tough time importing the xmltodict module into my visual studio code.
-I set up the module on my Windows machine using pip. It should be working in my visual studio as per the guidelines and relevant posts I found here, -but for some reason it isn't working in visual studio. -Please advise on how I can get the xmltodict module installed or imported in visual studio code. -Thanks in Advance","I had the same issue and it turned out that it wasn't installed in that virtual environment even though that was what I had done. Try: -venv/Scripts/python.exe -m pip install xmltodict",0.0,False,1,6598 -2020-03-06 10:30:35.220,How to open a python file in Cmder Terminal Quicker?,"I want to open a python file in cmder terminal quickly. Currently, the fastest way I know of is to navigate to the directory of the python file in cmder terminal and then run it by calling ""python file.py"". This is slow and cumbersome.
Is there a way for me to have a file or exe, that, when i run it (or drag the program onto it), automatically makes the program run in cmder straight away. -Windows 10 -Clarification: I'm using cmder terminal specifically because it supports text coloring. Windows terminal and powershell do not support this.",Answer: The escape codes just weren't properly configured for the windows terminals. You can get around this by using colorama's colorama.init(). It should work after that.,1.2,True,2,6599 -2020-03-06 15:03:36.553,discord py: canceling a loop using a command,"My question is this: If I were to make a command with a loop (for example ""start"") where it would say something like:""It has been 3 hours since..."" and it loops for 10800 seconds (3 hours) and then says:""It has been 6 hours since..."" , so the part where I'm stuck is: If I were to make a command called ""stop"" how would I implement it in the command ""start"" where it would check if the command ""stop"" has been used. If yes the loop is cancelled, if it hasn't been used the loop continues.","but if you run the command several times or on different servers, one stop command stops them all. Is there not a way to stop just one loop with one command",0.0,False,1,6600 -2020-03-07 19:56:55.313,How to design a HTML parser that would follow the Single Responsibility Principle?,"I am writing an application which extracts some data from HTML using BeautifoulSoup4. These are search results of some kind, to be more specific. I thought it would be a good a idea to have a Parser class, storing default values like URL prefixes, request headers etc. After configuring those parameters, the public method would return a list of objects, each of them containing a single result or maybe even an object with a list composed into it alongside with some other parameters. I'm struggling to decouple small pieces of logic that build that parser implementation from the parser class itself. 
I want to write dozens of parser private utility methods like: _is_next_page_available, _are_there_any_results, _is_did_you_mean_available etc. However, these are the perfect candidates for writing unit tests! And since I want to make them private, I have a feeling that I'm missing something... -My other idea was to write that parser as a function, calling bunch of other utility functions, but that would be just equal to making all of those methods public, which doesn't make sense, since they're implementation details. -Could you please advice me how to design this properly?","I think you're interpreting the Single-Responsibility Principle (SRP) a little differently. It's actual meaning is a little off from 'a class should do only one thing'. It actually states that a class should have one and only one reason to change. -To employ the SRP you have to ask yourself to what/who would your parser module methods be responsible, what/who might make them change. If the answer for each method is the same, then your Parser class employs the SRP correctly. If there are methods that are responsible to different things (business-rule givers, groups of users etc.) then those methods should be taken out and be placed elsewhere. -Your overall objective with the SRP is to protect your class from changes coming from different directions.",0.6730655149877884,False,1,6601 -2020-03-08 18:20:41.720,How can I use the Twitter API to look up accounts from email addresses?,"I'm helping out a newly formed startup build a social media following, and I have a csv file of thousands of email addresses of people I need to follow. From looking at the twitter API, I see its possible to follow the accounts if I knew their usernames, but its unclear how to look them up by email. 
Any ideas?","This does not appear to be an option with their API, you can use either user_id or screen name with their GET users/show or GET users/lookup options.",0.2012947653214861,False,2,6602 -2020-03-08 18:20:41.720,How can I use the Twitter API to look up accounts from email addresses?,"I'm helping out a newly formed startup build a social media following, and I have a csv file of thousands of email addresses of people I need to follow. From looking at the twitter API, I see its possible to follow the accounts if I knew their usernames, but its unclear how to look them up by email. Any ideas?",There is no way to do a lookup based on email address in the Twitter API.,0.0,False,2,6602 -2020-03-09 06:36:25.567,Overfitting problem in convolutional neural Network and deciding the parameters of convolution and dence layer,I applied batch normalization technique to increase the accuracy of my cnn model.The accuracy of model without batch Normalization was only 46 % but after applying batch normalization it crossed 83% but a here arisen a bif overfitting problem that the model was giving validation Accuracy only 15%. Also please tell me how to decide no of filters strides in convolution layer and no of units in dence layer,"Batch normalization has been shown to help in many cases but is not always optimal. I found that it depends where it resides in your model architecture and what you are trying to achieve. I have done a lot with different GAN CNNs and found that often BN is not needed and can even degrade performance. It's purpose is to help the model generalize faster but sometimes it increases training times. If I am trying to replicate images, I skip BN entirely. I don't understand what you mean with regards to the accuracy. Do you mean it achieved 83% accuracy with the training data but dropped to 15% accuracy on the validation data? What was the validation accuracy without the BN? In general, the validation accuracy is the more important metric. 
If you have a high training accuracy and a low validation accuracy, you are indeed overfitting. If you have several convolution layers, you may want to apply BN after each. If you still over-fit, try increasing your strides and kernel size. If that doesn't work you might need to look at the data again and make sure you have enough and that it is somewhat diverse. Assuming you are working with image data, are you creating samples where you rotate your images, crop them, etc. Consider synthetic data to augment your real data to help combat overfitting.",0.0,False,1,6603 -2020-03-09 09:01:08.040,How to create Dashboard using Python or R,"In my company, I have got a task to create a dashboard using python whose complete look and feel should be like qlicksense. I am a fresher in the data science field and I don't know how to do this. I did lots of R & D, and plotly and dash seem to be the best option according to that R & D on the internet; dash table is also a good option but I am not able to create the things the way they should look. If anyone knows how to start please help me.","you can use django or another web framework to develop the solution, -keep in mind that you probably will need to handle lots of front end stuff like building the UI of the system, -Flask also is a very lightweight option, but it needs lots of customization. -Django comes with pretty much everything you might need out of the box.",0.0,False,1,6604 -2020-03-09 17:18:29.683,is there any function or module in nlp that would find a specific paragraph headings,"I have a text file. I need to identify specific paragraph headings and if found I need to extract the relevant tables and paragraph wrt that heading using python. Can we do this with nlp or machine learning? If so please help me out in gathering the basics as I am new to this field. I was thinking of using a rule like: -if (capitalized) and heading_length <50: - return heading_text -how do I parse through the entire document and pick only the header names?
this is like automating human intervention of clicking document,scrolling to relevant subject and picking it up. -please help me out in this","You probably don't need NLP or machine learning to detect these headings. Figure out the rule you actually want and if indeed it is such a simple rule as the one you wrote, a regexp will be sufficient. If your text is formatted (e.g. using HTML) it might be even simpler. -If however, you can't find a rule, and your text isn't really formatted consistently, your problem will be hard to solve.",0.2012947653214861,False,2,6605 -2020-03-09 17:18:29.683,is there any function or module in nlp that would find a specific paragraph headings,"I have a text file . I need to identify specific paragraph headings and if true i need to extract relevant tables and paragraph wrt that heading using python. can we do this by nlp or machine learning?. if so please help me out in gathering basics as i am new to this field.I was thinking of using a rule like: -if (capitalized) and heading_length <50: - return heading_text -how do i parse through the entire document and pick only the header names ? this is like automating human intervention of clicking document,scrolling to relevant subject and picking it up. -please help me out in this","I agree with lorg. Although you could use NLP, but that might just complicate the problem. This problem could be an optimization problem if performance is a concern.",0.0,False,2,6605 -2020-03-10 07:31:06.813,How to find time take by whole test suite to complete in Pytest,"I want to know how much time has been taken by the whole test suite to complete the execution. How can I get it in Pytest framework. I can get the each test case execution result using pytest --durations=0 cmd. 
But, how to get the whole suite execution time?","Use pytest-sugar: -pip install pytest-sugar -Run your tests after it, -You should see something like Results (10.00s) after the tests finish",1.2,True,1,6606 -2020-03-10 16:13:15.393,"python + how to remove the message ""cryptography is not installed, use of crypto disabled""","First time programming in python and I guess you will notice it after reading my question: - + How can I remove the message ""cryptography is not installed, use of crypto disabled"" when running the application? -I have created a basic console application using the pyinstaller tool and the code is written in python. -When I run the executable, I am getting the message ""cryptography is not installed, use of crypto disabled"". The program still runs, but I would prefer to get rid of the message. -Can someone help me? -Thanks in advance.","cryptography and crypto are 2 different modules. -try: -pip install cryptography -pip install crypto",1.2,True,1,6607 -2020-03-11 11:00:09.743,Maya python (or MEL) select objects,"I need to select all objects in Maya with name ""shd"" and after that I need to assign a specific material to them. -I don't know how to do that because when I wrote: select -r ""shd""; it sends me the message: More than one object matches name: shd // -So maybe I should select them one by one in some for loop or something. I am 3D artist so sorry for the lame question.","You can use select -r ""shd*"" to select all objects with a name starting with ""shd"".",0.0,False,1,6608 -2020-03-11 14:39:29.703,How to redirect to a different page in Django when I receive an input from a barcode scanner?,"The whole project is as follows: -I'm trying to build a Django based web-app for my college library. This app when idle will be showing a slideshow of pictures on the screen. However when an input is received from the barcode scanner, it is supposed to redirect to a different page containing information related to that barcode. 
I'm not able to figure out how to get an input from the scanner and only then redirect to the page containing the relevant information for 3 seconds; after the interval, it should redirect back to the page containing the slideshow.","You should communicate with the bar-code scanner to receive a scanning-done event, which has nothing to do with Django but only JavaScript, or even an interface software which the user must install, like a driver, so you can detect the bar-code scanner from JavaScript (web browser). Then you can get your event in JavaScript and redirect the page on the event, or do whatever you want.",0.0,False,1,6609 -2020-03-11 23:42:50.313,Airflow Operator to pull data from external Rest API,"I am trying to pull data from an external API and dump it on S3 . I was thinking of writing an Airflow Operator rest-to-s3.py which would pull in data from the external Rest API . -My concerns are : - -This would be a long running task , how do i keep track of failures ? -Is there a better alternative than writing an operator ? -Is it advisable to do a task that would probably run for a couple of hours and wait on it ? - -I am fairly new to Airflow so it would be helpful.","Errors - one of the benefits of using a tool like Airflow is error tracking. Any failed task is subject to rerun (based on configuration) and will persist its state in task history, etc. -Also, you can branch based on the task status to decide if you want to report an error, e.g. by email. -An operator sounds like a valid option; another option is the built-in PythonOperator and writing a python function. -Long-running tasks are problematic with any design and tool. You'd better break it down into small tasks (and maybe parallelize their execution to reduce the run time?) Does the API take a long time to respond? Or do you send many calls? Maybe split based on the resulting s3 files? i.e.
each file is a different DAG/branch?",0.9999092042625952,False,1,6610 -2020-03-13 05:38:38.123,How do I select a sub-folder as a directory containing tests in Python extension for Visual studio code,"I am using VScode with python code and I have a folder with sub-directories (2-levels deep) containing python tests. -When I try ""Python: Discover Tests"" it asks for a test framework (selected pytest) and the directory in which tests exist. At this option, it shows only the top-level directories and does not allow me to select a sub-directory. -I tried to type the directory path but it does not accept it. -Can someone please help on how to achieve this?","There are two options. One is to leave the selection as-is and make sure your directories are packages by adding __init__.py files as appropriate. The other is to go into your workspace settings and adjust the ""python.testing.pytestArgs"" setting as appropriate to point to your tests.",0.0,False,2,6611 -2020-03-13 05:38:38.123,How do I select a sub-folder as a directory containing tests in Python extension for Visual studio code,"I am using VScode with python code and I have a folder with sub-directories (2-levels deep) containing python tests. -When I try ""Python: Discover Tests"" it asks for a test framework (selected pytest) and the directory in which tests exist. At this option, it shows only the top-level directories and does not allow me to select a sub-directory. -I tried to type the directory path but it does not accept it. -Can someone please help on how to achieve this?","Try opening the ""Output"" log (Ctrl+Shift+U) and run ""Python: Discover Tests"". Alternatively, you may type pytest --collect-only into the console. Maybe you are experiencing some errors with the tests themselves (such as import errors). -Also, make sure to keep an __init__.py file in your ""tests"" folder. 
-I am keeping the pytest ""tests"" folder within a subdirectory, and there are no issues with VS Code discovering the tests.",0.0,False,2,6611 -2020-03-13 10:02:01.843,how to fix CVE-2019-19646 Sqlite Vulnerability in python3,"I am facing an issue with a SQLite vulnerability which is fixed in SQLite version 3.31.1. -I am using the python3.7.4-alpine3.10 image, but this image uses a previous version of SQLite that isn't patched. -The patch is available in python3.8.2-r1 with the alpine edge branch but this image is not available in docker hub. -Please help, how can I fix this issue?","Your choices are limited to two options: - -Wait for the official patched release -Patch it yourself - -Option 1 is easy, just wait and the patch will eventually propagate through to docker hub. Option 2 is also easy, just get the code for the image from github, update the versions, and run the build yourself to produce the image.",0.0,False,1,6612 -2020-03-13 18:51:28.017,How to see current cache size when using functools.lru_cache?,"I am doing performance/memory analysis on a certain method that is wrapped with the functools.lru_cache decorator. I want to see how to inspect the current size of my cache without doing some crazy inspect magic to get to the underlying cache. -Does anyone know how to see the current cache size of a method decorated with functools.lru_cache?","Digging around in the docs showed that the answer is to call .cache_info() on the decorated method. - -To help measure the effectiveness of the cache and tune the maxsize parameter, the wrapped function is instrumented with a cache_info() function that returns a named tuple showing hits, misses, maxsize and currsize. In a multi-threaded environment, the hits and misses are approximate.",1.2,True,1,6613 -2020-03-14 04:25:07.913,Why use signals in Django?,"I have read lots of documentation and articles about using signals in Django, but I cannot understand the concept. - -What is the purpose of using signals in Django? -How does it work? 
- -Please explain the concept of signals and how to use it in Django code.","Django signals are a strategy to allow decoupled applications to get notified when certain events occur. Let’s say you want to invalidate a cached page every time a given model instance is updated, but there are several places in your code base where this model can be updated. You can do that using signals, hooking some pieces of code to be executed every time this specific model’s save method is triggered. -Another common use case is when you have extended the custom Django User by using the Profile strategy through a one-to-one relationship. What we usually do is use a “signal dispatcher” to listen for the User’s post_save event to update the Profile instance as well.",1.2,True,1,6614 -2020-03-14 15:27:13.413,Run generated .py files without python installation,"I am coding a PyQt5 based GUI application that needs to be able to create and run arbitrary Python scripts at runtime. If I convert this application to a .exe, the main GUI Window will run properly. However, I do not know how I can run the short .py scripts that my application creates. Is it possible to run these without a system wide Python installation? - -I don't want ways to compile my python application to exe. This problem relates to generated .py scripts","No, to run a Python file you need an interpreter. -It is possible that your main application can contain a Python interpreter so that you don't need to depend on a system-wide Python installation.",0.3869120172231254,False,1,6615 -2020-03-15 10:28:17.703,Can manim be used in pycharm?,"I have been programming with python for about half a year, and I would like to try manim ( the animation programme of 3blue1brown from youtube), but I am not sure where to start. I have not installed it, but I have tried to read up on it. And to be honest I do not understand much of the requirements of the program, and how to run it. 
-Google has left me without much help, so I decided to check here to see if anyone here is able to help. -From what I understand, you run manim directly in python and the animations are based on a textfile with code I assume is LaTeX. I have almost no experience with python itself, but I have learned to use it through Thonny, and later Pycharm. -My main questions are: (Good sources on how to do this without being a wizard would be really helpful if they exist☺️) - -Is it possible to install manim in pycharm, and how? Do i need some extra stuff installed to pycharm in order to run it? (I run a windows 64-bit computer) -If i manage to do this in pycharm, will I then be able to code the animations directly in pycharm (in .py or .txt files), or is it harder to use in pycharm? - -All help or insights are very appreciated. As I said I am not extremely knowledgeable in computers, but I am enjoying learning how to code and applications of coding","Yes, you can: -1. Write your code in PyCharm -2. Save it -3. Copy that .py file to where you installed manim. In my case, it is - -This PC >> C drive >> manim-master >> manim-master - -4. Click on the path bar and type ""cmd"" to open a terminal from there - -Type this in the terminal - -python -m manim -pql projectname.py -This will do. -To play back the animation or image, open the media folder.",0.0,False,1,6616 -2020-03-15 13:08:38.213,"FFmpeg is in Path, but running in the CMD results in ""FFmpeg not recognized as internal or external command""","FFmpeg is installed in C:\FFmpeg, and I put C:\FFmpeg\bin in the path. Does anyone know how to fix it? -Thanks!","You added C:\FFmpeg\bin\ffmpeg.exe to your path; instead, you need to add only the directory: -C:\FFmpeg\bin\",-0.3869120172231254,False,1,6617 -2020-03-15 16:27:58.057,How to put an icon for my android app using kivy-buildozer?,"I made an android app using python-kivy (Buildozer makes it into an apk file) -Now I want to put an image for the icon of the application. 
I mean the picture for the app-icon on your phone. -how can I do this? I cannot find any code in kv",Just uncomment icon.filename: in the buildozer spec file and write a path to your icon image.,0.3869120172231254,False,1,6618 -2020-03-15 19:50:38.793,How to activate google colab gpu using just plain python,"I'm new to google colab. -I'm trying to do deep learning there. -I have written a class to create and train a LSTM net using just python - not any specific deep learning library as tensorflow, pytorch, etc. -I thought I was using a gpu because I had chosen the runtime type properly in colab. -During the code execution, however, I was sometimes getting the message to quit gpu mode because I was not making use of it. -So, my question: how can one use google colab gpu, using just plain python, without special ai libraries? Is there something like ""decorator code"" to put in my original code so that the gpu get activated?","It's just easier to use frameworks like PyTorch or Tensorflow. -If not, you can try pycuda or numba, which are closer to ""pure"" GPU programming. That's even harder than just using PyTorch.",0.2012947653214861,False,1,6619 -2020-03-16 16:42:55.520,Overriding button functionality in kivy using an another button,Currently I am making a very simple interface which asks user to input parameters for a test and then run the test. The test is running brushless dc motor for several minutes. So when the run button is pressed the button is engaged for the time period till the function is finished executing. I have another stop button which should kill the test but currently cant use it since the run button is kept pressed till the function is finished executing and stop button cant be used during the test. I want to stop the test with pressing the stop button even if the run button function is currently being executed. The run button should release and the function should continuously check the stop function for stopping the test. 
Let me know how this can be executed.,"Your problem is that all your code is taking place sequentially in a single thread. Once your first button is pressed, all of the results of that pressing are followed through before anything else can happen. -You can avoid this by running the motor stuff in a separate thread. Your stop button will then need to interrupt that thread.",1.2,True,1,6620 -2020-03-17 13:45:02.310,how to compute the sum for each pair of rows x in matrice X and y in matrice Y?,"I am trying to write a function in python that takes as input two matrices X and Y and computes for every pair of rows x in X and y in Y, the norm ||x - y|| . I would like to do it without using for loops. -Do you have an idea about how to do it ?","I just solved it :D -instead of len(np.transpose(y)) I had to do len(y) and it worked perfectly with a for loop.",0.0,False,1,6621 -2020-03-18 08:43:05.267,How to set text color to gradient texture in kivy?,"I have managed to create a Texture with gradient color and set it as the background of a Label, Button, etc. But I am wondering how to set this as the color of a Label?","You can't set the color property to a gradient, that just isn't what it does. 
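If you do want a gradient, one common workaround (a hedged sketch of my own, not part of the answer above: the helper below only builds the raw RGBA bytes you would blit into a 1-pixel-tall texture) is to generate the pixel buffer yourself:

```python
def gradient_rgba(width, start=(255, 0, 0, 255), end=(0, 0, 255, 255)):
    # Linearly interpolate each RGBA channel across `width` pixels and
    # return the flat byte string a texture blit call expects.
    pixels = bytearray()
    for x in range(width):
        t = x / (width - 1) if width > 1 else 0.0
        for s, e in zip(start, end):
            pixels.append(int(round(s + (e - s) * t)))
    return bytes(pixels)

buf = gradient_rgba(64)  # 64 RGBA pixels fading red -> blue
```

With Kivy you could then create a texture via Texture.create(size=(64, 1)) and feed it blit_buffer(buf, colorfmt='rgba', bufferfmt='ubyte'); the function name and gradient colors here are assumptions for illustration.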
Gradients should be achieved using images or textures directly applied to canvas vertex instructions.",0.0,False,1,6622 -2020-03-18 13:18:18.393,'odict_items' object is not subscriptable how to deal with this?,"I've tried to run this code on Jupyter notebook python 3: -class CSRNet(nn.Module): - def __init__(self, load_weights=False): - super(CSRNet, self).__init__() - self.frontend_feat = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512] - self.backend_feat = [512, 512, 512,256,128,64] - self.frontend = make_layers(self.frontend_feat) - self.backend = make_layers(self.backend_feat,in_channels = 512,dilation = True) - self.output_layer = nn.Conv2d(64, 1, kernel_size=1) - if not load_weights: - mod = models.vgg16(pretrained = True) - self._initialize_weights() - for i in range(len(self.frontend.state_dict().items())): - self.frontend.state_dict().items()[i][1].data[:] = mod.state_dict().items()[i][1].data[:] -it displays 'odict_items' object is not subscriptable as an error in the last line of code!! how to deal with this?","In Python 3, items() returns an odict_items view object, which is not subscriptable; convert it to a list first: - -list(self.frontend.state_dict().items())[i][1].data[:] = -list(mod.state_dict().items())[i][1].data[:]",0.3869120172231254,False,1,6623 -2020-03-18 14:26:59.470,Is there any way that I can insert a python file into a html page?,"I am currently trying to create a website that displays a python file (that is in the same folder as the html file) on the website, but I'm not sure how to do so. -So I just wanted to ask if anyone could describe the process of doing so (or if its even possible at all).","Displaying ""a python file"" and displaying ""the output"" (implied ""of a python script's execution"") are totally different things. For the second one, you need to configure your server to run Python code. 
There are many ways to do so, but the two main options are -1/ plain old cgi (slow, outdated and ugly as f..k but rather easy to set up - if your hosting provides support for it at least - and possibly ok for one single script in an otherwise static site) -and -2/ a modern web framework (flask comes to mind) - much cleaner, but possibly a bit overkill for one simple script. -In both cases you'll have to learn about the HTTP protocol.",0.0,False,1,6624 -2020-03-19 00:25:35.587,How to append data to specific column Pandas?,"I have 2 dataframes: -FinalFrame: -Time | Monday | Tuesday | Wednesday | ... -and df (Where weekday is the current day, whether it be monday tuesday etc): -WEEKDAY -I want to append the weekday's data to the correct column. I will need to constantly keep appending weekdays data as weeks go by. Any ideas on how to tackle this?","You can use the index of the weekdays instead of their names. For example, -weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'] -Time | 0 | 1 | 2 ....",0.0,False,2,6625 -2020-03-19 00:25:35.587,How to append data to specific column Pandas?,"I have 2 dataframes: -FinalFrame: -Time | Monday | Tuesday | Wednesday | ... -and df (Where weekday is the current day, whether it be monday tuesday etc): -WEEKDAY -I want to append the weekday's data to the correct column. I will need to constantly keep appending weekdays data as weeks go by. Any ideas on how to tackle this?","So the way you could do it is to isolate the series for whatever day you are looking at, i.e. FinalFrame[weekday], and .append to it.",0.0,False,2,6625 -2020-03-20 17:35:30.097,Displaying data from my python script on a webpage,"My case: -I want to display the meal plan from my University on my own online ""Dashboard"". I've written my python script to scrape that data and I get the data I need (plain Text). Now I need to put it on my website but I don't know how to start. 
On my first searching sessions, I have found something with CGI but I have no clue how to use it:( Is there maybe an even easier way to solve my problem? -Thanks","I suggest you use Django; if you don't want to use Django, you can format your output as HTML and publish the HTML page directly.",0.0,False,1,6626 -2020-03-21 17:11:32.527,How to read the documentation of a certain module?,"I've just finished my course of Python, so now I can write my own scripts. To do that I started to write a script with the module Scapy, but the problem is, the documentation of Scapy is written for the Scapy interpreter, so I don't know how to use it, find the functions, etc. -I've found a few tutorials on the Internet with a few examples but it's pretty hard. For example, I've found in a script the function ""set_payload"" to inject some code in the layer but I really don't know where he found this function. -What's your suggestion for finding out how a module works and how to write correctly with it? Because I don't really like to check and pick through other scripts on the Internet.","If I have understood the question correctly, roughly what you are asking is how to find the best source to understand a module. -If you are using an inbuilt python module, the best source is the python documentation. -Scapy is not a built-in python module. So you may have some issues with some of the external modules (by external I mean the ones you need to explicitly install). -For those, if the docs aren't enough, I prefer to look at some of the github projects that may use that module one way or another, and most of the time it works out. If it doesn't, then I go to some blogs or some third party tutorials. There is no one right way to do it; you will have to put in the effort where it's needed.",1.2,True,1,6627 -2020-03-21 17:12:42.693,Efficient Way to Run Multiple Instances of the Same Discord Bot (Discord),"I have a Discord bot I use on a server with friends. 
-The problem is some commands use web scraping to retrieve the bot response, so until the bot is finished retrieving the answer, the bot is out of commission/can't handle new commands. -I want to run multiple instances of the same bot on my host server to handle this, but don't know how to tell my code ""if bot 1 is busy with a command, use bot 2 to answer the command"" -Any help would be appreciated!","async function myFunction () {} -This should fix your problem. -Having multiple instances would be possible with threads, -but this is just a much easier way.",-0.2012947653214861,False,1,6628 -2020-03-21 18:13:00.810,Efficient unions and intersections in 2D plane when y_min = 0,"I got the following problem: -Given a series of rectangles defined by {x_min, height and x_max}, I want to efficiently compute their intersection and union, creating a new series. -For instance, if I got S1 = [{1,3,3}] and S2 = [{2,3,5}], the union would result in S3 = [{1,3,5}] and the intersection in S3 = [{2,3,3}]. This would be a fairly simple case, but when S1 and S2 are lists of rectangles (unordered) it gets a little bit tricky. -My idea is trying some divide and conquer strategy, like using a modified mergesort, and in the merge phase trying to also merge those buildings. But I'm a little bit unsure about how to express this. -Basically I can't write down how to compare two rectangles with those coordinates and decide if they have to be in S3, or if I have to create a new one (for the intersection). -For the union I think the idea has to be fairly similar, but the negation (i.e. if they don't intersect). -This has to be O(nlogn) for sure; given this is in a 2D plane I surely have to sort it. Currently my first approach is O(n^2). -Any help on how to reduce the complexity? -PD: The implementation I'm doing is in Python","I tried to write the whole thing out in pseudo-code, and found that it was too pseudo-y to be useful and too code-y to be readable. 
Here's the basic idea: -You can sort each of your input lists in O(n*log(n)). -Because we assume there's no overlap within each series, we can now replace each of those lists with lists of the form {start, height}. We can drop the ""end"" attribute by having a height-0 element start where the last element should have ended. (or not, if two elements were already abutting.) -Now you can walk/recurse/pop your way through both lists in a single pass, building a new list of {start, height} outputs as you go. I see no reason you couldn't be building both your union and intersection lists at the same time. -Cleanup (conversion to a minimal representation in the original format) will be another pass, but still O(n). -So the problem is O(n*log(n)), and could be O(n) if you could finagle a way to get your input pre-sorted.",0.0,False,1,6629 -2020-03-22 20:43:39.077,Python Methods (where to find additional resources),"I am learning Python by reading books, and I have a question about methods. Basically, all of the books that I am reading touch on methods and act like they just come out of thin air. For example, where can I find a list of all methods that can be applied? I can't find any documentation that lists all methods. -The books are using things like .uppercase and .lowercase but not saying where to find other methods to use, or how to see which ones are available and where. I would just like to know what I am missing. Thanks. Do I need to dig into the Python documentation to find all of the methods?","There are a lot of functions in Python's modules. If you want to learn where to find them, you should ask for what you want. For example, there is a random module where you can find functions like random.randint.",0.0,False,1,6630 -2020-03-23 18:58:48.680,How to reach and add new row to web server database which made in Django framework?,"I am trying to create a web server which has Django framework and I am struggling with outer world access to server. 
While saying outer world I am trying to say a python program that is created outside of the Django framework and only connects to it from a local PC which has only an Internet connection. I can't figure out how I can do this. -I am building this project on my local host, so I create the ""outer world python program"" outside of the project file. I think this simulation is proper. -I am so new in this web server/Django field. So maybe I am missing an essential part. If this happened here I'm sorry but I need an answer and I think it is possible to do. -Thanks in advance...","Django-generated fields in the database are just standard fields. The tables are named like 'applicationname'_'modelname'; you are free to make requests to the database directly, without Django. -If you want to do it through Django, your outer program can request a web page from your web server, and deal with it. (You may want to take a look at REST frameworks)",1.2,True,1,6631 -2020-03-23 22:03:37.527,Failed to load dynlib/dll (PyInstaller),"After using pyinstaller to turn the py file into an exe file, the exe file throws the error: ""Failed to load dynlib/dll"". Here is the error line: - -main.PyInstallerImportError: Failed to load dynlib/dll 'C:\Users\YANGYI~1\AppData\Local\Temp\_MEI215362\sklearn\.libs\vcomp140.dll'. - Most probably this dynlib/dll was not found when the application was - frozen. [1772] Failed to execute script 2 - -after getting this, I did check the path and I did not find a folder called ""_MEI215362"" in my Temp folder, I have already made all files visible. Also, I have re-downloaded the VC runtime and retransferred the file to exe, but it didn't work. Any ideas how to fix the issue? Thank you in advance!","I also encountered a similar problem like Martin. -In my case, however, it was the ANSI64.dll missing... -So, I simply put the particular dll file into the dist directory. -Lastly, I keep the exe and related raw data files (e.g.
xlsx, csv) inside the ""dist"" folder and run the compiled program. It works well for me.",0.0,False,1,6632 -2020-03-23 23:50:42.940,How to delete drawn objects with OpenCV in Python?,"How to delete drawn objects with OpenCV in Python ? - -I draw objects on click (cv2.rectangle, cv2.circle) ... -Then I would like to delete only the drawn objects. -I know that i need to make a layer behind the real image and to draw on another one. -But I do not know how to implement this in code.","Have a method that, when executed, replaces the image with the drawings on it with the original, unaltered image. It's best to create a clone of your original image to draw on.",0.0,False,1,6633 -2020-03-24 23:04:42.153,"Explain the necessity of database drivers, libraries, dlls in a python application that interacts with a remote database?","I have written a python script that connects to a remote Oracle database and inserts some data into its tables. -In the process I had to first import the cx_Oracle package and install Oracle InstantClient on my local computer for the script to execute properly. -What I don't understand is why did I have to install InstantClient? -I tried to read through the docs but I believe I am missing some fundamental understanding of how databases work and communicate. -Why do I need all the external drivers, dlls, libraries for a python script to be able to communicate with a remote db? I believe this makes packaging and distribution of a python executable much harder. -Also what is InstantClient anyway? -Is it a driver? What is a driver? Is it simply a collection of ""programs"" that know how to communicate with Oracle databases? If so, why couldn't that be accomplished with a simple import of a python package? 
-This may sound like I did not do my own research beforehand, but I'm sorry, I tried, and like I said, I believe I am missing some underlying fundamental knowledge.","We have a collection of drivers that allow you to communicate with an Oracle Database. Most of these are 'wrappers' of a sort that piggyback on the Oracle Client - compiled C binaries that use something we call 'Oracle Net' (not to be confused with .NET) to work with Oracle. -So our python, php, perl, odbc, etc drivers are small programs written such that they can be used to take advantage of the Oracle Client on your system. -The Oracle Client is much more than a driver. It can include user interfaces such as SQL*Plus, SQL*Loader, etc. Or it can be JUST a set of drivers - it depends on which exact package you choose to download and install. And speaking of 'install' - if you grab the Instant Client, there's nothing to install. You just unzip it and update your environment path bits appropriately so the drivers can be loaded.",1.2,True,1,6634 -2020-03-25 10:26:21.650,Module not appearing in jupyter,"I'm having issues with importing my modules into jupyter. I did the following: - -Create virtual env -Activate it (everything below is in the context of my venv) -install yahoo finance module: pip install yfinance -open python console and import it to test if working > OK! -open jupyter notebook -import yfinance throws ModuleNotFoundError: No module named 'yfinance' - -Any suggestions on how to fix this?","Try this in your jupyter notebook and then run it: -!pip install yfinance",0.0,False,1,6635 -2020-03-26 04:29:58.543,How do I create a template to store my HTML file when creating a web app with Python's Flask Framework in the PyCharm IDE?,I am trying to do a tutorial through FreeCodeCamp using Python's Flask Framework to create a web app in PyCharm and I am stuck on a section where it says 'Flask looks for HTML files in a folder called template. 
You need to create a template folder and put all your HTML files in there.' I am confused on how to make this template folder; is it just a regular folder or are there steps to create it and drag/drop the HTML files to it? Any tips or info would be of great help!!!,"As the tutorial asks, you have to create a folder called ""templates"" (not ""template""). In PyCharm you can do this by right-clicking on the left panel and selecting New > Directory. In this folder you can then create your template files (right-click on the newly created folder and select New > File, then enter the name of your file with the .html extension). -By default, flask looks in the ""templates"" folder to find your template when you call render_template(""index.html""). Notice that you don’t put the full path of your file as the first parameter but just the relative path to the ""templates"" folder.",1.2,True,1,6636 -2020-03-26 08:51:13.303,How to implement dct when the input image size is not a scale of 8?,"I learned that if one needs to implement dct on an image of size (H, W), one needs a matrix A that is of size (8, 8), and one needs to use this A to compute with an (8, 8) region F on the image. That means if the image array is m, one needs to compute m[:8, :8] first, and then m[8:16, 8:16], and so on. -How could I implement this dct when the input image size is not a multiple of 8? For example, when the image size is (12, 12), which cannot hold two (8, 8) windows, how could I implement dct ? I tried opencv and found that opencv can cope with this scenario, but I do not know how it implemented it.","The 8x8 is called a ""Minimum Coded Unit"" (MCU) in the specification, though video enthusiasts call them ""macroblocks"". -Poorer implementations will pad to fill with zeroes - which can cause nasty effects. -Better implementations pad to fill by repeating the previous pixel from the left if padding to the right, or from above if padding downwards. 
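As a sketch of that edge-replication approach (my own illustration, not from the spec; NumPy's np.pad with mode='edge' repeats the last row/column outward):

```python
import numpy as np

def pad_to_mcu(img, mcu=8):
    # Pad the right and bottom edges by repeating the last pixel so
    # both dimensions become multiples of the MCU size.
    h, w = img.shape
    return np.pad(img, ((0, (-h) % mcu), (0, (-w) % mcu)), mode='edge')

m = np.arange(144).reshape(12, 12)
padded = pad_to_mcu(m)  # (12, 12) -> (16, 16)
```

You can then run the usual 8x8 block DCT over the padded array; the helper name and MCU default are just illustrative choices.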
-Note that only the right side and bottom of an image can be padded.",1.2,True,1,6637 -2020-03-26 10:29:19.013,Re-assign backslash to three dots in Python,"Is it possible in Python to re-assign the backslash character to something else, like to the three dots? -I hate the backslash character. It looks ugly. -There’s a long line in my code where I really need to use the \ character. But I’d rather use the ... character. -I just need a simple yes/no answer. Is it possible? And in the case of yes, tell me how to re-assign that ugly thing.","Python syntactically uses the backslash to represent the escape character, as do other languages such as Java and C. As far as I am aware this cannot be overridden unless you want to change the language itself.",0.0,False,1,6638 -2020-03-26 16:45:21.790,How do I receive a variable from python flask to JavaScript?,"I've seen how to make a post request from JavaScript to get data from the server, but how would I do this flipped? I want to trigger a function in the flask server that will then dynamically update the variable on the JavaScript side to display. Is there a way of doing this in an efficient manner that does not involve periodic iteration? I'm using an api and I only want the api to be called once to update.","There are three basic options for you: - -Polling - With this method, you would periodically send a request to the server (maybe every 5 seconds for example) and ask for an update. The upside is that it is easy to implement. The downside is that many requests will be unnecessary. It sounds like this isn't a great option for you. -Long Polling - This method means you would open a request up with the server and leave the request open for a long period of time. When the server gets new information it will send a response and close the request - after which the client will immediately open up a new ""long poll"" request. 
This eliminates some of the unnecessary requests of regular polling, but it is a bit of a hack, as HTTP was meant for a reasonably short request-response cycle. Some PaaS providers only allow a 30-second window for this to occur, for example. -Web Sockets - This is somewhat harder to set up, but ultimately it is the best solution for real-time server-to-client (and vice versa) communication. A socket connection is opened between the server and client and data is passed back and forth whenever either party would like to do so. JavaScript has full web socket support now and Flask has some extensions that can help you get this working. There are even great third-party managed solutions like Pusher.com that can give you a working concept very quickly.",1.2,True,1,6639 -2020-03-26 20:40:16.497,Display result (image) of computation in website,"I have a python script that generates a heightmap depending on parameters, that will be given in HTML forms. How do I display the resulting image on a website? I suppose that the form submit button will hit an endpoint with the given parameters and the script that computes the heightmap runs then, but how do I get the resulting image and display it in the website? Also, the computation takes a few seconds, so I suppose I need some type of task queue to not make the server hang in the meanwhile. Tell me if I'm wrong. -It's a bit of a general question because I myself don't know the specifics of what I need to use to accomplish this. I'm using Flask in the backend but it's a framework-agnostic question.","Save the image to a file. Return a webpage that contains an IMG element whose SRC is a URL pointing at the file. -For example, suppose you save the image to a file called ""temp2.png"" in a subdirectory called ""scratch"" under your document root. Then the IMG element would be <img src=""/scratch/temp2.png"">.
-If you create and save the image in the same program that generates the webpage that refers to it, your server won't return the page until the image has been saved. If that only takes a few seconds, the server is unlikely to hang. Many applications would take that long to calculate a result, so the people who coded the server would make sure it can handle such delays. I've done this under Apache, Tomcat, and GoServe (an OS/2 server), and never had a problem. -This method does have the disadvantage that you'll need to arrange for each temporary file to be deleted after an expiry period such as 12 hours or whenever you think the user won't need it any more. On the webpage you return, if the image is something serious that the user might want to keep, you could warn them that this will happen. They can always download it. -To delete the old files, write a script that checks when they were last updated, compares that with the current date and time, and deletes those files that are older than your expiry period. -You'll need a way to automatically run it repeatedly. On Unix systems, if you have shell access, the ""cron"" command is one way to do this. Googling ""cron job to delete files older than 1 hour on web server"" finds a lot of discussion of methods. -Be very careful when coding any automatic-deletion script, and test it thoroughly to make sure it deletes the right files! If you make your expiry period a variable, you can set it to e.g. 1 minute or 5 minutes when testing, so that you don't need to wait for ages. -There are ways to stream your image back without saving it to a file, but what I'm recommending is (apart possibly from the file deleter) easy to code and debug. I've used it in many different projects.",1.2,True,1,6640 -2020-03-26 22:11:19.490,How to create a dynamic website using python connected to a database,"I would like to create a website where I show some text but mainly dynamic data in tables and plots. 
Let us assume that the user can choose whether he wants to see the DAX or the DOW JONES prices for a specific timeframe. I guess these data I have to store in a database. As I am not experienced with creating websites, I have no idea what the most reasonable setup for this website would be. - -Would it be reasonable for this example to choose a database where every row corresponds of 9 fields, where the first column is the timestamp (lets say data for every minute), the next four columns correspond to the high, low, open, close price of DAX for this timestamp and columns 5 to 9 correspond to high, low, open, close price for DOW JONES? -Could this be scaled to hundreds of columns with a reasonable speed -of the database? -Is this an efficient implementation? -When this website is online, you can choose whether you want to see DAX or DOW JONES prices for a specific timeframe. The corresponding data would be chosen via python from the database and plotted in the graph. Is this the general idea how this will be implemented? -To get the data, I can run another python script on the webserver to dynamically collect the desired data and write them in the database? - -As a total beginner with webhosting (is this even the right term?) it is very hard for me to ask precise questions. I would be happy if I could find out whats the general structure I need to create the website, the database and the connection between both. I was thinking about amazon web services.","You could use a database, but that doesn't seem necessary for what you described. -It would be reasonable to build the database as you described. Look into SQL for doing so. You can download a package XAMPP that will give you pretty much everything you need for that. This is easily scalable to hundreds of thousands of entries - that's what databases are for. 
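The 9-field row layout discussed above can be sketched with Python's built-in sqlite3; the table and column names here are illustrative, not from any existing schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE prices (
        ts TEXT PRIMARY KEY,          -- one row per minute
        dax_open REAL, dax_high REAL, dax_low REAL, dax_close REAL,
        dow_open REAL, dow_high REAL, dow_low REAL, dow_close REAL
    )
""")
conn.execute(
    "INSERT INTO prices VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)",
    ("2020-03-26 16:00", 9500.0, 9520.0, 9480.0, 9510.0,
     21000.0, 21100.0, 20900.0, 21050.0),
)
# Select only the DAX columns for a given timeframe, as the website would.
rows = conn.execute(
    "SELECT ts, dax_open, dax_close FROM prices WHERE ts >= ?",
    ("2020-03-26 00:00",),
).fetchall()
```

Because the timestamp is the primary key, range queries over a timeframe stay fast even with many rows.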
-If your example of stock prices is actually what you are trying to show, however, this is completely unnecessary as there are already plenty of databases that have this data and will allow you to query them. What you would really want in this scenario is an API. Alpha Vantage is a free service that will serve you data on stock prices, and has plenty of documentation to help you get it set up with python. -I would structure the project like this: -Use the python library Flask to set up the back end. -In addition to instantiating the Flask app, instantiate the Alpha Vantage class as well (you will need to pip install both of these). -In one of the routes you declare under Flask, use the Alpha Vantage api to get the data you need and simply display it to the screen. -If I am assuming you are a complete beginner, one or more of those steps may not make sense to you, in which case take them one at a time. Start by learning how to build a basic Flask app, then look at the API. -YouTube is your friend for both of these things.",0.0,False,1,6641 -2020-03-27 01:17:46.150,"Python3: Does the built-in function ""map"" have a bug?","The following I had with Python 3.8.1 (on macOS Mojave, 10.14.6, as -well as Python 3.7 (or some older) on some other platforms). I'm new -to computing and don't know how to request an improvement of a -language, but I think I've found a strange behaviour of the built-in -function map. -As the code next(iter(())) raises StopIteration, I expected to -get StopIteration from the following code: -tuple(map(next, [iter(())])) -To my surprise, this silently returned the tuple ()! -So it appears the unpacking of the map object stopped when -StopIteration came from next hitting the ""empty"" iterator -returned by iter(()). However, I don't think the exception was -handled right, as StopIteration was not raised before the ""empty"" -iterator was picked from the list (to be hit by next). - -Did I understand the behaviour correctly? 
-Is this behaviour somehow intended? -Will this be changed in a near future? Or how can I get it? - -Edit: The behaviour is similar if I unpack the map object in different ways, such as by list, for for-loop, unpacking within a list, unpacking for function arguments, by set, dict. So I believe it's not tuple but map that's wrong. -Edit: Actually, in Python 2 (2.7.10), the ""same"" code raises -StopIteration. I think this is the desirable result (except that map in this case does not return an iterator).","Did I understand the behavior correctly? - - -Not quite. map takes its first argument, a function, and applies it to every item in some iterable, its second argument, until it catches the StopIteration exception. This is an internal exception raised to tell the function that it has reached the end of the object. If you're manually raising StopIteration, it sees that and stops before it has the chance to process any of the (nonexistent) objects inside the list.",0.1352210990936997,False,1,6642 -2020-03-27 03:41:18.327,SocketIO + Flask Detect Disconnect,"I had a different question here, but realized it simplifies to this: -How do you detect when a client disconnects (closes their page or clicks a link) from a page (in other words, the socket connection closes)? I want to make a chat app with an updating user list, and I’m using Flask on Python. When the user connects, the browser sends a socket.emit() with an event and username passed in order to tell the server a new user exists, after which the server will message all clients with socket.emit(), so that all clients will append this new user to their user list. However, I want the clients to also send a message containing their username to the server on Disconnect. I couldn’t figure out how to get the triggers right. 
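The surprising map behaviour analysed in the question above is easy to reproduce: the StopIteration raised by next escapes map's __next__, and tuple() treats it as normal exhaustion of the map object rather than as an error:

```python
# next(iter(())) raises StopIteration, because the iterator is empty...
caught = False
empty = iter(())
try:
    next(empty)
except StopIteration:
    caught = True

# ...but inside map, that same StopIteration leaks out of map.__next__,
# and tuple() interprets it as "the map object is exhausted", so no error
# surfaces and an empty tuple is returned silently.
result = tuple(map(next, [iter(())]))
```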
Note: I’m just using a simple html file with script tags for the page, I’m not sure how to add a JS file to go along with the page, though I can figure it out if it’s necessary for this.","Figured it out. socket.on('disconnect') did turn out to be right, however by default it pings each user only once a minute or so, meaning it took a long time to see the event.",1.2,True,1,6643 -2020-03-27 05:09:55.327,Is it possible to create labels.txt manually?,"I recently convert my model to tensorflow lite but I only got the .tflite file and not a labels.txt for my Android project. So is it possible to create my own labels.txt using the classes that I used to classify? If not, then how to generate labels.txt?","You should be able to generate and use your own labels.txt. The file needs to include the label names in the order you provided them in training, with one name per line.",1.2,True,1,6644 -2020-03-27 17:12:09.890,Python 3.7 pip install - The system cannot find the path specified,"I am using Python 3.7 (Activestate) on a windows 10 laptop. All works well until I try to use pip to install a package (any package). From command prompt, when entering ""pip install anyPackage"" I get an error - ""The system cannot find the path specified."" no other explanation or detail. -Python is installed in ""C:\Python37"" and this location is listed in the Control Panel > System > Environment Variables > User Variables. -In the Environment Variables > System Variables I have: -C:\Python37\ -C:\Python37\DLLs\ -C:\Python37\Script\ -C:\Python37\Tools\ -C:\Python37\Tools\ninja\ -Any suggestions on how to get rid of that error, and make pip work? -Many thanks to all","Short : make sure that pip.exe and python.exe are running from the same location. If they don't (perhaps due to PATH environment variable), just delete the one that you don't need. 
-Longer: -when running pip install, check out where it tries to get python -For instance, in my own computer, it was: - -pip install -Fatal error in launcher: Unable to create process using '""c:\program files\python39\python.exe"" ""C:\Program Files\Python39\Scripts\pip.exe"" ': The system cannot find the file specified. - -Then I ran: -'where python.exe' // got several paths. -'where pip.exe' // got different paths. -removed the one that I don't use. Voila.",0.9950547536867304,False,1,6645 -2020-03-28 04:17:15.583,Change which python im using in terminal MacOs Catalina,"First of all, im really new at Machine Learning and Anaconda -Recently I´ve Installed Anaconda for MachineLearning but now when i try to run my old scripts from my terminal, all my packages are not there, even pip or numpy or pygame y don´t know how to change to my old python directory, I really don´t know how this works, please help me. I´m on MacOs Catalina","First of all, Python 3 is integrated in macOS X Catalina, just type python3. For pip, you can use pip3. Personally, I would prefer native over conda when using mac. -Next, you need to get all the modules up from your previous machine by pip freeze > requirements.txt or pip3 freeze > requirements.txt -If you have the list already, either it's from your previous machine or from a GitHub project repo, just install it via pip3 in your terminal: pip3 install -r requirements.txt -If not, you have to manually install via pip3, for example: pip3 install pygame etc. -After all dependencies are done installed, just run your .py file as usual. -Last, but not least, welcome to the macOS X family!",0.5457054096481145,False,1,6646 -2020-03-28 05:07:30.247,How to locate module inside PyCharm?,"I am a beginner in python 3. I want to locate where the time module is in PyCharm to study it's aspects/functions further. I can't seem to find it in the library. Can someone show me an example on how to find it ? 
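As an aside relevant to the question above, the interpreter itself can report where a module lives. Note, though, that time is compiled into CPython as a built-in C module, so there is no .py file to open for it, while pure-Python stdlib modules such as os do have one (a sketch, independent of PyCharm):

```python
import importlib.util
import os

# Pure-Python stdlib modules expose the path of their source file.
os_path = os.__file__

# `time` is a built-in (C) module in CPython, so there is no source file;
# its import spec reports the origin as "built-in" instead of a path.
time_origin = importlib.util.find_spec("time").origin
```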
-I know there are commands to find files, but I am not advanced enough to use them.","I think you may have a misconception - the time module is part of your Python installation, which PyCharm makes use of when you run files. Depending on your setup, you may be able to view the Python files under ""external libraries"" in your project viewer, but you could also view them from your file system, wherever Python is installed.",0.0,False,1,6647 -2020-03-28 05:40:42.997,How to emit different messages to different users based on certain criteria,"I am building a chat application using flask socketio and I want to send to a specific singular client and I'm wondering how to go about this. -I get that emit has broadcast and include_self arguments to send to all and avoid sending oneself, but how exactly would I go about maybe emitting to a single sid? -I've built this application using standard TCP/UDP socket connection where upon client connecting, there socket info was stored in a dictionary mapped to their user object with attributes that determined what would be sent and when I wanted to emit something to the clients I would iterate through this and be able to control what was being sent. -I'm hoping some mastermind could help me figure out how to do this in flask socket io","I ended up figuring it out. Using the flask request module, you can obtain the users sid using request.sid, which can be stored and emitted to within the room parameter emit(..... room=usersid",0.2012947653214861,False,1,6648 -2020-03-28 11:38:23.503,How to populate module internal progress status to another module?,"let us say I have a python 3.7+ module/script A which does extensive computations. Furthermore, module A is being used inside another module B, where module A is fully encapsulated and module B doesn't know a lot of module's A internal implementation, just that it somehow outputs the progress of its computation. 
-Consider me as the responsible person (person A) for module A, and anyone else (person B) that doesn't know me, is writing module B. So person A is writing basically an open library. -What would be the best way of module A to output its progress? I guess it's a design decision. - -Would a getter in module A make sense so that module B has to always call this getter (maybe in a loop) in order to retrieve the progress of A? -Would it possible to somehow use a callback function which is implemented in module A in such a way that this function is called every time the progress updates? So that this callback returns the current progress of A. -Is there maybe any other approach to this that could be used? - -Pointing me towards already existing solutions would be very helpful!","Essentially module B want to observe module A as it goes though extensive computation steps. And it is up to module A to decide how to compute progress and share this with module B. Module B can't compute progress as it doesn't know details of computation. So its is good use of observer pattern. Module A keeps notifying B about its progress. Form of progress update is also important. It can in terms of percentage, or ""step 5 of 10"" or time. It will actually define the notification payload structure with which module A will notify module B.",0.0,False,1,6649 -2020-03-28 15:31:06.740,Where does kivy.storage.jsonstore saves its files?,"I have a kivy app, where I use JsonStorage. 
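The observer-style notification suggested in the progress-reporting answer above might be sketched like this; the class and payload fields are illustrative, not from any real library:

```python
class Computation:
    """Stand-in for module A: accepts progress callbacks and notifies them."""

    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def run(self, steps=5):
        for step in range(1, steps + 1):
            # ... extensive computation for this step would happen here ...
            payload = {"step": step, "total": steps, "percent": 100 * step // steps}
            for notify in self._observers:   # module A pushes updates to module B
                notify(payload)

# Module B side: register a callback without knowing anything about A's internals.
seen = []
task = Computation()
task.subscribe(seen.append)
task.run(steps=4)
```

This keeps module A's internals encapsulated: module B only decides what to do with each progress payload.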
Where does kivy save the json files, so how can I find it?",I just found out the json file is on the same level as the kivy_venv folder,1.2,True,1,6650 -2020-03-30 13:27:37.467,Python wait for request to be processed by queue and continue processing based on response,"I have the following setup: - -One thread which runs a directory crawler and parses documents -Another thread which processes database requests it gets in a queue - there are two basic database requests that come through - mark document processed (write operation) and is document already -processed (select operation) - -I understand that an sqlite connection object cannot be shared across threads, so the connection is maintained in the database thread. I am new to threading though and in my parser thread I want to check first if a document has been processed which means a database call, but obviously cannot do this call directly and have to send the request to the database thread which is fine. -However, where I am stuck is I am not sure how to make the parser thread wait for the result of the ""has document been processed"" request in the database thread. Is this where a threading event would come in? -Thanks in advance for your help!","Thanks to stovfl, used a threading event to realise this. Thanks again!",0.0,False,1,6651 -2020-03-30 18:29:23.523,How to accesss a python virtual enviroment when the command prompt is accidentally closed?,"I opened a virtual enviroment and accidentally closed the command prompt window in Windows. -I wanted to delete the virtual enviroment folder, but when I tried, it says program is running which still uses the files. -So how do I get back to the virtual enviroment, without opening a new one?",Just kill the daemon-process by command in Ctrl+Alt+Del interface. 
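The threading.Event handshake the asker of the SQLite question above settled on can be sketched as follows: the parser thread posts a request dict plus an Event onto the queue, then blocks until the database thread fills in the result and sets the event (the request format is illustrative):

```python
import queue
import threading

requests = queue.Queue()

def db_thread():
    # Owns the (hypothetical) sqlite connection; serves one request at a time.
    processed = {"doc1"}
    while True:
        req = requests.get()
        if req is None:          # shutdown sentinel
            break
        req["result"] = req["doc"] in processed  # "is document already processed?"
        req["done"].set()                        # wake the waiting parser thread

worker = threading.Thread(target=db_thread, daemon=True)
worker.start()

def is_processed(doc):
    req = {"doc": doc, "done": threading.Event()}
    requests.put(req)
    req["done"].wait()           # block until the database thread answers
    return req["result"]

answer_a = is_processed("doc1")
answer_b = is_processed("doc2")
requests.put(None)
```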
Then you can delete a folder,0.0,False,1,6652 -2020-03-30 19:49:43.310,Recommended python scientific workflow management tool that defines dependency completeness on parameter state rather than time?,"It's past time for me to move from my custom scientific workflow management (python) to some group effort. In brief, my workflow involves long running (days) processes with a large number of shared parameters. As a dependency graph, nodes are tasks that produce output or do some other work. That seems fairly universal in workflow tools. -However, key to my needs is that each task is defined by the parameters it requires. Tasks are instantiated with respect to the state of those parameters and all parameters of its dependencies. Thus if a task has completed its job according to a given parameter state, it is complete and not rerun. This parameter state is NOT the global parameter state but only what is relevant to that part of the DAG. This reliance on parameter state rather than time completed appears to be the essential difference between my needs and existing tools (at least what I have gathered from a quick look at Luigi and Airflow). Time completed might be one such parameter, but in general it is not the time that determines a (re)run of the DAG, but whether the parameter state is congruent with the parameter state of the calling task. There are non-trivial issues (to me) with 'parameter explosion' and the relationship to parameter state and the DAG, but those are not my question here. -My question -- which existing python tool would more readily allow defining 'complete' with respect to this parameter state? It's been suggested that Luigi is compatible with my needs by writing a custom complete method that would compare the metadata of built data ('targets') with the needed parameter state. -How about Airflow? I don't see any mention of this issue but have only briefly perused the docs. 
Since adding this functionality is a significant effort that takes away from my 'scientific' work, I would like to start out with the better tool. Airflow definitely has momentum but my needs may be too far from its purpose. -Defining the complete parameter state is needed for two reasons -- 1) with complex, long running tasks, I can't just re-run the DAG every time I change some parameter in the very large global parameter state, and 2) I need to know how the intermediate and final results have been produced for scientific and data integrity reasons.","I looked further into Luigi and Airflow and as far as I could discern neither of these is suitable for modification to my needs. The primary incompatibility is that these tools are fundamentally based on predetermined DAGs/workflows. My existing framework operates on instantiated and fully specified DAGs that are discovered at run-time rather than concisely described externally -- necessary because knowing whether each task is complete, for a given request, depends on many combinations of parameter values that define the output of that task and the utilized output of all upstream tasks. By instantiated, I mean the 'intermediate' results of individual runs, each described by the full parameter state (variable values) necessary to reproduce (notwithstanding any stochastic element) identical output from that task. -So a 'Scheduler' that operates on a DAG ahead of time is not useful. -In general, most existing workflow frameworks, at least in python, that I've glanced at appear to be designed to automate many relatively simple tasks in an easily scalable and robust manner with parallelization, with little emphasis on incrementally building up more complex analyses whose results must be reused when possible, or on linking complex and expensive computational tasks whose output may in turn be used as input for an additional unforeseen analysis.
-I just discovered the 'Prefect' workflow this morning, and am intrigued to learn more -- at least it is clearly well funded ;-). My initial sense is that it may be less reliant on pre-scheduling and thus more fluid and more readily adapted to my needs, but that's just a hunch. In many ways some of my more complex 'single' tasks might be well suited to wrap an entire Prefect Flow if they played nicely together. It seems my needs are on the far end of the spectrum of deep complicated DAGs (I will not try to write mine out!) with never ending downstream additions. -I'm going to look into Prefect and Luigi more closely and see what I can borrow to make my framework more robust and less baroque. Or maybe I can add a layer of full data description to Prefect... -UPDATE -- discussing with Prefect folks, clear that I need to start with the underlying Dask and see if it is flexible enough -- perhaps using Dask delayed or futures. Clearly Dask is extraordinary. Graphchain built on top of Dask is a move in the right direction by facilitating permanent storage of 'intermediate' output computed over a dependency 'chain' identified by hash of code base and parameters. Pretty close to what I need, though with more explicit handling of those parameters that deterministically define the outputs.",0.3869120172231254,False,1,6653 -2020-04-01 00:12:57.697,"Is it possible to ""customize"" python?","Can I change the core functionality of Python, for example, rewrite it to use say(""Hello world"") instead of print(""Hello world"")? -If this is possible, how can this be done?","yes you can just write -say = print -say(""hello"")",0.0,False,1,6654 -2020-04-01 14:28:25.297,Python Arcsin Arccos radian and degree,"I am working on wind power and speed -u and vare zonal and meridional wind. 
(I have the values of these 2 vectors) -The wind speed is calculated by V = np.sqrt(u2*v2) -Wind direction is given by α between 0 and 360 degree -I know this relation holds - u / V = sin( abs(α)) and - v / V = cos( abs(α)) -In python I am using np.arccos and np.arcsin to try to find α between 0 and 360 with the 2 equation above. For the first one, it returns the radian so I convert with np.rad2deg(...) but it gives me a value between 0 and 180 degree for the second one, I also try to convert but it returns me a valus between 0 and 90. -Anyone knows how to code it? I am lost :(","The underlying problem is mathematics: cos(-x) == cos(x), so the function acos has only values in the [0,pi] interval. And for equivalent reasons asin has values in [-pi/2,pi/2] one. -But trigonometric library designers know about that, and provide a special function (atan2) which uses both coordinates (and not a ratio) to give a value in the [-pi, pi] interval. -That being said, be careful when processing wind values. A 360 wind is a wind coming from the North, and 90 is a wind coming from the East. Which is not the way mathematicians count angles...",0.3869120172231254,False,1,6655 -2020-04-01 19:04:15.633,How to play ogg files on python,"I looked everywhere, I don't find a way to properly play Ogg files, they all play wav! -My question is: Does somebody knows how to play Ogg files in python? -If somebody knows how I'll be very thankful :) -(I am on windows)","The easiest way is probably to start a media player application to play the file using subprocess.Popen. -If you already have a media player associated with Ogg files installed, using the start command should work.",0.0,False,1,6656 -2020-04-02 14:27:00.470,Restore file tabs above main editor in Spyder,I was modifying the layout in Spyder 4.1.1 and somehow lost the filename tabs (names of opened .py files) that used to appear above the central editor window. 
These were the tabs that had the 'X' button in them that allowed you to quickly close them. I've been toggling options in the View and Tools menus but can't seem to get it back. Anyone know how to restore this?,Try it. From menu View --> Panes --> Editor. Clicking on Editor and putting a tick there should bring that back if I understand your question properly,0.2012947653214861,False,2,6657 -2020-04-02 14:27:00.470,Restore file tabs above main editor in Spyder,I was modifying the layout in Spyder 4.1.1 and somehow lost the filename tabs (names of opened .py files) that used to appear above the central editor window. These were the tabs that had the 'X' button in them that allowed you to quickly close them. I've been toggling options in the View and Tools menus but can't seem to get it back. Anyone know how to restore this?,"(Spyder maintainer here) You can restore the tab bar in our editor by going to the menu -Tools > Preferences > Editor > Display -and selecting the option called Show tab bar.",1.2,True,2,6657 -2020-04-02 15:13:40.530,"In a child widget, how do I get the instance of a parent widget in kivy",How do I get the instance of a parent widget from within the child widget in kivy? This is so that I can remove the child widget from within the child widget class from the parent widget.,use parent. or root.ids.,0.0,False,1,6658 -2020-04-02 23:53:56.067,Python version in Visual Studio console,I have set the interpreter to 3.8.2 but when I type in the console python --version it gives me the python 2.7.2. Why is that and how to change the console version so I can run my files with Python 3? In windows console I have of course python 3 when I type the --version.,"(Assuming you use Visual Studio Code with the Python Extension) -The interpreter set in visual studio has nothing to do with the terminal python version when you run python --version. -python --version is bound to what python version is bound to 'python' in your environment variables. 
-Try python3 --version in the visual studio console to see what version is bound to python3. -If this is the right version, use python3 in the visual studio console from now on.",0.0,False,2,6659 -2020-04-02 23:53:56.067,Python version in Visual Studio console,I have set the interpreter to 3.8.2 but when I type in the console python --version it gives me the python 2.7.2. Why is that and how to change the console version so I can run my files with Python 3? In windows console I have of course python 3 when I type the --version.,"The console displayed by VSCode is basically an ordinary terminal. When you run the python file from VSCode using the green arrow at the top, it will call the appropriate python version displayed at the bottom of the VSCode window. You can also see what VSCode executes in the terminal seeing to which python its pointing to.",0.2012947653214861,False,2,6659 -2020-04-03 15:17:33.063,Print an UTF8-encoded smiley,"I am writing an ReactionRoles-Discord-Bot in Python (discord.py). -This Bot saves the ReactionRoles-Smileys as UFT8-Encoded. -The type of the encoded is bytes but it's converted to str to save it. -The string looks something like ""b'\\xf0\\x9f\\x98\\x82'"". -I am using EMOJI_ENCODED = str(EMOJI.encode('utf8')) to encode it, but bytes(EMOJI_ENCODED).decode('utf8') isn't working. -Do you know how to decode it or how to save it in a better way?","The output of str() is a Unicode string. EMOJI is a Unicode string. str(EMOJI.encode('utf8')) just makes a mangled Unicode string. -The purpose of encoding is to make a byte string that can be saved to a file/database/socket. Simply do b = EMOJI.encode() (default is UTF-8) to get a byte string and s = b.decode() to get the Unicode string back.",0.0,False,1,6660 -2020-04-06 15:12:45.273,How can I see the source code for a python library?,"I currently find myself using the bs4/BeautifulSoup library a lot in python, and have recently been wondering how it works. 
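For the emoji question above, the round trip works as long as bytes stay bytes; and a string like "b'\xf0\x9f\x98\x82'" that was already saved can be turned back into real bytes with ast.literal_eval (shown here as a recovery hack for existing data, not as a recommended storage format):

```python
import ast

emoji = "\U0001F602"  # the face-with-tears-of-joy emoji from the question

# Correct round trip: encode to bytes, decode back to str.
data = emoji.encode("utf8")
roundtrip = data.decode("utf8")

# What the bot actually stored: str() of a bytes object, i.e. the text
# of the bytes literal, which bytes(...).decode() cannot undo.
mangled = str(emoji.encode("utf8"))

# Recover by evaluating the bytes literal, then decoding for real.
recovered = ast.literal_eval(mangled).decode("utf8")
```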
I would love to see the source code for the library and don't know how. Does anyone know how to do this? Thanks.","If you are using any IDE, you can right click on imported line and goto Implementation. -Otherwise you can find the source code in \Lib\site-packages directory.",0.1352210990936997,False,2,6661 -2020-04-06 15:12:45.273,How can I see the source code for a python library?,"I currently find myself using the bs4/BeautifulSoup library a lot in python, and have recently been wondering how it works. I would love to see the source code for the library and don't know how. Does anyone know how to do this? Thanks.","Go to the location where python is installed and inside the python folder, you will have a folder called Lib you can find all the packages there open the required python file you will get the code. -example location: C:\Python38\Lib",0.0,False,2,6661 -2020-04-08 01:37:55.407,Identifying positive pixels after color deconvolution ignoring boundaries,"I am analyzing histology tissue images stained with a specific protein marker which I would like to identify the positive pixels for that marker. My problem is that thresholding on the image gives too much false positives which I'd like to exclude. -I am using color deconvolution (separate_stains from skimage.color) to get the AEC channel (corresponding to the red marker), separating it from the background (Hematoxylin blue color) and applying cv2 Otsu thresholding to identify the positive pixels using cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU), but it is also picking up the tissue boundaries (see white lines in the example picture, sometimes it even has random colors other than white) and sometimes even non positive cells (blue regions in the example picture). It's also missing some faint positive pixels which I'd like to capture. -Overall: (1) how do I filter the false positive tissue boundaries and blue pixels? 
and (2) how do I adjust the Otsu thresholding to capture the faint red positives? -Adding a revised example image - - -top left the original image after using HistoQC to identify tissue regions and apply the mask it identified on the tissue such that all of the non-tissue regions are black. I should tru to adjust its parameters to exclude the folded tissue regions which appear more dark (towards the bottom left of this image). Suggestions for other tools to identify tissue regions are welcome. -top right hematoxylin after the deconvolution -bottom left AEC after the deconvolution -bottom right Otsu thresholding applied not the original RGB image trying to capture only the AEC positives pixels but showing also false positives and false negatives - -Thanks","I ended up incorporating some of the feedback given above by Chris into the following possible unconventional solution for which I would appreciate getting feedback (to the specific questions below but also general suggestions for improvement or more effective/accurate tools or strategy): - -Define (but not apply yet) tissue mask (HistoQC) after optimizing HistoQC script to remove as much of the tissue folds as possible without removing normal tissue area -Apply deconvolution on the original RGB image using hax_from_rgb -Using the second channel which should correspond to the red stain pixels, and subtract from it the third channel which as far as I see corresponds to the background non-red/blue pixels of the image. This step removes the high values in the second channel that which up because of tissue folds or other artifacts that weren't removed in the first step (what does the third channel correspond to? The Green element of RGB?) -Blur the adjusted image and threshold based on the median of the image plus 20 (Semi-arbitrary but it works. Are there better alternatives? 
Otsu doesn't work here at all) -Apply the tissue regions mask on the thresholded image, yielding only positive red/red-ish pixels without the non-tissue areas -Count the % of positive pixels relative to the tissue mask area - -I have been trying to apply, as suggested above, the tissue mask on the deconvolution red channel output and then use Otsu thresholding. But it failed since the black background generated by applying the tissue regions mask makes the Otsu threshold detect the entire tissue as positive. So I have proceeded instead to apply the threshold on the adjusted red channel and then apply the tissue mask before counting positive pixels. I am interested in learning what I am doing wrong here. -Other than that, the LoG transformation didn't seem to work well because it produced a lot of stretched bright segments rather than just circular blobs where cells are located. I'm not sure why this is happening.",0.1016881243684853,False,1,6662 -2020-04-09 08:59:03.497,Python: how to write a wrapper to make all variables declared inside a function globals?,"I want a function to be run as if it was written in the main program, i.e. all the variables defined therein can be accessed from the main program. I don't know if there's a way to do that, but I thought a wrapper that gives this behaviour would be cool. It's just hacky and I don't know how to start writing it.","I have pieces of code written inside functions, and I really want to run them and have all the variables defined therein after the run without having to write the lengthy return statements. How can I do that? - -That's what classes are for. Write a class with all your functions as methods, and use instance attributes to store the shared state. Problem solved, no global required.",1.2,True,1,6663 -2020-04-09 15:52:14.077,Python time series using FB Prophet with covid-19,"I have prepared a time series model using FB Prophet for making forecasts.
The model forecasts for the coming 30 days and my data ranges from Jan 2019 until Mar 2020 both months inclusive with all the dates filled in. The model has been built specifically for the UK market -I have already taken care of the following: - -Seasonality -Holidaying Effect - -My question is, that how do I take care of the current COVID-19 situation into the same model? The cases that I am trying to forecast are also dependent on the previous data at least from Jan 2020. So in order to forecast I need to take into account the current coronavirus situation as well that would impact my forecasts apart from seasonality and holidaying effect. -How should I achieve this?","I have had the same issue with COVID at my work with sales forecasting. The easy solution for me was to make an additional regressor which indicates the COVID period, and use that in my model. Then my future is not affected by COVID, unless I tell it that it should be.",0.0,False,1,6664 -2020-04-10 21:53:54.647,How to get auto completion on jupyter notebook?,"I am new to Python language programming. I found that we can have auto completion on Jupyter notebook. I found this suggestion: -""The auto-completion with Jupyter Notebook is so weak, even with hinterland extension. Thanks for the idea of deep-learning-based code auto-completion. I developed a Jupyter Notebook Extension based on TabNine which provides code auto-completion based on Deep Learning. Here's the Github link of my work: jupyter-tabnine. -It's available on pypi index now. Simply issue following commands, then enjoy it:) -pip3 install jupyter-tabnine, -jupyter nbextension install --py jupyter_tabnine, -jupyter nbextension enable --py jupyter_tabnine, -jupyter serverextension enable --py jupyter_tabnine"" -I did 4 steps installation and it looked installed well. However, when I tried using Jupyter notebook its auto completion didn't work. Basically my question is please help how to get auto completion on Jupiter notebook? 
Thank you very much.","After installing Nbextensions, go to Nbextensions in jupyter notebook, tick on Hinterland. Then reopen your jupyter notebook.",1.2,True,2,6665 -2020-04-10 21:53:54.647,How to get auto completion on jupyter notebook?,"I am new to Python language programming. I found that we can have auto completion on Jupyter notebook. I found this suggestion: -""The auto-completion with Jupyter Notebook is so weak, even with hinterland extension. Thanks for the idea of deep-learning-based code auto-completion. I developed a Jupyter Notebook Extension based on TabNine which provides code auto-completion based on Deep Learning. Here's the Github link of my work: jupyter-tabnine. -It's available on pypi index now. Simply issue following commands, then enjoy it:) -pip3 install jupyter-tabnine, -jupyter nbextension install --py jupyter_tabnine, -jupyter nbextension enable --py jupyter_tabnine, -jupyter serverextension enable --py jupyter_tabnine"" -I did 4 steps installation and it looked installed well. However, when I tried using Jupyter notebook its auto completion didn't work. Basically my question is please help how to get auto completion on Jupiter notebook? Thank you very much.",Press tab twice while you are writing your code and the autocomplete tab will show for you. Just select one and press enter,0.5457054096481145,False,2,6665 -2020-04-10 22:10:08.740,Get pixel boundary coordinates from binary image in Python (not edges),"I have a binary image containing a single contiguous blob, with no holes. I would like create a polygon object based on the exterior edges of the edge pixels. I know how to get the edge pixels themselves, but I want the actual coordinates of the pixel boundaries, sorted clockwise or counter-clockwise. All of the pixels have integer coordinates. -For example, say I have a single pixel at (2,2). 
The vertices of the polygon would be: -(2.5, 2.5) -(2.5, 1.5) -(1.5, 1.5) -(1.5, 2.5) -(2.5, 2.5) -Is there an exact, non-approximate way to do this? Preferably in Python?","Based on the comments, here is the approach that I implemented: - -multiply all pixel coordinates by 10, so that we'll only deal with integers. - -For each pixel, generate the 4 corners by adding +/- 5. For example, for (20,20), the corners are (25, 25) (25, 15) (15, 15) (15, 25) (25, 25). And store all the corners in a list. - -Count the occurrences of each corner. If the count is odd, it is a corner to the blob. Making the coordinates integers makes this step easy. Counting floats has issues. - -Divide the blob corner coordinates by 10, getting back the original resolution. - -Sort the corners clockwise using a standard algorithm.",1.2,True,1,6666 -2020-04-11 11:45:48.813,Packaging Kivy application to Android - Windows,"I finished writing the code for a simple game using Kivy. I am having a problem converting it to Android APK, since I am using a windows computer. From some earlier research I got to know that using a Virtual machine is recommended, but I have no idea on how to download and use one :(, and if my slow PC can handle it... please help me. If possible, kindly recommend another way to convert to APK. -I am a beginner at coding as a whole, please excuse me if my question is stupid.",you could just try downloading a virtual box and installing linux operating system or you could directly install it and keep it a drive called F or E and you could just install python on that and all the required pakages and start the build using buildozer as it is not available for windows. So try doing it. But I need to do it just now. 
Tell me after you have tried it because there are a lot of people online on YouTube who would help you with that work,0.0,False,1,6667 -2020-04-11 14:36:32.187,How to stop vscode python terminal without deleting the log?,"When clicking ""Run Python file in terminal"", how do I stop the script? The only way that I found is by clicking the trashcan, which deletes the log in the terminal.",When a Python file is running in the terminal you can hit Ctrl-C to try and interrupt it.,1.2,True,1,6668 -2020-04-11 21:09:01.457,python equivalent to matlab mvnrnd?,"I was just wondering how to go from mvnrnd([4 3], [.4 1.2], 300); in MATLAB code to np.random.multivariate_normal([4,3], [[x_1 x_2],[x_3 x_4]], 300) in Python. -My doubt mainly lies with the sigma parameter, since, in MATLAB, a 2D vector is used to specify the covariance; whereas, in Python, a matrix must be used. -What is the theoretical meaning of that and what is the practical approach to go from one to the other, for instance, in this case? Also, is there a rapid, mechanical way? -Thanks for reading.","Although Python expects a matrix, it is essentially a symmetric covariance matrix, so it has to be a square matrix. -In MATLAB, a 1-by-d sigma vector such as [.4 1.2] specifies the diagonal of the covariance matrix (independent components), so in Python it should look like [[.4, 0],[0, 1.2]]",0.0,False,1,6669 -2020-04-12 14:40:25.627,how to get value of decision variable after maximum iteration limit in gekko,"I have written my code in python3 and solved it using Gekko solver. -After 10000 iterations, I am getting the error maximum iteration reached and solution not found. -So can I get the value of the decision variables after the 10000th iteration? -I mean, even when the maximum iteration is reached, the solver must have a value of the decision variables in the last iteration, so I want to access those values. How can I do that?","Question: -1) I am solving an MINLP problem with APOPT Solver. And my decision variables are defined as integer.
I have retrieved the result of 10,000th iteration as you suggested. but the Decision variables values are non-integer. So why APOPT Solver is calculating a non-integer solution? -Answer: -There is an option on what is classified as an integer. The default tolerance is any number within 0.05 of an integer value. -you can change this by: -m.solver_options = ['minlp_integer_tol 1'] -2) I am running the code for ""m.options.MAX_ITER=100"" and using m = GEKKO() i.e. using remote server. But my code is still running for 10000th iterations. -Answer: Can do it alternatively by: -m.solver_options = ['minlp_maximum_iterations 100'] -Thanks a lot to Prof. John Hedengren for the prompt replies. -Gekko",0.0,False,1,6670 -2020-04-12 17:33:33.530,How to find the index of each leaf or node in a Decision Tree?,"The main question is to find which leaf node each sample is classified. There are thousands of posts on using tree.apply. I am well aware of this function, which returns the index of the leaf node. -Now, I would like to add the leaf index in the nodes of the graph (which I generate by using Graphviz). -Drawing the enumeration technique used for the indexes won't work. The decision tree that I am developing is quite big. Therefore, I need to be able to print the leaf index in the graph. -Another option that I am open to is to generate an array with all the leaf indexes (in the same order) of the leaf nodes of the decision tree. Any hint on how to do this?","There is a parameter node_ids of the command export_graphviz. When this parameter is set to True, then the indexes are added on the label of the decision tree.",1.2,True,1,6671 -2020-04-12 18:23:13.403,"Is it possible to reuse a widget in Tkinter? If so, how can I do it using classes?","I'm using classes and such to make a calculator in Tkinter, however I want to be able to be able to reuse widgets for multiple windows. 
How can I do this if this is possible?","A widget may only exist in one window at a time, and cannot be moved between windows (the root window and instances of Toplevel).",0.2012947653214861,False,2,6672 -2020-04-12 18:23:13.403,"Is it possible to reuse a widget in Tkinter? If so, how can I do it using classes?","I'm using classes and such to make a calculator in Tkinter, however I want to be able to reuse widgets for multiple windows. How can I do this if this is possible?","As you commented: - -I'm making a calculator, as mentioned, and I want to have a drop-down menu on the window that, when selected, closes the root window and opens another, and I want to have the drop-down menu on all the different pages, 5 or 6 in all - -In this case, just write a function that creates the menu. -Then call that function when creating each of the windows.",1.2,True,2,6672 -2020-04-12 23:48:11.210,Distributed computing for multiplying numbers,"Can you show me how I can multiply two integers which are M bits long using at most O(N^1.63) processors in O(N) parallel time in Python? -I think that the Karatsuba algorithm would work but I don't understand how I can implement it in parallel.","Yes, it is the parallel Karatsuba algorithm.",-0.3869120172231254,False,1,6673 -2020-04-13 08:07:07.280,how can i find IDLE in my mac though i installed my python3 with pip?,"I entered '''idle''' in the terminal -and it only shows me the python2 that was already there. -How can I see the python3 IDLE on my Mac -when I installed python3 with pip?","you can specify the version -idle3",0.0,False,1,6674 -2020-04-13 10:32:47.843,how i ask for input in my telegram chat bot python telebot,"I am trying to get input from the user and send this input to all bot subscribers.
-so I need to save the input in a variable and use it afterwards in the send_message method, but I don't know how to make my bot wait for user input or what method I should use to receive user input -thanks :]","If you want to get a user's input, the logic is a bit different. I suppose you are using long polling. -When the bot asks the user for input, you can just save a boolean/string in a global variable, let's suppose the variable is user_input: -You receive the update, and ask the user for input, then you set user_input[user_id]['input'] = True -Then when you receive another update you just check that variable with an if (if user_input[user_id]['input']: # do something). - -If your problem is 403 Forbidden for ""user has blocked the bot"", you can't do anything about it.",0.0,False,1,6675 -2020-04-13 10:39:34.980,How to monitor key presses in Python 3.7 using IDLE on Mac OSX,"Using Python 3.7 with IDLE, on Mac. I want to be able to monitor key presses and immediately return either the character or its ASCII or Unicode value. I can't see how to use pynput with Idle. Any ideas please?","You can't. IDLE uses the tkinter interface to the tcl/tk GUI framework. The IDLE doc has a section currently titled 'Running user code' with this paragraph. - -When Shell has the focus, it controls the keyboard and screen. This is normally transparent, but functions that directly access the keyboard and screen will not work. These include system-specific functions that determine whether a key has been pressed and if so, which.",1.2,True,1,6676 -2020-04-13 14:37:13.480,OpenCV built from source: Pycharm doesn't get autocomplete information,"I'm trying to install OpenCV into my python environment (Windows), and I'm almost all of the way there, but still having some issues with autocomplete and Pycharm itself importing the library. I've been through countless other related threads, but it seems like most of them are either outdated, for prebuilt versions, or unanswered.
-I'm using Anaconda and have several environments, and unfortunately installing it through pip install opencv-contrib-python doesn't include everything I need. So, I've built it from source, and the library itself seem to be working fine. The build process installed some things into ./Anaconda3/envs/cv/Lib/site-packages/cv2/: __init__.py, some config py files, and .../cv2/python-3.8/cv2.cp38-win_amd64.pyd. I'm not sure if it did anything else. -But here's where I'm at: - -In a separate environment, a pip install opencv-contrib-python both runs and has autocomplete working -In this environment, OpenCV actually runs just fine, but the autocomplete doesn't work and Pycharm complains about everything, eg: Cannot find reference 'imread' in '__init__.py' -Invalidate Caches / Restart doesn't help -Removing and re-adding the environment doesn't help -Deleting the user preferences folder for Pycharm doesn't help -Rebuilding/Installing OpenCV doesn't help -File->Settings->Project->Project Interpreter is set correctly -Run->Edit Configuration->Python Interpreter is set correctly - -So my question is: how does Pycharm get or generate that autocomplete information? It looks like the pyd file is just a dll in disguise, and looking through the other environment's site-packages/cv2 folder, I don't see anything interesting. I've read that __init__.py has something to do with it, but again the pip version doesn't contain anything (except there's a from .cv2 import *, but I'm not sure how that factors in). The .whl file you can download is a zip that only contains the same as what 'pip install' gets. -Where does the autocomplete information get stored? Maybe there's some way to copy it from one environment to another? It would get me almost all the way there, which at this point would be good enough I think. Maybe I need to rebuild it with another flag I missed?","Got it finally! Figures that would happen just after posting the question... 
-Turns out .../envs/cv/site-packages/cv2/python-3.8/cv2.cp38-win_amd64.pyd needed to be copied to .../envs/cv/DLLs/. Then PyCharm did it's magic and is now all good.",0.6730655149877884,False,2,6677 -2020-04-13 14:37:13.480,OpenCV built from source: Pycharm doesn't get autocomplete information,"I'm trying to install OpenCV into my python environment (Windows), and I'm almost all of the way there, but still having some issues with autocomplete and Pycharm itself importing the library. I've been through countless other related threads, but it seems like most of them are either outdated, for prebuilt versions, or unanswered. -I'm using Anaconda and have several environments, and unfortunately installing it through pip install opencv-contrib-python doesn't include everything I need. So, I've built it from source, and the library itself seem to be working fine. The build process installed some things into ./Anaconda3/envs/cv/Lib/site-packages/cv2/: __init__.py, some config py files, and .../cv2/python-3.8/cv2.cp38-win_amd64.pyd. I'm not sure if it did anything else. -But here's where I'm at: - -In a separate environment, a pip install opencv-contrib-python both runs and has autocomplete working -In this environment, OpenCV actually runs just fine, but the autocomplete doesn't work and Pycharm complains about everything, eg: Cannot find reference 'imread' in '__init__.py' -Invalidate Caches / Restart doesn't help -Removing and re-adding the environment doesn't help -Deleting the user preferences folder for Pycharm doesn't help -Rebuilding/Installing OpenCV doesn't help -File->Settings->Project->Project Interpreter is set correctly -Run->Edit Configuration->Python Interpreter is set correctly - -So my question is: how does Pycharm get or generate that autocomplete information? It looks like the pyd file is just a dll in disguise, and looking through the other environment's site-packages/cv2 folder, I don't see anything interesting. 
I've read that __init__.py has something to do with it, but again the pip version doesn't contain anything (except there's a from .cv2 import *, but I'm not sure how that factors in). The .whl file you can download is a zip that only contains the same as what 'pip install' gets. -Where does the autocomplete information get stored? Maybe there's some way to copy it from one environment to another? It would get me almost all the way there, which at this point would be good enough I think. Maybe I need to rebuild it with another flag I missed?","Alternatively add the directory containing the .pyd file to the interpreter paths. -I had exactly this problem with OpenCV 4.2.0 compiled from sources, installed in my Conda environment and PyCharm 2020.1. -I solved this way: - -Select project interpreter -Click on the settings button next to it and then clicking on the Show paths for selected interpreter -adding the directory containing the cv2 library (in my case in the Conda Python library path - e.g. miniconda3/lib/python3.7/site-packages/cv2/python-3.7). In general check the site-packages/cv2/python-X.X directory)",0.6730655149877884,False,2,6677 -2020-04-14 08:45:43.027,How to classify English words according to topics with python?,"How to classify English words according to topics with python? Such as THE COUNTRY AND GOVERNMENT: regime, politically, politician, official, democracy......besides, there are other topics: education/family/economy/subjects and so on. -I want to sort out The Economist magazine vocabularies and classify these according to frequency and topic. -At present, I have completed the words frequency statistics, the next step is how to classify these words automatically with python?","It sounds quite tough to handle it. Also it is not a simple task. If I were you, I consider 2 ways to do what you ask. - -Make your own rule for it - -If you complete counting the words, then you should match those word to topic. There is no free lunch. 
Make your own rule for classifying categories. e.g. Entertainment has many ""TV"" and ""drama"", so if some text has them, then we can guess it belongs to Entertainment. - -Machine learning. - -If you can't afford to make rules, let the machine do it. But even in this case, you should label the articles with your desired classes (topics). -Unsupervised pre-training (e.g. clustering) can also be used here, but in the end we need a supervised data set with topics. -You should decide on a taxonomy of topics. - - -Welcome to the ML world. -Hope this helps you find the right starting point.",0.0,False,1,6678 -2020-04-15 06:06:44.530,Can I use JWTAuthentication twice for a login authentication?,"In my login flow, in the first place I want to send an OTP, and in the second place verify the OTP and then return the token. -I am using rest_framework_simplejwt JWTAuthentication. In the first place I am verifying the user and sending the OTP, not returning the token, and in the second place I am verifying the OTP and returning the token. -Let me know if this is the correct way to do it. If not, how can I implement this using JWTAuthentication? -Or, if this is not the correct way, can I use Basic authentication in the first place to verify the user and JWT authentication in the second place to verify the OTP and send the tokens? Let me know your solution.","What I understood: -You need to send an OTP to the current user who is hitting your send_otp route after checking whether the user exists in your system, and then a verify_otp route which will verify the OTP that the user has sent in the API along with its corresponding mobile_number/email_id. -How to do it? - -send_otp - Keep this route open, you don't need authentication for this, not even Basic Auth (that's how it works in industry), just get the mobile_number from the user in the request, check whether it exists in the DB, and send the OTP to this number, and set the OTP for the corresponding user in your cache maybe for rechecking (redis/memcache).
Use throttling for this route so that nobody will be able to exploit this API of yours. -verify_otp - This route will also be open (no authentication_class/permission_classes), get the mobile_number/email id + OTP from the user, verify it in cache, if verified, generate the token using TokenObtainPairSerializer and send the refresh + access token in the response, if the OTP is incorrect, send 401.",0.0,False,1,6679 -2020-04-15 11:29:23.330,send request in selenium without clicking on the send button in python,"I have python script that use selenium to login website, you should insert the username and password and captcha for submit button to login.after this login webpage have send button for send form information with post request, how can i bypass this clicking in button and send the post request without clicking on the button ?","If you mean that you want to try and bypass the captcha and go straight to the send button, I doubt that's possible. If you need to solve recaptchas, check out 2captcha.com and use their API to solve it - which will unlock the send button, theoretically.",0.0,False,1,6680 -2020-04-15 23:29:13.073,What is the use of Celery in python?,I am confused in celery.Example i want to load a data file and it takes 10 seconds to load without celery.With celery how will the user be benefited? Will it take same time to load data?,"Normally, the user has to wait to load the data file to be done on the server. But with the help of celery, the operation will be performed on the server and the user will not be involved. Even if the app crashes, that task will be queued. - -Celery will keep track of the work you send to it in a database - back-end such as Redis or RabbitMQ. This keeps the state out of your - app server's process which means even if your app server crashes your - job queue will still remain. 
Celery also allows you to track tasks - that fail.",0.0,False,2,6681 -2020-04-15 23:29:13.073,What is the use of Celery in python?,I am confused in celery.Example i want to load a data file and it takes 10 seconds to load without celery.With celery how will the user be benefited? Will it take same time to load data?,"Celery, and similar systems like Huey are made to help us distribute (offload) the amount of processes that normally can't execute concurrently on a single machine, or it would lead to significant performance degradation if you do so. The key word here is DISTRIBUTED. -You mentioned downloading of a file. If it is a single file you need to download, and that is all, then you do not need Celery. How about more complex scenario - you need to download 100000 files? How about even more complex - these 100000 files need to be parsed and the parsing process is CPU intensive? -Moreover, Celery will help you with retrying of failed tasks, logging, monitoring, etc.",1.2,True,2,6681 -2020-04-16 12:20:31.583,Set Custom Discord Status when running/starting a Program,"I am working on a application, where it would be cool to change the Status of your Discord User you are currently logged in to. For example when i start the appplication then the Status should change to something like ""Playing Program"" and when you click on the User's Status then it should display the Image of the Program. -Now i wanted to ask if this is somehow possible to make and in which programming Languages is it makeable? -EDIT: Solved the Problem with pypresence","In your startup, where DiscordSocketClient is available, you can use SetGameAsync(). This is for C# using Discord.NET. 
-To answer your question, I think any wrapper for Discord's API allows you to set the current playing game.",0.0,False,1,6682 -2020-04-16 23:05:48.100,Save periodically gathered data with python,"I periodically receive data (every 15 minutes) and have them in an array (numpy array to be precise) in python, that is roughly 50 columns, the number of rows varies, usually is somewhere around 100-200. -Before, I only analyzed this data and tossed it, but now I'd like to start saving it, so that I can create statistics later. -I have considered saving it in a csv file, but it did not seem right to me to save high amounts of such big 2D arrays to a csv file. -I've looked at serialization options, particularly pickle and numpy's .tobytes(), but in both cases I run into an issue - I have to track the amount of arrays stored. I've seen people write the number as the first thing in the file, but I don't know how I would be able to keep incrementing the number while having the file still opened (the program that gathers the data runs practically non-stop). Constantly opening the file, reading the number, rewriting it, seeking to the end to write new data and closing the file again doesn't seem very efficient. -I feel like I'm missing some vital information and have not been able to find it. I'd love it if someone could show me something I can not see and help me solve the problem.","Saving on a csv file might not be a good idea in this case, think about the accessibility and availability of your data. Using a database will be better, you can easily update your data and control the size amount of data you store.",0.3869120172231254,False,1,6683 -2020-04-17 01:04:58.020,Tkinter - how to prevent user window resize from disabling autoresize?,"I have a question related to an annoying behavior I have observed recently in tkinter. When there's no fixed window size defined, the main window is expanded when adding new frames, which is great. 
However, if prior to adding a new widget the user only so much as touches the resizing handles, resizing the main window manually, then the window does not expand to fit the new widget. Why is that so and is there a way to prevent this behavior? -Thanks in advance!","The why is because tkinter was designed to let the user ultimately control the size of the window. If the user sets the window size, tkinter assumes it was for a reason and thus honors the requested size. -To get the resize behavior back, pass an empty string to the geometry method of the window.",0.3869120172231254,False,1,6684 -2020-04-17 06:16:21.087,how to get s3 object key by object url when I use aws lambda python?or How to get object by url?,"I use python boto3 -when I upload file to s3,aws lambda will move the file to other bucket,I can get object url by lambda event,like -https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg -The object key is xxx/xxx/xxxx/xxxx/diamond+white.side.jpg -This is a simple example,I can replace ""+"" get object key, there are other complicated situations,I need to get object key by object url,How can I do it? -thanks!!","You should use urllib.parse.unquote and then replace + with space. -From my knowledge, + is the only exception from URL parsing, so you should be safe if you do that by hand.",0.2012947653214861,False,1,6685 -2020-04-17 17:10:44.803,Django REST Cache Invalidation,"I have a Django project and API view implemented with the Rest framework. I'm caching it using the @cache_page decorator but I need to implement a cache invalidation and I'm not seeing how to do that - do I need a custom decorator? -The problem: -The view checks the access of the API KEY and it caches it from the previous access check but, if the user changes the API KEY before the cache expires, the view will return an OK status of the key that no longer exists.","Yes, you'll need a cache decorator that takes the authentication/user context into account. 
cache_page() only works for GET requests, and keys based on the URL alone. -Better yet, though, - -Don't use a cache until you're sure you need one -If you do need it (think about why; cache invalidation is one of the two hard things), use a more granular cache within your view, not cache_page().",1.2,True,1,6686 -2020-04-17 17:23:20.373,Python3: can't open file 'sherlock.py' [Errno 2] No such file or directory,"So I am new to Kali Linux and I have installed the infamous Sherlock, nonetheless when I used the command to search for usernames it didn't work (Python3: can't open file 'sherlock.py' [Errno 2] No such file or directory). Naturally I tried to look up at similiar problems and have found that maybe the problem is located on my python path. -Which is currently located in /usr/bin/python/ and my pip is in /usr/local/bin/pip. Is my python and pip installed correctly in the path? If not, how do I set a correct path? -However if it is right and has no correlation with the issue, then what is the problem?",You have to change directory to sherlock twice. (it works for me),0.2012947653214861,False,1,6687 -2020-04-17 22:40:16.480,How to create individual node sets using abaqus python scripting?,"I am new to Python scripting in Abaqus. I am aware how to use the GUI but not really familiar with the scripting interface. However, I would like to know one specific thing. I would like to know how to assign a set to each individual node on a geometry's edges. I have thought about referencing the node numbers assigned to the geometry edges but don't know how I will do it. -The reason for creating a set for each node is that I would like to apply Periodic Boundary Conditions (PBC). Currently my model is a 2D Repeating Unit Cell (RUC) and I would like to apply a constraint equation between the opposite nodes on the opposite edges. To do that, I need to create a set for each node and then apply an equation on the corresponding set of nodes. 
-Just to add that the reason why I would like to use the Python scripting interface is that through the GUI, I can only make sets of nodes and create constraint equations for a simple mesh. But for a refined mesh, there will be a lot more constraint equations and a whole lot more sets. -Any suggestion of any kind would be really helpful.","One way would be with the help of the getByBoundingBox(...) method, available for selecting nodes inside a particular bounding box. - -allNodes = mdb.models[name].parts[name].nodes - allNodes.getByBoundingBox(xMin, yMin, zMin, xMax, yMax, zMax) - mdb.models[name].parts[name].Set(name=, region=) - -One could always look for pointers in the replay file *.rpy of the current session, which is mostly machine-generated Python code of the manual steps done in CAE. -Abaqus > Scripting Reference > Python commands > Mesh commands > MeshNodeArray object and Abaqus > Scripting Reference > Python commands > Region commands > Set object contain the relevant information.",1.2,True,1,6688 -2020-04-19 16:28:41.883,Python separate thread for list which automatically removes after time limit,"I want to have a list which my main process will add data to, and this separate thread will see the data added, wait a set amount of time, e.g. 1 minute, then remove it from the list. I'm not very experienced with multi-threading in Python so I don't know how to do this.","The way you could achieve this is by using a global variable as your list, as your thread will be able to access data from it. You can use a deque from the collections library, and each time you add something to the queue, you spawn a new thread that will just pop from the front after waiting that set amount of time. -Although, you have to be careful with race conditions. It may happen that you try to write something at one end in your main thread and at the same time erase something from the beginning in one of your new threads, and this will cause unexpected behavior.
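A minimal sketch of that deque approach (names illustrative, assuming a constant TTL so the oldest entry is always the one expiring next), with a lock guarding both ends:

```python
import threading
import time
from collections import deque

items = deque()
lock = threading.Lock()  # guards appends and pops on the shared deque

def add_item(value, ttl=60):
    """Add value to the shared list and schedule its removal after ttl seconds."""
    with lock:
        items.append(value)
    t = threading.Thread(target=_expire, args=(ttl,), daemon=True)
    t.start()
    return t  # returned so a caller can join() it if needed

def _expire(ttl):
    time.sleep(ttl)
    with lock:
        if items:
            items.popleft()  # with a constant ttl, the oldest entry expires first
```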
-The best way to avoid this is by using a lock.",0.0,False,1,6689 -2020-04-20 05:00:44.977,How do you pass a session object in TensorFlow v2?,"I have a function change_weight() that modifies weights in any given model. This function resides in a different Python file. -So if I have a simple neural network that classifies MNIST images, I test the accuracy before and after calling this function and I see that it works. This was easy to do in TensorFlow v1, as I just had to pass the Session sess object in the function call, and I could get the weights of this session in the other file. -With eager execution in TensorFlow v2, how do I do this? I don't have a Session object anymore. What do I pass?",I was able to do this by passing the Model object instead and getting the weights via model.trainable_variables in the other function.,1.2,True,1,6690 -2020-04-20 14:29:58.670,PyQt5 Designer is not working: This application failed to start because no Qt platform plugin could be initialized,"I have a problem with PyQt5 Designer. I installed PyQt with -pip install PyQt5 and then -pip install PyQt5-tools -everything OK.
But when I try to run Designer it opens a message box with the error: This application failed to start because no Qt platform plugin could be initialized! -How do I deal with it?","Go to => -Python38>lib>site-packages>PyQt5>Qt>plugins -In plugins, copy the platform folder -After that go to -Python38>lib>site-packages>PyQt5_tools>Qt>bin -and paste the folder here. Do copy and replace. - -This will surely work. -Now you can use the designer tool and have some fun with Python.",0.9999999406721016,False,3,6691 -2020-04-20 14:29:58.670,PyQt5 Designer is not working: This application failed to start because no Qt platform plugin could be initialized,"I have a problem with PyQt5 Designer. I installed PyQt with -pip install PyQt5 and then -pip install PyQt5-tools -everything OK. But when I try to run Designer it opens a message box with the error: This application failed to start because no Qt platform plugin could be initialized! -How do I deal with it?","Try running it using the command: pyqt5designer -It should set all the paths for the libraries. -Works on Python 3.8, pyqt5-tools 5.15",0.9974579674738372,False,3,6691 -2020-04-21 06:23:06.150,What happens in the background when we pass the command python manage.py createsuperuser in Django?,"I'm working on Django and I know that to create an account to log in to the admin page I have to create a superuser. And for that we have to pass the command python manage.py createsuperuser. -But my question is: when we pass this command, what happens first and why, and after that what happens? Which methods and classes are called to create a superuser? -I know it's a weird question but I wanted to know how this mechanism works. -Thanks in advance!!","Other people will answer this in detail but let me tell you in short what happens.
-First, when you pass the command python manage.py createsuperuser you will be prompted to fill in the fields mentioned in USERNAME_FIELD and REQUIRED_FIELDS. When you have filled in those fields, Django will call your user model manager's create_superuser function, and the code in it will execute and return a superuser. -I hope this will help you.",1.2,True,1,6692 -2020-04-21 12:05:53.207,Stripe too many high risk payments,"I'm using the Stripe subscription API to provide multi-tier accounts for my users, but about 50% of the transactions that I get in Stripe are declined and flagged as fraudulent. How can I diagnose this issue, knowing that I'm using the default base code provided in the Stripe documentation (front end) and using the Stripe Python module (backend)? -I know that I haven't provided much information, but that is only because there isn't much to provide. The code is known to anyone who has used Stripe before, and there isn't any issue with it as there are transactions that work normally. -Thank you!","After contacting Stripe support, I found that many payments were done by people from an IP address that belongs to a certain location with a card that is registered to a different location. -For example, someone using a French debit card from England. I did ask Stripe to look into this issue.",1.2,True,1,6693 -2020-04-21 13:14:00.810,how to make my python project as a software in ubuntu,"I've made a Python program using Tkinter (GUI) and I would like to launch it by creating a dedicated icon on my desktop (I want to send the file to my friend, without him having to install Python or any interpreter). -The file is somewhat of a game that I want to share with friends and family, who are not familiar with coding. -I am using Ubuntu OS.","You can use pip3 install pyinstaller, then use pyinstaller to convert your file to a .exe file that can run on Windows using this command: pyinstaller --onefile -w yourfile.
-It can now run without installing anything on Windows, and you can use Wine to run it on Ubuntu.",1.2,True,1,6694 -2020-04-21 19:42:41.703,How to set attribute in nifi processor using pure Python not jython?,"How do I set properties (attributes) in a NiFi processor using pure Python in the ExecuteStreamCommand processor? -I don't want to use Jython. I know it can be done using nipyapi, but I don't know how to do it. I just want to create an attribute using a Python script.","How to set properties (attributes) in a NiFi processor using pure Python in the ExecuteStreamCommand processor; I don't want to use Jython - -You can't do it from ExecuteStreamCommand. The Python script doesn't have the ability to interact with the ProcessSession, which is what it would need to set an attribute. You'd need to set up some operations after it to add the attributes, like an UpdateAttribute instance.",0.0,False,1,6695 -2020-04-22 10:04:50.737,Navigating through Github repos,"I am currently trying to find end-to-end speech recognition solutions to implement in Python (I am a data science student btw). I have searched for projects on GitHub and find it very hard to comprehend how these repositories work and how I can use them for my own project. -I am mainly confused about the following: - -How do repositories usually get used by other developers and how can I use them best for my specific issue? -How do I know if the proposed solution is working in Python? -What is the usual process for installing the project from the repo? - -Sorry for the newbie question but I am fairly new to this. -Thank you","You can read the documentation (README.md); there you can find all the information you need. -You can install the project from a repo by cloning it or by downloading the zip.",0.0,False,1,6696 -2020-04-22 14:02:22.120,"Agent repeats the same action cycle non stop, Q learning","How can you prevent the agent from non-stop repeating the same action cycle? -Of course, somehow with changes in the reward system.
But are there general rules you could follow or try to include in your code to prevent such a problem? - -To be more precise, my actual problem is this one: -I'm trying to teach an ANN to learn Doodle Jump using Q-Learning. After only a few generations the agent keeps jumping on one and the same platform/stone over and over again, non-stop. It doesn't help to increase the length of the random-exploration time. -My reward system is the following: - -+1 when the agent is living -+2 when the agent jumps on a platform --1000 when it dies - -An idea would be to reward it negatively, or at least with 0, when the agent hits the same platform as it did before. But to do so, I'd have to pass a lot of new input parameters to the ANN: the x,y coordinates of the agent and the x,y coordinates of the last visited platform. -Furthermore, the ANN would then also have to learn that a platform is 4 blocks thick, and so on. -Therefore, I'm sure that the idea I just mentioned wouldn't solve the problem; on the contrary, I believe that the ANN would in general simply not learn well anymore, because there are too many useless and complex-to-understand inputs.","This is not a direct answer to the very generally asked question. - -I found a workaround for my particular DoodleJump example; probably someone does something similar and needs help: - -While training: Let every platform the agent jumped on disappear afterwards, and spawn a new one somewhere else.
-While testing/presenting: You can disable the new ""disappear feature"" (so that it's like it was before again) and the player will play well and won't hop on one and the same platform all the time.",1.2,True,1,6697 -2020-04-22 19:47:18.237,Is vectorization a hardware/framework specific feature or is it a good coding practice?,"I am trying to wrap my head around vectorization (for numerical computing), and I'm coming across seemingly contradictory explanations: - -My understanding is that it is a feature built into low-level libraries that takes advantage of the parallel processing capabilities of a given processor to perform operations against multiple data points simultaneously. -But several tutorials seem to describe it as a coding practice that one incorporates into their code for more efficiency. How is it a coding practice, if it is also a feature you either have or don't have in the framework you are using? - -A more concrete explanation of my dilemma: - -Let's say I have a loop to calculate an operation on a list of numbers in Python. To vectorize it, I just import Numpy and then use an array function to do the calculation in one step instead of having to write a time-consuming loop. The low-level C routines used by Numpy will do all the heavy lifting on my behalf. - -Knowing about Numpy and how to import it and use it is not a coding practice, as far as I can tell. It's just good knowledge of tools and frameworks, that's all. -So why do people keep referring to vectorization as a coding practice that good coders leverage in their code?","Vectorization leverages the SIMD (Single Instruction Multiple Data) instruction set of modern processors. For example, assume your data is 32 bits; back in the old days one addition would cost one instruction (say 4 clock cycles, depending on the architecture). Intel's latest SIMD instructions now process 512 bits of data all at once with one instruction, enabling you to make 16 additions in parallel.
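As a small illustration of the payoff (assuming NumPy is installed), here is the same elementwise addition written as an interpreted loop and as a single vectorized call:

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# interpreted loop: one bytecode round-trip per element
loop_result = np.empty_like(a)
for i in range(len(a)):
    loop_result[i] = a[i] + b[i]

# vectorized: one call, the loop runs in compiled (often SIMD) code
vec_result = a + b

assert np.array_equal(loop_result, vec_result)
```

Both produce identical results; only the second hands the whole loop to compiled code.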
-Unless you are writing assembly code, you had better make sure that your code is efficiently compiled to leverage the SIMD instruction set. This is taken care of by the standard packages. -Your next speed-up opportunities are in writing code to leverage multicore processors and to move your loops out of interpreted Python. Again, this is taken care of by libraries and frameworks. -If you are a data scientist, you should only care about calling the right packages/frameworks, avoid reimplementing logic already offered by the libraries (with loops being a major example) and just focus on your application. If you are a framework/low-level code developer, you had better learn the good coding practices or your package will never fly.",0.2655860252697744,False,2,6698 -2020-04-22 19:47:18.237,Is vectorization a hardware/framework specific feature or is it a good coding practice?,"I am trying to wrap my head around vectorization (for numerical computing), and I'm coming across seemingly contradictory explanations: - -My understanding is that it is a feature built into low-level libraries that takes advantage of the parallel processing capabilities of a given processor to perform operations against multiple data points simultaneously. -But several tutorials seem to describe it as a coding practice that one incorporates into their code for more efficiency. How is it a coding practice, if it is also a feature you either have or don't have in the framework you are using? - -A more concrete explanation of my dilemma: - -Let's say I have a loop to calculate an operation on a list of numbers in Python. To vectorize it, I just import Numpy and then use an array function to do the calculation in one step instead of having to write a time-consuming loop. The low-level C routines used by Numpy will do all the heavy lifting on my behalf. - -Knowing about Numpy and how to import it and use it is not a coding practice, as far as I can tell.
It's just good knowledge of tools and frameworks, that's all. -So why do people keep referring to vectorization as a coding practice that good coders leverage in their code?","Vectorization can mean different things in different contexts. In numpy we usually mean using the compiled numpy methods to work on whole arrays. In effect it means moving any loops out of interpreted Python and into compiled code. It's very specific to numpy. -I came to numpy from MATLAB years ago, and APL before that (and physics/math as a student). Thus I've been used to thinking in terms of whole arrays/vectors/matrices for a long time. -MATLAB now has a lot of just-in-time compiling, so programmers can write iterative code without a performance penalty. numba (and cython) let numpy users do some of the same, though there are still a lot of rough edges - as can be seen in numba-tagged questions. -Parallelization and other means of taking advantage of modern multi-core computers is a different topic. That usually requires using additional packages. -I took issue with a comment that loops are not Pythonic. I should qualify that a bit. Python does have tools for avoiding large, hard-to-read loops, things like list comprehensions, generators and other comprehensions.
Performing a complex task by stringing together comprehensions and generators is good Python practice, but that's not 'vectorization' (in the numpy sense).",0.5916962662253621,False,2,6698 -2020-04-23 09:33:37.790,How to fix not updating problem with static files in Django port 8000,"So when you make changes to your CSS or JS static file and run the server, sometimes what happens is that the browser skips the static file you updated and loads the page using its cache memory. How can I avoid this problem?","Well, there are multiple ways to avoid this problem - -The simplest way is: - -if you are using Mac: Command+Option+R -if you are using Windows: Ctrl+F5 - - -What it does is re-download the cached files, enabling the update of the static files in the browser. - -Another way is: - -making a new static file, pasting in the existing code of the previously used static file and then -running the server - - -What happens, in this case, is that the browser doesn't use the cache memory for rendering the page as it assumes it is a different file.",1.2,True,2,6699 -2020-04-23 09:33:37.790,How to fix not updating problem with static files in Django port 8000,"So when you make changes to your CSS or JS static file and run the server, sometimes what happens is that the browser skips the static file you updated and loads the page using its cache memory. How can I avoid this problem?",You have DEBUG = False in your settings.py. Switch on DEBUG = True and have fun,0.0,False,2,6699 -2020-04-23 13:21:50.967,How to monitor memory usage of individual celery tasks?,"I would like to know the max memory usage of a celery task, but from the documentation none of the celery monitoring tools provide a memory usage feature. How can one know how much memory a task is taking up? I've tried to get the pid with billiard.current_process and use that with memory_profiler.memory_usage but it looks like the current_process is the worker, not the task.
-Thanks in advance.","Celery does not give this information, unfortunately. With a little bit of work it should not be difficult to implement your own inspect command that actually samples each worker process. Then you have all the necessary data for what you need. If you do this, please share the code around, as other people may need it...",0.0,False,1,6700 -2020-04-23 16:53:28.387,How can I use the PyCharm debugger with Google Cloud permissions?,"I have a simple flask app that talks to Google Cloud Storage. -When I run it normally with python -m api.py it inherits Google Cloud access from my cli tools. -However, when I run it with the PyCharm debugger it can no longer access any Google services. -I've been trying to find a way to have the PyCharm debugger inherit the permissions of my usual shell but I'm not seeing any way to do that. -Any tips on how I can use the PyCharm debugger with apps that require access to Google Cloud?","I usually download the credentials file and set the GOOGLE_APPLICATION_CREDENTIALS=""/home/user/Downloads/[FILE_NAME].json"" environment variable in PyCharm. -I usually create a directory called auth, place the credential file there and add that directory to .gitignore. -I don't know if it is a best practice or not, but it gives me an opportunity to limit what my program can do. So if I write something that may have a disrupting effect, I don't have to worry about it. Works great for me. I later use the same service account and attach it to the Cloud Function and it works out just fine for me.",1.2,True,1,6701 -2020-04-24 02:29:15.327,How often should i run requirements.txt file in my python project?,Working on a python project and using pycharm. Have installed all the packages using requirements.txt. Is it a good practice to run it at the beginning of every sprint or how often should I run the requirements.txt file?,"The answer is NO.
-Let's say you're working on your project, you've already installed all the packages in the requirements.txt into your virtual environment, etc., and at this point your environment is already set up. -If you keep working on the project and install a new package with pip or whatever, your environment is still ok but your requirements.txt is not up to date; you need to update it by adding the new package, but you don't need to reinstall all the packages in it every time this happens. -You only run pip install -r requirements.txt when you want to run your project on a different virtual environment",0.0,False,1,6702 -2020-04-24 18:18:18.077,"Converting days into years, months and days","I know I can use relativedelta to calculate the difference between two dates in the calendar. However, it doesn't fit my needs. -I must consider 1 year = 365 days and/or 12 months; and 1 month = 30 days. -To transform 3 years, 2 months and 20 days into days, all I need is this formula: (365x3)+(30x2)+20, which is equal to 1175. -However, how can I transform days into years, months and days, considering that the amount of days may or may not be higher than 1 month or 1 year? Is there a method in Python that I can use? -Mathematically, I could divide 1175 by 365, multiply the decimals of the result by 365, divide the result by 30 and multiply the decimals of the result by 30. But how could I do that in Python?","You can use days // 365 to get the number of years from days, and days % 365 for the days left over.",-0.3869120172231254,False,1,6703 -2020-04-25 11:10:01.107,Microsoft Visual C++ 14.0 is required error while installing a python module,"I was trying to pip install the netfilterqueue module on my Windows 7 system, in Python 3.8. -It returned the error ""Microsoft Visual C++ 14.0 is required"". -My system already has Microsoft Visual C++ 14.25. Do I still need to install 14.0, or is there a way that I can get out of this error?
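The divide-then-take-the-remainder arithmetic described in the days-conversion question above is exactly what divmod does, so a sketch under the stated 365/30 convention looks like this:

```python
def days_to_ymd(total_days):
    """Convert a day count to (years, months, days) with 1 year = 365 days
    and 1 month = 30 days, per the convention stated in the question."""
    years, rest = divmod(total_days, 365)
    months, days = divmod(rest, 30)
    return years, months, days

# round-trips the question's example: (365*3) + (30*2) + 20 = 1175
# days_to_ymd(1175) -> (3, 2, 20)
```

Note that under this convention 360 days comes out as (0, 12, 0) rather than rolling the 12 months into a year; whether that rollover is wanted is up to the caller.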
-If not, how do I install a lower version without uninstalling or replacing the higher version?","Alright, try uninstalling the higher version and going for the lower version, making sure you download it on the same computer and not another one, remembering that Windows 7 no longer supports some operations, and I would advise you to upgrade to Windows 10",0.3869120172231254,False,1,6704 -2020-04-25 16:20:35.730,how do i go back to my system python using pyenv in Ubuntu,I installed pyenv and switched to Python 3.6.9 (using pyenv global 3.6.9). How do I go back to my system python? Running pyenv global system didn't work,"pyenv sets the python used according to ~/.pyenv/version. For a temporary fix, you can write system in it. Afterwards, you'll need to fiddle through your ~/.*rc files and make sure eval ""$(pyenv init -)"" is called after any changes to PATH made by other programs (such as zsh).",0.1352210990936997,False,1,6705 -2020-04-25 17:06:38.983,Advanced game made in pygame is too slow,"I've been working on a game for a month and it's quite awesome. I'm not very new to game development. -There are no sprites and no images, only primitively drawn circles and rectangles. -Everything works well except that the FPS gets slower the more I work on it, and every now and then the computer starts accelerating its fans and heating up. -My steps every frame (besides input handling): - -updating every object's state (physics, collision, etc.), around 50 objects, some more complex than others -drawing the world, every pixel of a (1024, 512) map -drawing every object, only pygame.draw.circle or similar functions - -There is some text drawing but font.render is used once and all the text surfaces are cached. -Is there any information on how to increase the speed of the game? -Is it mainly complexity or is there something wrong with the way I'm doing it? There are far more complex games (not in pygame) that I play with ease and high FPS on my computer.
-Should I move to a different module like pyglet or OpenGL? -EDIT: thank you all for the quick response, and sorry for the sparse information. I have tried so many things, but in my clumsiness I haven't tried to solve the ""draw every pixel every single frame"" process. I changed that to draw only the changes and now it runs so fast I have to change parameters in order to make it reasonably slow again. thank you :)","Without looking at the code it's hard to say something helpful. -It's possible that you have unnecessary loops/checks when updating objects. -Have you tried increasing/decreasing the number of objects? -How does the performance change when you do that? -Have you tried playing other games made with pygame? -Is your pc just bad? -I don't think that pygame should have a problem with 50 simple shapes. I have some badly optimized games with 300+ objects and 60+ fps (with physics (collision, gravity, etc.)) so I think pygame can easily handle 50 simple shapes. You should probably post a code example of how you iterate your objects and what your objects look like.",1.2,True,1,6706 -2020-04-26 18:20:13.677,Are there any other ways to share / run code?,"So I just created a simple script with selenium that automates the login for my University's portal. The first reaction I got from a friend was: ah nice, you can put that on my pc as well. That would be rather hard, as he'd have to install Python and run it through an IDE or through his terminal or something like that, and the user friendliness wouldn't be optimal. -Is there a way that I could wrap it in a nicer user interface, maybe create an app or something, so that I could just share that program? All they'd have to do is then fill in their login details once and the program then logs them in every time they want. I have no clue what the possibilities for that are, therefore I'm asking this question. -And more in general, how do I get to use my Python code outside of my IDE?
Thus far, I've created some small projects and run them in PyCharm and that's it. Once again, I have no clue what the possibilities are, so I also don't really know what I'm asking. If anyone gets what I mean by using my code further than only in my IDE, I'd love to hear your suggestions!","The IDE running your program is the same as you running your program in the console. But if you don't want them to have Python installed (and they have Windows) you can maybe convert it to an exe with py2exe. But if they have Linux, they probably have Python installed and can run your program with ""python script.py"". But tell your friends to install Python; whether they program or not, it will always come in handy",0.3869120172231254,False,1,6707 -2020-04-27 17:40:11.477,Modular python admin pages,"I'm building a personal website to which I need to apply modularity, for the purpose of learning. What I mean is that there is a model that contains x number of classes with variations; as an example, a button is a module that you can modify as much as you like depending on the provided attributes. I also have a pages model that needs to select any of the created modules and render it. I can't find any documentation on how to access multiple classes from one field to reference them. -Model structure is as below: - -Modules, contains module A and module B -Pages should be able to select any of module A and order its structure. - -Please let me know if this is not clear; this is the simplest form I could describe. Am I confusing this with meta classes? How can one achieve what I'm trying to achieve?","I ended up using Proxy models but will also try the polymorphic approach. This is exactly what they are designed to do: inherit models from a parent model in both one-to-many and many-to-many relationships.",1.2,True,1,6708 -2020-04-27 18:30:21.580,"How to deploy changes made to my django project, which is hosted on pythonanywhere?","I am new to git and Pythonanywhere.
So, I have a live Django website which is hosted with the help of Pythonanywhere. I have made some improvements to it. I have committed and pushed those changes to my Github repository. But now I don't know how to further push those changes to my Pythonanywhere website. I am so confused. Please help!!! Forgive me, I am new to this.","You need to go to the repo on PythonAnywhere in a bash console and run git pull (you may need to run ./manage.py migrate if you made changes to your models), then reload the app on the ""Web"" configuration page on PythonAnywhere.",1.2,True,1,6709 -2020-04-30 04:45:05.280,How is variable assignment implemented in CPython?,"I know that variables in Python are really just references/pointers to some underlying object(s). And since they're pointers, I guess they somehow ""store"" or are otherwise associated with the address of the objects they refer to. -Such an ""address storage"" probably happens at a low level in the CPython implementation. But my knowledge of C isn't good enough to infer this from the source code, nor do I know where in the source to begin looking. -So, my question is: -In the implementation of CPython, how are object addresses stored in, or otherwise associated with, the variables which point to them?","In module scope or class scope, variables are implemented as entries in a Python dict. The pointer to the object is stored in the dict. In older CPython versions, the pointer was stored directly in the dict's underlying hash table, but since CPython 3.6, the hash table now stores an index into a dense array of dict entries, and the pointer is in that array. (There are also split-key dicts that work a bit differently. They're used for optimizing object attributes, which you might or might not consider to be variables.) -In function scope, Python creates a stack frame object to store data for a given execution of a function, and the stack frame object includes an array of pointers to variable values.
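The fixed slots for function locals can be observed from Python itself with the dis module: locals compile to the *_FAST opcodes that index the frame's value array, while module-level names go through the namespace dict:

```python
import dis

def f():
    x = 1
    y = x + 1
    return y

# each local gets a fixed slot index chosen by the bytecode compiler
ops = [ins.opname for ins in dis.get_instructions(f)]
assert any(op.startswith("STORE_FAST") for op in ops)  # array slot, not a dict
assert "STORE_NAME" not in ops                         # no dict lookup for locals
print(f.__code__.co_varnames)  # ('x', 'y') -- the slot order of the locals
```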
Variables are implemented as entries in this array, and the pointer to the value is stored in the array, at a fixed index for each variable. (The bytecode compiler is responsible for determining these indices.)",1.2,True,1,6710 -2020-04-30 09:44:08.403,How to access Flask API from Flask Frontend?,"I am using Blueprints to create two separate modules, one for the api and one for the website. My APIs have a route prefix of api. Now, I have a route in my website called easy, and it will be fetching JSON from a route in api called easy whose route is /api/easy. -So, how can I call /api/easy from /easy? -I have tried using requests to call http://localhost:5000/api/easy and it works fine on the development server, but when I deploy it on an Nginx server it fails, probably because I am exposing port 80 there. -When I deploy my webapp on nginx, it shows up perfectly, it's just that the route /easy throws an Internal Server Error.","Okay, so what worked for me is that I simply ended up calling the api function from the frontend rather than doing the POST requests. Obviously, it makes no sense creating backend routes for Flask separately when you are using Flask in the frontend too. Simply put, a separate utility function would be fine.",1.2,True,1,6711 -2020-05-01 14:00:39.287,How to acknowledge the waypoints are done [DJI ROS Python],"I have a DJI M600 drone and I'm using the ROS DJI SDK on a Raspberry Pi to communicate with it. -I can successfully send waypoint commands and execute them. However, I don't know how to acknowledge that the waypoints are finished. What comes to my mind is that I can check where the drone is in order to compare it with the coordinates I sent. The second solution might be to check how many waypoints are left (I haven't tried it yet). -I wonder if there is a topic that I can subscribe to so that I can ask if the waypoints are completed. What is the proper way to do that?
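The shared-utility pattern from the Flask answer above can be sketched without the framework (all names here are hypothetical; the point is that both views call one plain function instead of one view issuing an HTTP request to the other):

```python
import json

def get_easy_data():
    """Single source of truth used by both the API and the frontend view."""
    return {"difficulty": "easy", "items": [1, 2, 3]}

def api_easy():
    # what the /api/easy route would serialize and return over HTTP
    return json.dumps(get_easy_data())

def easy_page():
    # the /easy frontend view calls the utility directly: no HTTP round-trip,
    # so it keeps working regardless of which port nginx exposes
    data = get_easy_data()
    return f"{len(data['items'])} easy items"
```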
-Thanks in advance, -Cheers","I use a different SDK so I can't help with a code example, but I think you need to look into: -the wayPointEventCallback and wayPointCallback.",1.2,True,1,6712 -2020-05-01 23:28:20.050,Does pywin32 install cause any changes to registry settings that could affect MAPI,"I recently installed pywin32 at a client site and after this occurred, they started experiencing MAPI errors. I cannot see how the install would have had any effect on their emails. pywin32 was simply installed with no errors. I am a novice with Python so I apologise if there is not enough detail or for the lack of understanding on my part. -Pywin32 was installed on a remote desktop and the error they were receiving around this time was ""241938E error - can't open default message store (MAPI)"". The actual Python script using win32com makes no use of MAPI (it is simply used for Word application tasks) and worked without any issues. -The IT firm for the client wants to know if pywin32 causes any changes to registry settings that could have impacted them and caused this error. Incidentally, they had an Office 365 change around the same time. I think the 'finger pointing' is more in that direction but I do need to rule out any related registry setting changes that pywin32 may make on install that could have caused or contributed to the problem they were experiencing.",SOLVED: Problem found to be a Microsoft error - reported April 27,0.3869120172231254,False,1,6713 -2020-05-02 12:55:13.677,Python: Extracting Text from Applications?,"Each month I spend a lot of time extracting numbers from an application into an Excel spreadsheet where our company saves numbers, prices, etc. This application is not open source, so unfortunately sharing a link would not help. -Now, I was wondering whether I could write a Python program that would do this for me instead? But I'm not sure how to do this, particularly the part with extracting the numbers.
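Once the application's text is obtained somehow (clipboard, export file, UI automation), pulling the numbers out of it is a small regex job; a sketch with assumed sample input:

```python
import re

def extract_numbers(text):
    """Find integers and decimals, allowing comma thousands separators."""
    pattern = r"-?\d+(?:,\d{3})*(?:\.\d+)?"
    return [float(m.replace(",", "")) for m in re.findall(pattern, text)]

nums = extract_numbers("Total: 1,234.56 EUR (qty 3)")
# nums == [1234.56, 3.0]
```

The resulting list of floats can then be written out row by row, which matches the "transferring to Excel is trivial" step that follows.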
Once this is done, transferring it to an Excel spreadsheet is particularly trivial.","1) For this you can create a general function like getApplicationError(). -2) In this method you can get the text of the application error (create an XPath for the application error, check whether that element is visible, and if so get its text) and throw an exception to terminate the script; you can pass that message into the Exception constructor and print it with the exception. -As you are creating this method for general use, you need to call it in every possible place where an application error might appear, like just after clicking Create, Save, Submit, Delete or Edit, and also just after entering a value in mandatory fields.",0.0,False,1,6714 -2020-05-02 14:39:50.133,NiFi Parse PDF using Python Tika error: ExecuteStreamCommand,"I'm trying to do the following, but I'm getting errors on my ExecuteStreamCommand: -Cannot run program ""C:\Python36\pythonscript.py"" error=193 not a valid Win32 application"" -This is being run on my home Windows workstation. - -GetFile (Get my PDF) -ExecuteStreamCommand (Call Python script to parse PDF with Tika, and create JSON file) -PutFile (Output file contains JSON that I will use later) - -Does NiFi have a built-in PDF parser? Is there something more NiFi-compatible than Tika? -If not, how do I call one from ExecuteStreamCommand? -Regards and thanks in advance!","Cannot run program ""C:\Python36\pythonscript.py"" error=193 not a valid Win32 application"" - -You need to add a reference to your Python executable to the command to run with ExecuteStreamCommand, as you cannot run Python scripts on Windows with the shebang (#!/usr/bin/python, for example, on Linux).",0.0,False,1,6715 -2020-05-02 16:30:41.347,How can I get someone's telegram chat id with python?,"Hi everyone, I want to create a new Telegram bot similar to @usinifobot. -This is how it works: -you just send someone's username to this bot and it will give you back his/her chat id.
-I want to do the same thing, but I don't know how to do it with python-telegram-bot.","python-telegram-bot can't help with this, because the Telegram bot API can't provide IDs of entities unrelated to the bot - only of ones with which the bot interacts. -You need the Telegram client API for this, and to hold a real account (not a bot) online to make the checks with another library. -With a bot library like python-telegram-bot you can only get the IDs of users who write to the bot, of channels and groups the bot is in, and from messages reposted by other users. -I created a similar bot, @username_to_id_bot, for getting the ID of any entity and ran into all these issues.",0.0,False,1,6716 -2020-05-02 16:39:53.077,Image not loading on mobile device,"I have an app that includes some images; however, when I package it for my Android phone the images are blank. Right now in my kv file, the images are being loaded from my D drive, so how would I get them to load on my phone?",Include the images during the packaging and then load them using a file path relative to your main.py.,0.0,False,1,6717 -2020-05-02 20:44:09.857,What options are there to setup automatic reporting processes for Pandas on AWS?,"I'm currently using Elastic Beanstalk and APScheduler to run Pandas reports automatically every day. The data set is getting larger and I've already increased the memory size 3x. -Elastic Beanstalk is running Dash - a dashboard application - and runs the automated Pandas reports once every night. -I've tried setting up AWS Lambda to run the Pandas reports there, but I couldn't figure out how to use it. -I'm looking for the most cost-effective way to run my reports without having to increase memory usage on Beanstalk. When I run it locally it takes 1 GB, but running it on Beanstalk uses more than 16 GB.
-I'm curious if someone else has a better option or a process for how they automatically run their Pandas reports.","Create an .exe using PyInstaller. -Schedule the .exe with Task Scheduler on your computer. -This is cheaper than scaling AWS Beanstalk resources, which used more resources for the Pandas calculations than my computer did locally, at least in my case.",0.0,False,1,6718 -2020-05-03 11:20:16.410,"How to estimate the optimal model, following from the granger causality test?","Suppose I run the GC-test: -grangercausalitytests(np.vstack((df['target'], df['feature'])).T, maxlag=5) -I can pick the lag of the ""feature"" variable, which most likely Granger-causes the ""target"" variable. - -But what number of lags does the ""target"" variable have in this model? -Further, how do I estimate this ADL model (some autoregressive lags + some lags of the independent variable)? I've seen somewhere that ADL should be substituted with OLS/FGLS in Python, since there is no package for ADL. Yet I do not understand how to do that.","I found out that the model corresponding to each particular number of lags in the GC-test has already been fit and is contained in the test return. The output looks messy, but it's there. -Unfortunately, there seems to be no capacity to estimate ADL models in Python yet :(",1.2,True,1,6719 -2020-05-03 19:56:51.107,Updating code for my application every week,"I'm creating a bot app in Python using Selenium, for people, but I would need to change the XPath code every week. How do I do this once I distribute the app to people? -Thanks in advance","I think the best approach is to locate elements using id rather than XPath, since the id selector won't change once a new feature (adding a table/div to the HTML) is added.
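To illustrate the point with a minimal, pure-Python sketch (the markup and the id value 'save' are invented for this example, and xml.etree stands in for a live browser): a positional path breaks when the page structure changes, while a lookup by id keeps working:

```python
import xml.etree.ElementTree as ET

# Two snapshots of the same (invented) page: v2 gained a wrapper <div>,
# which breaks a positional path but leaves the element's id intact.
v1 = ET.fromstring('<body><form><input id=\'save\'/></form></body>')
v2 = ET.fromstring('<body><div><form><input id=\'save\'/></form></div></body>')

def by_id(root, value):
    # Rough analogue of locating by id: independent of the surrounding layout.
    return root.find('.//*[@id=\'%s\']' % value)

positional = './form/input'                 # structure-dependent locator
print(v1.find(positional) is not None)      # True: matches the old layout
print(v2.find(positional) is not None)      # False: broken by the new <div>
print(by_id(v2, 'save') is not None)        # True: id still matches
```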
Also, this reduces the rework effort to a large extent.",0.0,False,1,6720 -2020-05-03 20:50:01.953,How to manage a large number of clients in socket programming,"In the examples of socket programming projects I saw (most of which were chat projects), all the clients were often saved in one array, and when a message was received from a client, in addition to being saved in the database, it was also sent to all clients. -The question that comes to my mind is: how can a message received from a client be saved in the database and sent to the clients when the number of clients is very large? (I mean, the number of customers is so large that a single server can't meet their demand alone, and several servers are needed to handle the sockets.) -In this case, not all clients can be managed through the array. So how do you transfer a message that is now stored on another server (from another customer) to a customer on this server? (Speed is important.) -Is there a way to quickly become aware of database changes and deliver them to the customer? (For example, Telegram.) -I'm looking for a perspective, not code.","You should use your database as your messaging center. Have the other servers watch for changes in the database, either by subscription or by polling on a time interval. Obviously subscription would be the fastest. -When a message is inserted, each server picks it up and sends it to its list of clients. This should be quite fast for broadcasting messages.",1.2,True,1,6721 -2020-05-03 23:39:19.313,Command payload validation in event sourced micro-service architecture,"I am confused about how to implement data validation in an event-sourced micro-service architecture. -Let me sum up some aspects related to micro-services. -1. Micro-services must be loosely coupled. -2. Micro-services should preferably be domain oriented. -Then, based on tons of materials on the internet and books on DDD (Domain Driven Design), -I created the following event-sourced micro-service architecture.
-Components -1. API gateway, to receive the REST calls from the clients and transform them into commands. -2. Command handler as a service. It receives the commands from the API gateway and performs the validations. It saves the events to the event store and publishes events to the event bus. -3. Event store, the storage for all events in the system. It allows us to recreate the state of the app. The main source of truth. -4. Micro-services, small services responsible for handling the events related to their domain. They make some projections to local private databases. They emit some events too. -And I have questions that I could not answer either by myself or with the internet. -1. What actually are aggregates? Are they the class objects/records in databases, as I think, or what? -2. Who is responsible for aggregates? I found examples where in some cases the command handler uses them. But in that case, if aggregates are stored in the private micro-service databases, then we will have very high coupling between the command handler and each of the micro-services, and that is wrong according to the micro-service concept. -To sum up: -I am confused about how to implement aggregates in an event-sourced micro-service architecture. -For example, let's focus on the user registration implementation in an event-sourced micro-service architecture. -We have the user domain, so the architecture will be as follows. -API gateway -Command handler -Auth micro-service -User micro-service -Please explain to me the implementation of command validation using the example above.","Command handler as a service - -I think this is the main source of your confusion. -The command handler isn't normally a service in itself. It is a pattern. It will normally be running in the same process as the ""microservice"" itself. -IE: the command handler reads a message from some storage, and itself invokes the microservice logic that computes how to integrate the information in this message into its own view of the world.
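A tiny sketch of that loop, with all names invented for illustration (this shows the pattern itself, not any particular framework): the handler pulls the next message and hands it to the service's own logic, which folds it into the service's view of the world:

```python
# Hypothetical sketch of the command-handler pattern described above.
# 'inbox', 'view' and 'integrate' are invented names for illustration.

def handle_messages(inbox, view, integrate):
    # The handler just reads each queued message and invokes the
    # service's own logic to fold it into the service's state.
    while inbox:
        message = inbox.pop(0)
        view = integrate(view, message)
    return view

def integrate(view, message):
    # Toy 'microservice logic': count messages per user.
    user = message['user']
    view[user] = view.get(user, 0) + 1
    return view

final_view = handle_messages(
    [{'user': 'alice'}, {'user': 'bob'}, {'user': 'alice'}], {}, integrate)
print(final_view)   # {'alice': 2, 'bob': 1}
```

Note that the handler and the logic run in one process; only the inbox (a queue or event store subscription in a real system) sits outside it.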
-What actually are aggregates - -""Aggregate"" is a lifecycle management pattern; an aggregate is a graph of one or more domain entities that together will establish and maintain some interesting invariant. It's one of three patterns described in detail in the Domain Driven Design book written by Eric Evans. -The command handler plus your aggregate is, in a sense, your microservice. The microservice will typically handle messages for several instances of a single aggregate - it will subscribe to all of the input messages for that kind of aggregate. The ""handler"" part just reads the next message, loads the appropriate instance of the aggregate, then executes the domain logic (defined within the aggregate entities) and stores the results.",0.3869120172231254,False,1,6722 -2020-05-04 07:09:36.600,List Comprehension to remove unwanted objects. Validating it works as expected,"I'm using Python 3. I am trying to remove certain lists from a list of lists. I found an excellent article that explained how to do that using a list comprehension. It appears to work as expected, but it got me thinking ... In my original efforts I was appending any list object that was to be deleted to a new list. I could then actually look at these objects and assure myself the right ones were being removed. With the comprehension method I can only ""see"" the ones that remain. Is there a way of ""seeing"" what's ""failed"" the list comprehension condition? It would be reassuring to know that only the correct objects have been removed.","I actually managed to answer my own question by making a mistake. To see what will be removed from a list by the list comprehension, simply temporarily invert the condition logic. This will allow you to look at all the elements that will be removed.
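For instance (the list and the condition here are invented for illustration; the same predicate drives both comprehensions, with the second one negated to show what would be dropped):

```python
li = [[1, 2], [3], [4, 5], []]

def keep(x):
    return len(x) > 0          # invented condition: keep non-empty sublists

kept = [x for x in li if keep(x)]
removed = [x for x in li if not keep(x)]   # inverted logic: what gets dropped

print(kept)      # [[1, 2], [3], [4, 5]]
print(removed)   # [[]]
assert len(kept) + len(removed) == len(li)   # nothing lost or duplicated
```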
If you're happy that the removals are as you expect, then simply re-invert the logic back to the original and execute.",0.0,False,1,6723 -2020-05-04 07:38:36.543,Installing python 3.5 or higher in a virtual environment on a raspberry pi,"I am new to this, so I apologize if the step is easy. -I have a device which I am programming, which uses a Raspberry Pi (Debian). I have connected via SSH using PuTTY. -I wish to create a virtual environment, and test a program on the device to search for the WiFi network SSIDs and pick them up. I found that a great package to use is wpa_supplicant. -However, here is the problem: -The device currently has Python 2.7.9 on it. Whenever I create a virtual environment using python3, it creates a venv with Python 3.4. Unfortunately, wpa_supplicant requires Python 3.5 or higher to work. -When I run sudo apt-get install python3-venv, I can see in the prompt that it automatically starts installing packages for python3.4. -Does anyone know how I can specify that I wish to install Python 3.5 or 3.7? -Any help would be greatly appreciated. -Regards -Scott","Does it not have the python3.7 command? -I just checked a venv I have running on a 3b+ and it seems to have it.",0.0,False,1,6724 -2020-05-04 12:17:02.553,import sqlite3.dll from another file location python,"I'm new to Python, but how do I import the sqlite3.dll file from a custom file location? I can't find anything about it. I can accept any option, including building a new pyd, dll, etc. file. -Edit: -I need it to be in a separate location.","Note: The following answers the above question with more thorough steps. -I had the same issue, as administrative rights to the default Python library are blocked in a corporate environment and it's extremely troublesome to perform installations.
-What works for me: - -Duplicate the sqlite3 library in a new location. -Put the latest sqlite3.dll (the version you want, from the SQLite website) and the old _sqlite3.pyd into the new location, i.e. the new sqlite3 library. The old _sqlite3.pyd can be found in the default Python library's lib/DLLs folder. -Go to the new sqlite3 library and amend dbapi2.py as follows: Change ""from _sqlite3 import *"" to ""from sqlite3._sqlite3 import *"" -Make sure Python loads this new sqlite3 library first. Add the path to this library if you must.",0.0,False,1,6725 -2020-05-04 13:11:48.527,How do I check if the elements in a list are contained in another list?,"I'm new to Python and I'm having a problem. I have 2 lists containing the names of the columns of a dataset: one has all the column names (columnas = total.columns.values.tolist()); and the other one has a subset of them (in the form of ""c = [a,b,c,d,c,e,...]""). -I would like to know how I could check if each element in ""c"" is contained in the longer list ""columnas"". The result I have been trying to get is as follows (this is just an example): -a: True -b: True -c: False -... -Looking forward to your answers, Santiago","a = [] -for i in c: - if i in columnas: - a.append(True) - else: - a.append(False)",0.2012947653214861,False,1,6726 -2020-05-04 15:25:36.053,Why does deleting a variable assigned to another variable not influence the new variable?,"If I do li = [1,2,3], and then do a = li, a is assigned to li, right? However, when I do del li and then print a, it still shows [1,2,3]. When I do li.append(4) and print a then, why does it show [1,2,3,4]? -I understand that a didn't make a copy of li (as the .copy() method is used for that), but why would a still show the value li used to have?","del does not delete the variable.
del only deletes the name, and the garbage collector will (on its own time) search for variables that aren't referenced by anything, and properly deallocate their memory. -In this case, you're assigning the name a to reference the same variable that the name li is referencing. When you use .append(), it modifies the variable, and all names referencing the variable will be able to see the change. And when you do del li to remove the name li, it doesn't remove the name a, which is still referencing the variable. Thus, the variable doesn't get deallocated and removed.",0.3869120172231254,False,1,6727 -2020-05-05 01:52:32.367,Splitting a Large S3 File into Lines per File (not bytes per file),"I have an 8 GB file with text lines (each line has a carriage return) in S3. This file is custom formatted and does NOT follow any common format like CSV, pipe, JSON ... -I need to split that file into smaller files based on the number of lines, such that each file will contain 100,000 lines or less -(assuming the last file can have the remainder of the lines and thus may have fewer than 100,000 lines). - -I need a method that is not based on the file size (i.e. bytes), but on the number of lines. Files can't have a single line split across two of them. -I need to use Python. -I need to use a server-less AWS service like Lambda, Glue ... I can't spin up instances like EC2 or EMR. - -So far I have found a lot of posts showing how to split by byte size but not by number of lines. -Also, I do not want to read that file line by line, as that would be just too slow and not efficient. -Could someone show me starter code or a method that could accomplish splitting this 6 GB file that would -run fast and not require more than 10 GB of available memory (RAM), at any point? -I am looking for all possible options, as long as the basic requirements above are met... -BIG thank you! -Michael","The boto3.S3.Client.get_object() method provides an object of type StreamingBody as a response.
-The StreamingBody.iter_lines() method documentation states: - -Return an iterator to yield lines from the raw stream. -This is achieved by reading chunk of bytes (of size chunk_size) at a time from the raw stream, and then yielding lines from there. - -This might suit your use case. The general idea is to stream that huge file and process its contents as they come. I cannot think of a way to do this without reading the file in some way.",0.3869120172231254,False,1,6728 -2020-05-05 09:27:57.100,Is there any way to use the python IDLE Shell in visual studio code?,"I was programming Python in Visual Studio Code and every time I ran something it would use the integrated terminal (logically, because I have not changed any settings), and I was wondering how I could get it to use the Python IDLE shell instead of the integrated terminal (which for me is useless)? -I also have Python IDLE installed on my Mac, but due to Visual Studio Code having ""intellisense"", it is way easier.","In VS Code you should be able to select the file which is supposed to be used in the terminal. -Under: -Preferences -> Settings -> Terminal",0.0,False,1,6729 -2020-05-05 12:56:51.043,How to work with virtual environment and make it default,"I have created a virtual environment named knowhere and I activate it in cmd using .\knowhere\Scripts\activate. I have installed some libraries into this environment. -I have some Python scripts stored on my PC. When I try to run them they are not working, since they are not running in this virtual environment. Now how do I make these scripts run? -Also, is there any way to make ""knowhere"" my default environment?","Virtual environments are only necessary when you want to work on two projects that use different versions of the same external dependency, e.g. Django 1.9 and Django 1.10 and so on. In such situations a virtual environment can be really useful to maintain the dependencies of both projects.
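That said, if you do keep the knowhere environment from the question, the scripts can be run with its interpreter explicitly. A sketch for the Windows command prompt (the script path here is made up for illustration):

```shell
:: Run a script with the environment's interpreter without activating it:
.\knowhere\Scripts\python.exe C:\path\to\your_script.py

:: Or activate first, then run as usual:
.\knowhere\Scripts\activate
python C:\path\to\your_script.py
```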
-If you simply want your scripts to use Python libraries, just install them on your system and you won't have that problem.",1.2,True,1,6730 -2020-05-05 14:28:18.167,Pandas/Dask - Very long time to write to file,"I have a few files. The big one is ~87 million rows. I have others that are ~500K rows. Part of what I am doing is joining them, and when I try to do it with Pandas, I get memory issues. So I have been using Dask. It is super fast to do all the joins/applies, but then it takes 5 hours to write out to a csv, even if I know the resulting dataframe is only 26 rows. -I've read that some joins/applies are not the best for Dask, but does that mean it is slower using Dask? Because mine have been very quick. It takes seconds to do all of my computations/manipulations on the millions of rows. But it takes forever to write out. Any ideas how to speed this up/why this is happening?",You can use Dask parallel processing or try writing to a Parquet file instead of CSV as Parquet operations are very fast with Dask.,0.0,False,1,6731 -2020-05-06 05:20:34.160,Combining C with Python,"I'd like to mix C code with Python GUI libraries. I thought about creating a C library and using it with ctypes. How do I create the library for both Linux and Windows at the same time? On Linux, I simply use gcc -fPIC -shared -o lib.so main.c, but how do I do that for Windows?","Many C/C++ IDEs already provide a DLL project template, such as Visual Studio, Code::Blocks, VC++ 6.0, etc. Using DLL files is similar to using SO files.",-0.3869120172231254,False,1,6732 -2020-05-06 07:01:59.710,How to use multimetric python library for calculating code parameters?,"I am working on a project which deals with the calculation of Halstead metrics and McCabe cyclomatic complexity for code in various languages. I found the library multimetric, but its documentation is not intuitive. Please explain to me how to use this library for finding code metrics.
-If you know any other library which does this work, then please suggest it.","Install multimetric following the instructions from PyPI. Go to the code's (example.py) location, e.g. cd /user/Desktop/code -Then type this in the terminal: -multimetric example.py -Then you can see the result.",0.3869120172231254,False,1,6733 -2020-05-06 07:01:59.710,Passing authorization code to Python Notebook,"As I am trying to connect to Google's BigQuery environment from a Python notebook using the google.cloud library, the response from the server is to visit a link that generates a code and to ""Enter the authorization code:"". However, as this response is just text, I do not know how to pass the code back to the server response. I am running this notebook in a Databricks environment. -Does anyone know how I can push this code back to the server and complete the authorization?",It doesn't look like you are using the correct . The flow which you mentioned will work on a UI but not with any automation. I suggest you share more details here and also check for more documentation on the same.,0.0,False,1,6734 -2020-05-06 13:48:33.700,Python pptx - pass html formatted text to paragraph in powerpoint,"I have a string with HTML tags and I would like to pass the formatting to PowerPoint. -The only idea I have now is to split it using some XML library and add a bunch of ifs adding formatting to each run depending on the tag. -Did you encounter a similar problem or have a better idea of how to approach it?","I don't think there is a method of doing this. For a start, some HTML elements and attributes aren't likely to translate. -I've done a very limited amount of this - and actually it was mostly Markdown I was translating. (The HTML relevance is that I did work with entity references and also
.) -I'm sorry to say my code is useless to you. -My advice would be to support a small subset of HTML. Perhaps with some limited styling and things like
.",0.3869120172231254,False,1,6735 -2020-05-07 13:35:13.030,how do i divide a number by two multiple times,"Can you help me help my son with python homework? -His homework this week is on iteration. We've worked through most of it, but we can't make much headway with the following: -""• Write a program that will ask a user to enter a number between 1 and 100. The program should keep dividing the number by 2 until it reaches a number less than 2. The program should tell the user how many times it had to divide by 2. "" -Can you help us with this, and preferably include some # lines in the code so we can better understand what's happening?","Great that you're helping your son with his homework! Very exciting! -If I summarize the question, it is: - -take a number n -divide it by 2 -repeat step 2 until your number is less than 2 -output how often it had to be divided - -Let's do this by hand: - -I take a number, 15: -I divide once, I get 7.5 -it's not less than 2, so I continue -I divide by 2 again (2 times total), I get 3.75 -it's not less than 2, so I continue -I divide by 2 again (3 times total), I get 1.875 -it's less than 2, so I stop - -I had to divide by three times total. -If you were to take these steps and write it in code, how would you do this? (Hint: use a while loop!)",0.9866142981514304,False,1,6736 -2020-05-08 16:25:25.377,Why is my python discord bot suddenly duplicating responses to commands,"Some people were using my bot on a server I am a part of, and for some reason, the bot suddenly started duplicating responses to commands. Basically, instead of doing an action once, it would do it twice. I tried restarting it multiple times which didn't work, and I know it isn't a problem with my code because it was working perfectly well a few seconds ago. It probably wasn't lag either, because only a couple of people were using it. Any ideas on why this may be and how to fix it? 
I am also hosting it on repl.it, just so you know what IDE I'm using.","It's probably because you run the script and the host at the same time, so it sends the command through both the host and the code. If you don't run the code but just the host and it still duplicates, it might be an error with the host, or it is running somewhere else in the background.",0.0,False,1,6737 -2020-05-09 10:23:55.847,Prevent the user with session from entering the URL in browser and access the data in Python Django Application,"I have a Python Django web application. -In a GET method, how do I prevent the user from entering the URL and accessing the data? -How do I know whether the URL was accessed by code or by a browser? -I tried with the sessionid in the cookie, but if the session exists it allows access to the data. -Thanks.","I achieved it with -if 'HTTP_REFERER' not in request.META: -It does not exist when the URL is hit directly from the browser.",0.0,False,2,6738 -2020-05-09 10:23:55.847,Prevent the user with session from entering the URL in browser and access the data in Python Django Application,"I have a Python Django web application. -In a GET method, how do I prevent the user from entering the URL and accessing the data? -How do I know whether the URL was accessed by code or by a browser? -I tried with the sessionid in the cookie, but if the session exists it allows access to the data. -Thanks.","To detect if the request is from a browser, you can check the HTTP_USER_AGENT header: -request.META.get(""HTTP_USER_AGENT"")",0.0,False,2,6738 -2020-05-09 19:15:59.793,How to add a Spacy model to a requirements.txt file?,"I have an app that uses the Spacy model ""en_core_web_sm"". I have tested the app on my local machine and it works fine. -However, when I deploy it to Heroku, it gives me this error: -""Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory."" -My requirements file contains spacy==2.2.4.
-I have been doing some research on this error and found that the model needs to be downloaded separately using this command: -python -m spacy download en_core_web_sm -I have been looking for ways to add the same to my requirements.txt file but haven't been able to find one that works! -I tried this as well - added the below to the requirements file: --e git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz#egg=en_core_web_sm==2.2.0 -but it gave this error: -""Cloning git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz to /app/.heroku/src/en-core-web-sm -Running command git clone -q git://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz /app/.heroku/src/en-core-web-sm -fatal: remote error: - explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz is not a valid repository name"" -Is there a way to get this Spacy model to load from the requirements file? Or any other fix that is possible? -Thank you.","Ok, so after some more Googling and hunting for a solution, I found this solution that worked: -I downloaded the tarball from the url that @tausif shared in his answer, to my local system. -Saved it in the directory which had my requirements.txt file. -Then I added this line to my requirements.txt file: ./en_core_web_sm-2.2.5.tar.gz -Proceeded with deploying to Heroku - it succeeded and the app works perfectly now.",1.2,True,1,6739 -2020-05-10 05:44:15.673,How to be undetectable with chrome webdriver?,"I've already seen multiple posts on Stackoverflow regarding this. However, some of the answers are outdated (such as using PhantomJS) and others didn't work for me. -I'm using selenium to scrape a few sports websites for their data. However, every time I try to scrape these sites, a few of them block me because they know I'm using chromedriver. 
I'm not sending very many requests at all, and I'm also using a VPN. I know the issue is with chromedriver because any time I stop running my code but try opening these sites in chromedriver, I'm still blocked. However, when I open them in my default web browser, I can access them perfectly fine. -So, I wanted to know if anyone has any suggestions on how to avoid getting blocked by these sites when scraping them with Selenium. I've already tried changing the '$cdc...' variable within the chromedriver, but that didn't work. I would greatly appreciate any ideas, thanks!","Obviously they can tell you're not using a common browser. Could it have something to do with the User Agent? -Try it out with something like Postman. See what the responses are. Try messing with the user agent and other request fields. Look at the request headers when you access the site with a regular browser (like Chrome) and try to spoof those. -Edit: just remembered this and realized the page might be performing some checks in JS and whatnot. It's worth looking into what happens when you block JS on the site with a regular browser.",0.3869120172231254,False,1,6740 -2020-05-10 12:01:13.383,"Allow End User To Add To Model, Form, and Template Django","Is there anything that someone could point me towards (a package, an example, a strategy, etc.) of how I could implement the ability for an end user of my app to create a new field in a model, then add that model field to a model form and template? I'm thinking of the way that Salesforce allows users to add custom fields. -I don't really have a starting point here; I am only looking to learn if/how this might be possible in Django. -Thanks!","I'm also looking for the same type of solution. But with some research, I came to know that we can do this using the ContentTypes framework. -How to do it?
We can utilize ContentType's GenericForeignKeys and GenericRelations.",0.0,False,1,6741 -2020-05-11 16:59:06.173,Python - save BytesIO in database,"So I am trying to create a binary file and save it into my database. I am using Redis and SQLAlchemy as the framework for my database. I can use send_file to send the actual file whenever the user accesses a URL, but how do I make sure that the file is saved in the route and stays there every time a user accesses the URL? - -I am sending the file from a client-python; it's not in my - directory - -What I need, in a nutshell, is to save the file from the client-python to a database and make it ""downloadable"" to the browser-client, so it would actually be available for the browser-client. Is there any way of doing this? Maybe a different way that I didn't think about?","I had to encode the data with base64, send it to the database and then decode it and send the file as binary data.",1.2,True,1,6742 -2020-05-12 11:13:21.930,How to use html file upload method for google site verification for a flask app,"The pretty straightforward way to do this is to upload the Google-provided .html file to the root folder of your app on the server. But how do you do it for a Flask application? -For example, I have a Flask app running on Heroku and I want to do the site verification for my app using the HTML file upload method (though alternative methods are available). I tried uploading the Google-provided .html file to the templates folder. Verification failed! -I have searched the internet but found no relevant answers.","Everything mentioned in the above answer is correct. But just make sure that you don't rename the file provided by Google Search Console. Use the same name as it is.",-0.2012947653214861,False,1,6743 -2020-05-12 21:08:40.313,How do I open the 'launch.json' file in Visual Studio Code?,"I am a new programmer who started learning Python, but there's something bothering me which I'd like to change.
-As I've seen that it is possible to remove the unwanted path from the terminal when executing code, I cannot figure out how to access the Visual Studio Code launch.json file and all of the explanations on Google are quite confusing.","Note that if Visual Studio Code hasn't created a launch.json file for your project yet, do the following: - -Click the Run | Add Configuration menu option, and one will automatically be generated for you, and opened in the editor.",0.3869120172231254,False,1,6744 -2020-05-13 01:59:52.960,How to install Beautiful Soup 4 without *any* assumptions,"I need to install Beautiful Soup 4, but every tutorial or list of instructions seems to assume I know more than I do. I am here after a number of unsuccessful attempts and at this point I am afraid of damaging something internally. -Apparently I need something called pip. I have Python 3.8, so everyone says I should have pip. Great. I have found no less than 14 different ways to check if I actually have pip and am using it. They all say to type something. One of them said to type pip --version. We are already assuming too much. Where do I type it? IDLE? The Cmd prompt? The Python shell? What folder do I need to be in? Etc Etc. I need someone to assume I am a complete beginner. -Then, how do I use it to install bs4? Again, I am supposed to type things, but no one says where. One person said to go to the folder where python is installed in the command line. So, I did, and surprise surprise, pip is not ""valid syntax"". How can I proceed with this?","With a little help from the kind user Tenacious B, I have solved my problem, I think. In the command prompt, I needed to type -cd C:\Users\%userprofilenamegoeshere%\AppData\Local\Programs\Python\Python38-32\scripts -No source that I found in my initial search included that last bit: \scripts. 
From here, the common suggestion of pip install beautifulsoup4 seems to have worked.",0.0,False,1,6745 -2020-05-13 07:06:12.187,Run multiple python version on SQL Server (2017),"Is it possible to run multiple Python versions on SQL Sever 2017? -It is possible to do on Windows (2 Python folders, 2 shortcuts, 2 environment paths). But how to launch another Python version if I run Python via sp_execute_external_script in SQL Management Studio 18? -In SQL server\Launchpad\properties\Binary path there is the parameter -launcher Pythonlauncher. Probably, by changing this, it is possible to run another Python version. -Other guess: to create multiple Python folders C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES. But how to switch them? -Other guess: in C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn\pythonlauncher.config - in PYTHONHOME and ENV_ExaMpiCommDllPath parameters substitute the folder C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\ with the folder with new Python version.","The answer is: - -Copy in - - -C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\ - -folder as many Python versions as you want (Python version = folder with Python like PYTHON_SERVICES) - -Stop Launchpad -Change in - - -C:\Program Files\Microsoft SQL - Server\MSSQL14.MSSQLSERVER\MSSQL\Binn\pythonlauncher.config - -file: in PYTHONHOME and ENV_ExaMpiCommDllPath parameters substitute the folder - -C:\Program Files\Microsoft SQL - Server\MSSQL14.MSSQLSERVER\PYTHON_SERVICES\ - -with the folder with new Python version. - -Start Launchpad",0.3869120172231254,False,1,6746 -2020-05-13 14:06:05.600,"What do I do when the terminal says ""Check the logs for full command output""?","When I type pip install pygame or pip3 install pygame on terminal, it says ""check the logs for full command output"". I already upgraded pip and it still says that. 
Can you tell me what this means and how to fix it?","You can try running the pip command by adding the --verbose flag. This will print out the logs in the Terminal, which you then can inspect. These logs often help you indicate the cause of the error. -For example: -pip install --verbose pygame -or -pip3 install --verbose pygame",0.3869120172231254,False,2,6747 -2020-05-13 14:06:05.600,"What do I do when the terminal says ""Check the logs for full command output""?","When I type pip install pygame or pip3 install pygame on terminal, it says ""check the logs for full command output"". I already upgraded pip and it still says that. Can you tell me what this means and how to fix it?","When you get this message, there will be a few lines above it which tell you where the log file is and some more details about what went wrong.",0.0,False,2,6747 -2020-05-13 16:21:22.403,Use pipenv whenever possible in vscode,"How could I force vscode to always find and prefer pipenv's virtual environment for python instead of the python's global settings? -When I create a pipenv environment on my workspace, it keeps using the global python version at /usr/bin/python (as defined in settings as ""python.pythonPath"": ""/usr/bin/python"") but I wonder how could I switch to something like ~/.local/share/virtualenvs/Selenium-10eAXqZ4/bin/python automatically when there is Pipenv environment detected. -Is this even possible? If this is how can I configure it? -(I'm not talking about simply overriding the python.pythonPath with local .vscode/settings.json I need this to detect the path from pipenv automatically when it exists for the current project)",Add PIPENV_VENV_IN_PROJECT=1 to your environment and the .venv folder will be added to your project root. VSCode has zero problems picking up Python from there. 
(I find it also very convenient to have everything in one place and not spread around on the entire disk.),1.2,True,1,6748 -2020-05-13 17:15:13.727,How to speed up large data frame joins in Spark,"I have 2 dataframes in Spark 2.4, they are close to the same size. Each has about 40 million records. One is generated simply by loading the dataframe from S3, the other loads a bunch of dataframes and uses sparkSQL to generate a big dataframe. Then I join these 2 dataframes together multiple times into multiple dataframes and try to write them as CSV to S3... However I am seeing my write times upwards of 30 minutes, I am not sure if it is re-evaluating the dataframe or if perhaps I need more CPUs for this task. Nonetheless, I was hoping someone may have some advice on how to optimize these write times.","So when a dataframe is created from other dataframes it seems an execution plan is what is first created. Then when executing a write operation that plan gets evaluated. -The best way to take care of this particular situation is to take advantage of the spark lazy-loading caching (I have not seen an eager-loading solution for spark but if that exists it could be even better). -By doing: -dataframe1.cache() -And -dataframe2.cache() -when you join these 2 dataframes the first time both dataframes are evaluated and loaded into cache. Then when joining and writing again the 2 dataframe execution plans are already evaluated and the join and write becomes much faster. -This means the first write still takes over 30 minutes but the other 2 writes are much quicker. -Additionally, you can increase performance with additional CPUs and proper paritioning and coalesce of the dataframes. That could help with the evaluation of the first join and write operation. -Hope this helps.",1.2,True,1,6749 -2020-05-14 14:44:06.843,Flask Python - multiple URL parameter with brackets,"hope you are all doing well. -Im working on api project using python and flask. 
-The question I have to ask is, how can I get the values of multiple query string parameter? -The api client is built in PHP, and when a form is submitted, if some of the parameters are multiple the query string is built like filter[]=1&filter[]=2&filter[]=3... and so on. -When I dump flask request, it shows something like (filter[], 1), (filter[], 2), (filter[], 3), it seems ok, but then when I do request.args.get('filter[]') it returns only the first item in the args ImmutableDict, filter[]=1, and I can't access the other values provided. -Any help regarding this issue would be aprreciated. -Happy programming!",try this request.args.to_dict(flat=False) to convert,0.0,False,1,6750 -2020-05-14 16:41:02.153,Accessing Pyramid Settings throughout the program,"I have a pyramid API which has basically three layers. - -View -> validates the request and response -Controller -> Does business logic and retrieves things from the DB. -Services -> Makes calls to external third party services. - -The services are a class for each external API which will have things like authentication data. This should be a class attribute as it does not change per instance. However, I cannot work out how to make it a class attribute. -Instead I extract the settings in the view request.registry.settings pass it to the controller which then passes it down in the init() for the service. This seems unnecessary. -Obviously I could hard code them in code but that's an awful idea. -Is there a better way?","Pyramid itself does not use global variables, which is what you are asking for when you ask for settings to be available in class-level or module-level attributes. For instance-level stuff, you can just pass the settings from Pyramid into the instance either from the view or from the config. -To get around this, you can always pass data into your models at config-time for your Pyramid app. 
For example, in your main just pull settings = config.get_settings() and pass some of them to where they need to be. As a general rule, you want to try to pass things around at config-time once, instead of from the view layer all the time. -Finally, a good way to do that without using class-level or module-level attributes is to register instances of your services with your app. pyramid_services library provides one approach to this, but the idea is basically to instantiate an instance of a service for your app, add it to your pyramid registry config.registry.foo = ... and when you do that you can pass in the settings. Later in your view code you can grab the service from there using request.registry.foo and it's already setup for you!",0.6730655149877884,False,1,6751 -2020-05-15 11:10:49.963,Django: safely deleting an unused table from database,"In my django application, I used to authenticate users exploiting base django rest framework authentication token. Now I've switched to Json Web Token, but browsing my psql database, I've noticed the table authtoken_token, which was used to store the DRF authentication token, is still there. I'm wondering how to get rid of it. I've thought about 2 options: - -deleting it through migration: I think this is the correct and safer way to proceed, but in my migrations directory inside my project folder, I didn't find anything related to the tokens. Only stuff related to my models; -deleting it directly from the database could be another option, but I'm afraid of messing with django migrations (although it shoudn't have links with other tables anymore) - -I must clarify I've already removed rest_framework.authtoken from my INSTALLED_APPS","You can choose the first option. 
There are 3 steps you should do to completely uninstall authtoken from your Django app - -Remove rest_framework.authtoken from INSTALLED_APPS; this action will tell your Django app not to take any migrations file from that module -Remove the authtoken_token table, if you will -Find the record with the authtoken app name in the table django_migrations, you can remove it. - -Note: Several errors may occur in your code, because the authtoken module is removed from your INSTALLED_APPS. My advice: back up your existing database first before you do the above step",0.2012947653214861,False,1,6752 -2020-05-16 15:11:39.083,Can't find 'Scripts' folder or 'pip' file in Python 3.8.2 folder for Windows 10,Can't find 'Scripts' folder or 'pip' file in Python 3.8.2 folder for Windows 10. Trying to install pip for python. Any ideas how I can get past this problem?,"Pip should have already been installed when you installed your Python. If you want to check whether your pip is installed, try typing pip in your command prompt or terminal, and if you want to see the file directory of a package you have installed, say pip show (here you put the name of a package like pygame)",0.0,False,2,6753 -2020-05-16 15:11:39.083,Can't find 'Scripts' folder or 'pip' file in Python 3.8.2 folder for Windows 10,Can't find 'Scripts' folder or 'pip' file in Python 3.8.2 folder for Windows 10. Trying to install pip for python. Any ideas how I can get past this problem?,"As others answered already, you should have got pip already when you installed Python. Well, pip isn't the application. You use pip in the application: Command Prompt. You have to search for Command Prompt on your computer (if you have Python you already have it installed) and then in Command Prompt you install packages that you still don't have. -For example, if you want pygame you write: pip install pygame. If you have PyCharm then there is something else you need to do.
If you have PyCharm tell me as a comment to this answer and I'll tell you what to do then because command prompt would almost be useless",0.0,False,2,6753 -2020-05-16 17:56:13.050,Python not operating with super long list,"Hi i have a list with 15205 variables inside, im trying to find the relative frequency of each variable but python don't react with such a big size. -if i try len(list) it works, but max(list) gives me '>' not supported between instances of 'list' and 'int', and set(list) gives me 'type' object is not utterable. If i try to work with it as a data frame it gives me TypeError: unhashable type: 'list' -Plus, if i use a small sample of the list everything works fine. -Can anyone explain me why does this happen and how can i work it out? -thanks","Firstly, you shouldn't name your list 'list', since this is a reserved word in Python referring to the type. This is the origin of your 'set(list)' error. -As for the other error, at least one of the items in your list appears to be itself a list, and you can't compare the magnitude of a list and an integer.",0.6730655149877884,False,1,6754 -2020-05-16 19:30:33.470,How do i run a python project nonstop ( even when i close the computer ),I have a discord.py bot using the datetime and random libraries ( and discord.py of course ). My question is how can i run it even when my computer is off. I think the answer is a rented server but i think there are cheeper options,"You'll either have to run it on a machine you don't turn off. Or deploy it to a server. You can get cheap servers through Linode, Digital Ocean and others.",0.0,False,1,6755 -2020-05-16 20:10:07.260,Pandas :Record count inserted by Python TO_SQL funtion,"I am using Python to_sql function to insert data in a database table from Pandas dataframe. -I am able to insert data in database table but I want to know in my code how many records are inserted . 
-How to know record count of inserts ( i do not want to write one more query to access database table to get record count)? -Also, is there a way to see logs for this function execution. like what were the queries executed etc.","There is no way to do this, since python cannot know how many of the records being inserted were already in the table.",0.0,False,1,6756 -2020-05-18 08:41:40.830,Understanding the sync method from the python shelve library,"The python documentation says this about the sync method: - -Write back all entries in the cache if the shelf was opened with - writeback set to True. Also empty the cache and synchronize the - persistent dictionary on disk, if feasible. This is called - automatically when the shelf is closed with close(). - -I am really having a hard time understanding this. -How does accessing data from cache differ from accessing data from disk? -And does emptying the cache affect how we can access the data stored -in a shelve?","For whoever is using the data in the Shelve object, it is transparent whether the data is cached or is on disk. If it is not on the cache, the file is read, the cache filled, and the value returned. Otherwise, the value as it is on the cache is used. -If the cache is emptied on calling sync, that means only that on the next value fetched from the same Shelve instance, the file will be read again. Since it is all automatic, there is no difference. The documentation is mostly describing how it is implemented. -If you are trying to open the same ""shelve"" file with two concurrent apps, or even two instances of shelve on the same program, chances are you are bound to big problems. Other than that, it just behaves as a ""persistent dictionary"" and that is it. -This pattern of writing to disk and re-reading from a single file makes no difference for a workload of a single user in an interactive program. 
For a Python program running as a server with tens to thousands of clients, or even a single big-data processing script, where this could impact actual performance, Shelve is hardly a usable thing anyway.",0.0,False,1,6757 -2020-05-18 09:36:32.503,How two Django applications use same database for authentication,"previously we implemented one django application call it as ""x"" and it have own database and it have django default authentication system, now we need to create another related django application call it as ""y"", but y application did n't have database settings for y application authentication we should use x applications database and existing users in x application, so is it possible to implement like this?, if possible give the way how can we use same database for two separated django applications for authentication system. -Sorry for my english -Thanks for spending time for my query","So, to achieve this. In your second application, add User model in the models.py and remember to keep managed=False in the User model's Meta class. -Inside your settings.py have the same DATABASES configuration as of your first application. -By doing this, you can achieve the User model related functionality with ease in your new application.",0.0,False,1,6758 -2020-05-18 12:02:43.860,The real difference between MEDIA_ROOT (media files) and STATIC_ROOT (static files) in python django and how to use them correctly,"The real difference between MEDIA_ROOT and STATIC_ROOT in python django and how to use them correctly? -I just was looking for the answer and i'm still confused about it, in the end of the day i got two different answers: -- First is that the MEDIA_ROOT is for storing images and mp3 files maybe and the STATIC_ROOT for the css, js... and so on. --Second answer is that they were only using MEDIA_ROOT in the past for static files, and it caused some errors so eventually we are only using STATIC_ROOT. 
-is one of them right if not be direct and simple please so everybody can understand and by how to use them correctly i mean what kind of files to put in them exactly","Understanding the real difference between MEDIA_ROOT and STATIC_ROOT can be confusing sometimes as both of them are related to serving files. -To be clear about their differences, I could point out their uses and types of files they serve. - -STATIC_ROOT, STATIC_URL and STATICFILES_DIRS are all used to serve the static files required for the website or application. Whereas, MEDIA_URL and MEDIA_ROOT are used to serve the media files uploaded by a user. - -As you can see that the main difference lies between media and static files. So, let's differentiate them. - -Static files are files like CSS, JS, JQuery, scss, and other images(PNG, JPG, SVG, etc. )etc. which are used in development, creation and rendering of your website or application. Whereas, media files are those files that are uploaded by the user while using the website. - -So, if there is a JavaScript file named main.js which is used to give some functionalities like show popup on button click then it is a STATIC file. Similarly, images like website logo, or some static images displayed in the website that the user can't change by any action are also STATIC files. -Hence, files(as mentioned above) that are used during the development and rendering of the website are known as STATIC files and are served by STATIC_ROOT, STATIC_URL or STATICFILES_DIRS(during deployment) in Django. -Now for the MEDIA files: any file that the user uploads, for example; a video, or image or excel file, etc. during the normal usage of the website or application are called MEDIA files in Django. -MEDIA_ROOT and MEDIA_URL are used to point out the location of MEDIA files stored in your application. 
-Hope this makes you clear.",1.2,True,1,6759 -2020-05-18 22:30:37.343,Python not starting: IDLE's subprocess didn't make connection,"When I try to open Python it gives me an error saying: -IDLE's subprocess didn't make connection. See the 'startup failure' section of the IDLE doc online -I am not sure how to get it to start. I am on the most recent version of windows, and on the most recent version of python.","I figured it out, thanks. All I needed to do was uninstall random.py.",0.0,False,2,6760 -2020-05-18 22:30:37.343,Python not starting: IDLE's subprocess didn't make connection,"When I try to open Python it gives me an error saying: -IDLE's subprocess didn't make connection. See the 'startup failure' section of the IDLE doc online -I am not sure how to get it to start. I am on the most recent version of windows, and on the most recent version of python.",Open cmd and type python to see if python was installed. If so fix you IDE. If not download and reinstall python.,0.0,False,2,6760 -2020-05-19 04:10:54.070,Python backend -Securing REST APIs With Client Certificates,"We have a small website with API connected using AJAX. -We do not ask for usernames and passwords or any authentication like firebase auth. -So it's like open service and we want to avoid the service to be misused. -OAuth 2 is really effective when we ask for credentials to the user. -Can you suggest the security best practice and how it can be implemented in this context using python? -Thanks","Use a firewall -Allow for third-party identity providers if possible - Separate the concept of user identity and user account",0.3869120172231254,False,1,6761 -2020-05-19 13:54:18.343,How to add pylint for Django in vscode manually?,"I have created a Django project in vscode. Generally, vscode automatically prompts me to install pylint but this time it did not (or i missed it). Even though everything is running smoothly, I am still shown import errors. How do I manually install pytlint for this project? 
-Also,in vscode i never really create a 'workspace'. I just create and open folders and that works just fine. -ps. Im using pipenv. dont know how much necessary that info was.","Hi, you must activate your venv first, then install pylint (pip install pylint). -In VS Code: Ctrl+Shift+P, then type linter (choose ""Python: Select Linter""); now you can choose your linter (pylint). -I hope it helps you",0.3869120172231254,False,1,6762 -2020-05-19 20:35:03.707,Can I execute 1 python script by 3 different caller process at same time with respective arguments,"I have situation in centos where 3 different/Independent caller will try to execute same python script with respective command line args. eg: python main.py arg1, python main.py arg2, python main.py arg3 at same time. -My question is - Is it possible in the first place or I need to copy that python script, 3 times with 3 different names to be called by each process. -If it is possible then how it should be done so that these 3 processes will not interfare and python script execution will be independent from each other.","All the python processes will run entirely isolated from each other, even if executing the same source file. -If they interact with any external resource other than process memory (such as files on disk), then you may need to take measures to make sure the processes don't interfere (by making sure each instance uses a different filename, for example).",0.3869120172231254,False,1,6763 -2020-05-19 20:36:14.493,How to interpose RabbitMQ between REST client and (Python) REST server?,"If I develop a REST service hosted in Apache and a Python plugin which services GET, PUT, DELETE, PATCH; and this service is consumed by an Angular client (or other REST interacting browser technology). Then how do I make it scale-able with RabbitMQ (AMQP)? -Potential Solution #1 - -Multiple Apache's still faces off against the browser's HTTP calls.
-Each Apache instance uses an AMQP plugin and then posts message to a queue -Python microservices monitor a queue and pull a message, service it and return response -Response passed back to Apache plugin, in turn Apache generates the HTTP response - -Does this mean the Python microservice no longer has any HTTP server code at all. This will change that component a lot. Perhaps best to decide upfront if you want to use this pattern as it seems it would be a task to rip out any HTTP server code. -Other potential solutions? I am genuinely puzzled as to how we're supposed to take a classic REST server component and upgrade it to be scale-able with RabbitMQ/AMQP with minimal disruption.","I would recommend switching WSGI to ASGI (nginx can help here). I'm not sure why you think RabbitMQ is the solution to your problem, as nothing you described seems like it would be solved by using this method. -ASGI is not supported by Apache as far as I know, but it allows the server to go do work, and while it's working it can continue to service new requests that come in. (gross oversimplification) -If for whatever reason you really want to use job workers (rabbitmq, etc.) then I would suggest returning to the user a ""token"" (really just the job_id); they can then call with that token, and it will report back either the current job status or the result",1.2,True,1,6764 -2020-05-20 07:41:11.573,Create package with dependencies,"Do you know how to create package from my python application to be installable on Windows without internet connection? I want, for example, to create tar.gz file with my python script and all dependencies. Then install such package on windows machine with python3.7 already installed. I tried setuptools but i don't see possibility to include dependencies. Can you help me?",There are several Java tutorials on how to make installers that are offline.
You have your Python project and just use a preprogrammed Java installer to put all of the 'goodies' inside of it. Then you have an installer for Windows. And it's an executable.,-0.3869120172231254,False,1,6765 -2020-05-20 08:14:01.817,Debug function not appearing in the menu bar in VS Code. I am using it for Python,"I am new at learning Python and i am trying to trying to set up the environment on VS code. However, the Debug icon and function is not on the menu bar. Please how do I rectify this?",Right-click on the menu bar. You can select which menus are active. It's also called Run I believe.,0.0,False,1,6766 -2020-05-20 10:08:58.617,How can i solve AttributeError: module 'dis' has no attribute 'COMPILER_FLAG_NAMES' in anaconda3/envs/untitled/lib/python3.7/inspect.py,"i am trying implement from scipy.spatial import distance as dist library however it gives me File ""/home/afeyzadogan/anaconda3/envs/untitled/lib/python3.7/inspect.py"", line 56, in - for k, v in dis.COMPILER_FLAG_NAMES.items(): -AttributeError: module 'dis' has no attribute 'COMPILER_FLAG_NAMES' -error how can i solve it? -''' -for k, v in dis.COMPILER_FLAG_NAMES.items(): - mod_dict[""CO_"" + v] = k -'''","We ran across this issue in our code with the same exact AttributeError. -Turns out it was a totally unrelated file in the current directory called dis.py.",0.3869120172231254,False,1,6767 -2020-05-20 13:37:42.400,save a figure with a precise pixels size with savefig,"How can I save a plot in a 750x750 px using savefig? -The only useful parameter is DPI, but I don't understand how can I use it for setting a precise size","I added plt.tight_layout() before savefig(), and it solved the trimming issue I had. Maybe it will help yours as well.
-I also set the figure size at the begining rcParams['figure.figsize'] = 40, 12(you can set your own width and height)",0.0,False,1,6768 -2020-05-20 19:33:34.343,Call function when new result has been returned from API,"There is an API that I am using from another company that returns the ID-s of the last 100 purchases that have been made in their website. -I have a function change_status(purchase_id) that I would like to call whenever a new purchase has been made. I know a workaround on how to do it, do a while True loop, keep an index last_modified_id for the last modified status of a purchase and loop all purchases from the latest to the earliest and stop once the current id is the same as last_modified_id and then put a sleeper for 10 seconds after each iteration. -Is there a better way on how to do it using events in python? Like calling the function change_status(purchase_id) when the result of that API has been changed. I have been searching around for a few days but could not find about about an event and an API. Any suggestion or idea helps. Posting what I have done is usually good in stackoverflow, but I don't have anything about events. The loop solution is totally different from the events solution. -Thank you","The only way to do this is to keep calling the API and watching for changes from the previous response, unless... -The API provider might have an option to call your API when something is updated on their side. It is a similar mechanism to push notifications. If they provide a method to do that, you can create an endpoint on your side to do whatever you need to do when a new purchase is made, and provide them the endpoint. However, as far as I know, most API providers do not do this, and the first method is your only option. -Hope this helps!",1.2,True,1,6769 -2020-05-20 19:55:21.393,Tips to practice matplotlib,"I've been studying python for data science for about 5 months now. But I get really stucked when it comes to matplotlib. 
There's always so many options to do anything, and I can't see a well defined path to do anything. Does anyone have this problem too and knows how to deal with it?","I think your question is stating that you are bored and do not have any projects to make. If that is correct, there are many datasets available on sites like Kaggle that have open-source datasets for practice programmers.",0.0,False,2,6770 -2020-05-20 19:55:21.393,Tips to practice matplotlib,"I've been studying python for data science for about 5 months now. But I get really stucked when it comes to matplotlib. There's always so many options to do anything, and I can't see a well defined path to do anything. Does anyone have this problem too and knows how to deal with it?","in programming in general "" There's always so many options to do anything"". -i recommend to you that read library and understand their functions and classes in a glance, then go and solve some problems from websites or give a real project if you can. if your code works do not worry and go ahead. -after these try and error you have a lot of real idea about various problems and you recognize difference between these options and pros and cons of them. like me three years ago.",0.0,False,2,6770 -2020-05-21 08:14:07.720,OnetoOne (primary_key=Tue) to ForeignKey in Django,"I have a OnetoOne field with primary_key=True in a model. -Now I want to change that to a ForeignKey but cannot since there is no 'id'. -From this: - -user = models.OneToOneField(User, primary_key=True, on_delete=models.CASCADE) - -To this: - -user1 = models.ForeignKey(User, related_name='questionnaire', on_delete=models.CASCADE) - -Showing this while makemigrations: - -You are trying to add a non-nullable field 'id' to historicaluserquestionnaire without a default; we can't do that (the database needs something to populate existing rows). 
- Please select a fix: - 1) Provide a one-off default now (will be set on all existing rows with a null value for this column) - 2) Quit, and let me add a default in models.py - -So how do I do that? -Thanks!","The problem is that you're trying to remove the primary key, but Django is then going to add a new primary key called ""id"". This is non-nullable and unique, so you can't really provide a one-off default. -The easiest solution is to just create a new model and copy your table over in a SQL migration, using the old user_id to populate the id field. Be sure to reset your table sequence to avoid collisions.",0.1352210990936997,False,1,6771 -2020-05-23 16:28:37.970,Deploy python flask project into a website,"So I recently finished my python project, grabbing values from an API and putting them into my website. -Now I have no clue how I actually start the website (finding a host) and make it accessible to other people, so I thought turning to here might find the solution. -I have done a good amount of research, tried ""pythonanywhere"" and ""google app engine"" but seem to not really find a solution. -I was hoping to be able to use ""hostinger"" as a host, as they have a good price and a good host. I contacted them but they said that they couldn't, though I could upload it to a VPS (which they have). Would it work for me to upload my files to this VPS and therefore get it to a website? Or should I use another host?","A VPS would work, but you'll need to understand basic linux server admin to get things set up properly. -It sounds like you don't have any experience with server admin, so something like App Engine would be great for you. There are a ton of tutorials on the internet for deploying flask to GAE.",0.0,False,1,6772 -2020-05-24 19:16:38.693,"How can i change dtype from object to float64 in a column, using python?","I extracted some data from investing but the columns' values are all dtype = object, so I can't work with them... -how should I convert object to float? 
-(2558 6.678,08 2557 6.897,23 2556 7.095,95 2555 7.151,21 2554 7.093,34 ... 4 4.050,38 3 4.042,63 2 4.181,13 1 4.219,56 0 4.223,33 Name: Alta, Length: 2559, dtype: object) -What I want is : -2558 6678.08 2557 6897.23 2556 7095.95 2555 7151.21 2554 7093.34 ... 4 4050.38 3 4042.63 2 4181.13 1 4219.56 0 4223.33 Name: Alta, Length: 2559, dtype: float -I tried to use a function which would replace , with . -def clean(x): x = x.replace(""."", """").replace("","",""."") -but it doesn't work because the dtype is object -Thanks!","That is because there is a comma in the value. -Because a float cannot contain a comma, you need to first remove the thousands separator ""."", replace the comma with a dot, and then convert the value to float -result[col] = result[col].str.replace(""."", """").str.replace("","", ""."").astype(float)",0.0,False,1,6773 -2020-05-25 14:54:36.873,Secure password store for Python CGI (Windows+IIS+Windows authentification),"I need to develop a python cgi script for a server run on Windows+IIS. The cgi script is run from a web page with Windows authentification. It means the script is run under different users from the Windows active directory. -I need to use logins/passwords in the script and have no idea how to store the passwords securely, because keyring stores data for a certain user only. Is there a way to access password data from keyring for all active OS users? -I also tried to use os.environ variables, but they are stored for one web session only.",The only thing I can think of here is to run your script as a service account (generic AD account that is used just for this service) instead of using windows authentication. Then you can log into the server as that service account and setup the Microsoft Credential Manager credentials that way.,0.3869120172231254,False,1,6774 -2020-05-26 05:06:08.297,How do i add a PATH variable in the user variables of the environment variables?,"I have a path variable in the system variables but how do I add a path variable in the user variables section since I don't have any at the moment. 
-If there isn't a path variable in the user variables, will it affect anything? -How much will the value of the path variable in the user variables differ from the one in the system environment variables if there is only one user present?","To add a new variable in the user variables: - -1. Click the New button below the user variables. - -2. Then a popup window will appear asking you to type the new variable name and its value; click OK after entering the name and value. -That's how you can add a new variable in the user variables. -You should have a path variable in the user variables also because, for example, while installing python you have a choice to add the python path to the variables; here the path will be added to the user variable 'path'.",0.0,False,1,6775 -2020-05-26 18:44:36.240,Best way to load a Pillow Image object from binary data in Python?,I have a program that modifies PNG files with Python's Pillow library. I was wondering how I could load binary data into a PNG image from PIL's Image object. I receive the PNG over a network as binary data (e.g. the data looks like b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR...'). What is the best way to accomplish this task?,I'd suggest receiving the data into a BytesIO object from the io standard library package. You can then treat that as a file-like object for the purposes of Pillow.,0.3869120172231254,False,1,6776 -2020-05-27 01:06:20.607,Clear all text in separate file,"I want to know how to delete/clear all text in a file from inside another python file. I looked through stack overflow and could not find an answer; all help appreciated. Thanks!","Try: open('yourfile.txt', 'w').close()",0.1352210990936997,False,1,6777 -2020-05-27 08:12:04.827,Loss function and data format for training a ''categorical input' to 'categorical output' model?,"I am trying to train a model for autonomous driving that converts input from the front camera to a bird's eye view image. 
-The input and output are both segmentation masks with shape (96, 144) where each pixel has a range from 0 to 12 (each number represents a different class). -Now my question is how I should preprocess my data and which loss function I should use for the model (I am trying to use a Fully Convolutional Network). -I tried to convert inputs and outputs to shape (96, 144, 13) using keras' to_categorical utility so each channel has 0s and 1s representing a specific mask of a category. I used binary_crossentropy and sigmoid activation for the last layer with this and the model seemed to learn and the loss started reducing. -But I am still unsure if this is the correct way or if there are any better ways. -What should be the: - -input and output data format -activation of last layer -loss function","I found the solution: use categorical crossentropy with softmax activation at the last layer. Use the same data format as specified in the question.",1.2,True,1,6778 -2020-05-27 12:05:54.603,how to compile python kivy app for ios on Windows 10 using buildozer?,"I successfully compiled the app for android, and now I want to compile a python kivy app for ios using buildozer. My operating system is Windows 10, so I don't know how to compile the file for ios. I downloaded the ubuntu console from the microsoft store, which helped me compile the apk file. How do I compile the file for ios? I hope you help me...",You can only deploy to iOS if you're working on a MacOS machine.,0.0,False,1,6779 -2020-05-27 12:06:03.667,How to copy and paste dataframe rows into a web page textarea field,"I have a dataframe with a single column ""Cntr_Number"" with x no of rows. -What I am trying to achieve is using selenium to copy and paste the data into the web page textarea. -The constraint is that the web page text area only accepts 20 rows of data per submission. -So how can I implement it using a while loop or another method. 
- -Copy and paste the first 20 rows of data and click on the ""Submit"" -button -Copy and paste the next 20 rows of data and click on the -""Submit"" button - -Repeat the cycle until the last row. -Sorry, I don't have any sample code to show, but this is what I'm trying to achieve. -I would appreciate some sample code on how to do the implementation.","The better approach would be to capture all the data in a list. While pasting you can check the length of the list, then iterate through the list and paste the data 20 rows at a time into the text area. I hope this will solve your problem.",0.3869120172231254,False,1,6780 -2020-05-27 12:11:19.710,"Convert the string ""%Y-%M-%D"" to ""YYYY-MM-DD"" for use in openpyxl NamedStyle number_format","TLDR: This is not a question about how to change the way a date is converted to a string, but how to convert between the two format types - This being ""%Y"" and ""YYYY"", the first having a % and the second having 4 x Y. -I have the following date format ""%Y-%M-%D"" that is used throughout an app. I now need to use this within an openpyxl NamedStyle as the number_format option. I can't use it directly as it doesn't like the format, it needs to be in ""YYYY-MM-DD"" (Excel) format. - -Do these two formats have names? (so I can Google a little more) -Short of creating a lookup table for each combination of %Y or %M to Y and M is there a conversion method? Maybe in openpyxl? I'd prefer not to use an additional library just for this! - -TIA!","Sounds like you are looking for a mapping between printf-style and Excel formatting. Individual date formats don't have names. And, due to the way Excel implements number formats I can't think of an easy way of covering all the possibilities. 
NamedStyles generally refer to a collection of formatting options such as font, border and not just number format.",0.3869120172231254,False,1,6781 -2020-05-27 14:20:48.347,How do iterators know what item comes next?,"As far as I understand it, iterators use lazy evaluation, meaning that they don't actually save each item in memory, but just contain the instructions on how to generate the next item. -However, let's say I have some list [1,2,3,4,5] and convert it into an iterator doing a = iter([1,2,3,4,5]). -Now, if iterators are supposed to save memory space because, as said, they contain the instructions on how to generate the next item that is requested, how do they do it in this example? How is the iterator a we created supposed to know what item comes next, without saving the entire list to memory?","Just think for a moment about this scenario ... You have a file of over a million elements; loading the whole list of elements into memory would be really expensive. By using an iterator, you can avoid making the program heavy by opening the file once and extracting only one element for the computation. You would save a lot of memory.",0.0,False,1,6782 -2020-05-27 15:21:31.810,How does module installation work in Python?,"[On a mac] -I know I can get packages doing pip install etc. -But I'm not entirely sure how all this works. -Does it matter which folder my terminal is in when I write this command? -What happens if I write it in a specific folder? -Does it matter if I do pip/pip3? -I'm doing a project, which had a requirements file. -So I went to the folder the requirements txt was in and did pip install requirements, but there was a specific tensorflow version, which only works for python 3.7. So I did """"""python3.7 -m pip install requirements"""""" and it worked (I'm not sure why). Then I got jupyter with brew and ran a notebook which used one of the modules in the requirements file, but it says there is no such module. 
-I suspect packages are linked to specific versions of python and I need to be running that version of python with my notebook, but I'm really not sure how. Is there some better way to be setting up my environment than just blindly pip installing stuff in random folders? -I'm sorry if this is not a well formed question, I will fix it if you let me know how.","There may be a difference between pip and pip3, depending on what you have installed on your system. pip is likely the pip used for python2 while pip3 is used for python3. -The easiest way to tell is to simply execute python and see what version starts. python will typically run the older 2.x version of python, and python3 is required to run python version 3.x. If you install into the python2 environment (using pip install or python -m pip install), the libraries will be available to the python version that runs when you execute python. To install them into a python3 environment, use pip3 or python3 -m pip install. -Basically, pip is writing module components into a library path, where import can find them. To do this for ALL users, use python3 or pip3 from the command line. To test it out, or use it on an individual basis, use a virtual environment as @Abhishek Verma said.",0.0,False,1,6783 -2020-05-27 16:15:33.287,How to display text on gmaps in Jupyter Python Notebook?,"Background: I'm using the gmaps package in Jupyter Python notebook. I have 2 points A (which is a marker) and B (which is a symbol) which are connected by a line. -Question: I want to somehow display text on this line that represents the distance between A and B. I have already calculated the distance between A and B but cannot display the text on the map. 
Is there any way to display text on the line?",I found that gmaps doesn't have this feature so I switched to the folium package which has labels and popups to display text on hovering over and clicking the line.,1.2,True,1,6784 -2020-05-28 11:22:01.147,Python ValueError if running on different laptop,"I've just built a function that is working fine on my laptop (Mac, but I'm working on a Windows virtual machine of the office laptop), but when I pass it to a colleague of mine, it raises a ValueError: -""You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat"" -The line of the code that raises the error is a simple merge that on my laptop works perfectly: -df = pd.merge(df1, df2, on = ""x"", how = ""outer"") -The input files are exactly the same (taken directly from the same remote folder). -I totally don't know how to fix the problem, and I don't understand why it works on my laptop (even if I open a new script or I restart the kernel, so no stored variables around) and on my colleague's it doesn't. -Thanks for your help!","my guess (a wild guess) is that the data from the 2 tab-separated CSV files (i.e., TSV files) is somehow converted using different locales on your computer and your colleague's computer. -Check if you have locale-dependent operations that could cause a number with the ""wrong"" decimal separator not to be recognized as a number. -This should not happen in pd.read_csv() because the decimal parameter has a well-defined default value of ""."". -But from an experience I had with timestamps in another context, one timestamp with a ""bad"" format can cause the whole column to be of the wrong type. 
So if just one number in just one of the two files, in the column you are merging on, has a decimal separator, and this decimal separator is only recognized as such on your machine, the join will succeed only on your machine (I'm supposing that pandas can join numeric columns even if they are of different type).",0.0,False,1,6785 -2020-05-28 19:55:54.440,"Can terraform run ""apply"" for multiple infrastructure/workspace in parallel?","We have one terraform instance and script which can create infra in azure. We would like to use the same scripts to create/update/destroy isolated infra for each one of our customers on azure. We have achieved this by assigning one workspace for each client, different var files and using backend remote state files on azure. -Our intent is to create a wrapper python program that could create multiple threads and trigger terraform apply in parallel for all workspaces. This does not seem to work as terraform runs for one workspace at a time. -Any suggestions/advice on how we can achieve parallel execution of terraform apply for different workspaces?","It's safe to run multiple Terraform processes concurrently as long as: - -They all have totally distinct backend configurations, both in terms of state storage and in terms of lock configuration. (If they have overlapping lock configuration then they'll mutex each other, effectively serializing the operations in spite of you running multiple copies.) -They work with an entirely disjoint set of remote objects, including those represented by both managed resources (resource blocks) and data resources (data blocks). - -Most remote APIs do not support any sort of transaction or mutex concept directly themselves, so Terraform cannot generally offer fine-grained mutual exclusion for individual objects. However, multiple runs that work with entirely separate remote objects will not interact with one another. 
-Removing a workspace (using terraform workspace delete) concurrently with an operation against that workspace will cause undefined behavior, because it is likely to delete the very objects Terraform is using to track the operation. -There is no built-in Terraform command for running multiple operations concurrently, so to do so will require custom automation that wraps Terraform.",0.9950547536867304,False,1,6786 -2020-05-28 20:40:14.903,How do you request device connection string in azure using python and iotHub library?,I am wondering how you can get a device connection string from IotHub using python in azure? Any ideas? The device object produced by IoTHubRegisterManager.Create_device_with_sas(...) doesn't seem to contain the connection string property.,"You can get a device connection string from the device registry. However, it is not recommended that you do that on a device. The reason is that you will need the IoT hub connection string to authenticate with your hub so that you can read the device registry. If your device is doing that and it is compromised then the perpetrator now has your IoT hub connection string and could cause all kinds of mayhem. You should specifically provide each device instance with its connection string. -Alternatively, you could research the Azure DPS service which will provide you with device authentication details in a secure manner.",0.0,False,1,6787 -2020-05-29 21:43:13.640,I am not allowed to run a python executable on other pcs,"I was making a game in tkinter, then I made it an executable with PyInstaller and sent it to my friends so they could run it and tell me how it feels. -It seems that they could download the file, but they can't open it because windows blocked it, telling them that it's not secure and not letting them choose to assume the risk or anything. -They tried to run it as administrator and still nothing changed. 
-What should I do or what should I add to my code so that windows can open it without a problem, and why does windows open other executable files without saying that (the current error that my executable gets)?","compress it as a .zip file and then it will most probably work -or install NSIS and create a windows installer for it.",0.0,False,1,6788 -2020-05-30 06:09:20.403,how to implement csrf without csrf token in django,"In django, if I want to use a csrf token, I need to embed a form with the csrf token in a django template. However, as a backend engineer I am co-working with a front-end engineer whose code is not available to me. So I cannot use the template. In this case, if I still want the csrf function, what should I do?","you should ask the coworker to embed the csrf token in the form he is sending you -you can get it from document.cookie if he doesn't want to or cannot use the {% csrf %} tag",0.0,False,1,6789 -2020-05-30 08:51:11.993,How to analyze crawl results,"I crawled and saved the user's website usage lists. -I want to analyze the results of the crawl, but I wonder what ways there are. -First of all, what I thought of was Word Cloud. -I am looking for a way to track a user's personal preferences using the user's computer history. -I want a way to visualize personal tendencies, etc. at a glance. Or I'm looking for a way to find out whether there is a risk of suicide or addiction as a result of the search. -thank you.","If you want to visualize data and do analysis on it, matplotlib would be a good start; again, it depends a lot on your data. Matplotlib and seaborn are plotting libraries that are good for representing quantitative data and getting some basic analysis at least.",0.0,False,2,6790 -2020-06-01 16:31:56.840,Surfaces or Sprites in Pygame?,"Good evening, I'm making a platformer and would like to know when you should use one or the other. 
-For example for: -1) The player controlled character -2) The textured tiles that make up the level -3) The background -Should/Could you make everything with sprites? -I just want to know how you would do it if you were to work on a pygame project. -I ask this because I see lots of pygame tutorials that explain adding textures by using surfaces, but then in other tutorials they use sprite objects instead.","Yes, you could make everything, including the background, with sprites. It usually does not make sense for the background though (unless you're doing layers of some form). -The rest often make sense as sprites, but that depends on your situation.",1.2,True,1,6791 -2020-06-01 22:09:24.457,"Threading in Python, ""communication"" between threads","I have two functions: def is_updated_database(): checks if the database is updated, and the other one, def scrape_links(database):, scrapes through a set of links (that it downloaded from the aforementioned database). -So what I want to do is, when def is_updated_database(): finds that the update is downloaded, I want to stop def scrape_links(database): and reload it with a new function parameter (database, which would be a list of new links). -My attempt: I know how to run two threads, but I have no idea how to ""connect"" them, so that if something happens to one then something should happen to the other one.","Well, one way to solve this problem may be checking the database state, and if something new appears there, you could return the new database object, and after that scrape the links; probably this is losing its multithreading functionality, but that's the way it works. -I don't think that any code examples are required here for you to understand what I mean.",0.0,False,1,6792 -2020-06-02 05:00:54.747,"Given the dataset, how to select the learning algorithm?","I have to build an ML model to classify sentences into different categories. I have a dataset with 2 columns (sentence and label) and 350 rows i.e. with shape (350, 2). 
To convert the sentences into numeric representation I've used TfIdf vectorization, and so the transformed dataset now has 452 columns (451 columns were obtained using TfIdf, and 1 is the label) i.e. with shape (350, 452). More generally speaking, I have a dataset with a lot more features than training samples. In such a scenario what's the best classification algorithm to use? Logistic Regression, SVM (again what kernel?), neural networks (again which architecture?), naive Bayes or is there any other algorithm? -How about if I get more training samples in the future (but the number of columns doesn't increase much), say with a shape (10000, 750)? -Edit: The sentences are actually narrations from bank statements. I have around 10 to 15 labels, all of which I have labelled manually. Eg. Tax, Bank Charges, Loan etc. In future I do plan to get more statements and I will be labelling them as well. I believe I may end up having around 20 labels at most.","With such a small training set, I think you would only get any reasonable results by getting some pre-trained language model such as GPT-2 and fine tune to your problem. That probably is still true even for a larger dataset, a neural net would probably still do best even if you train your own from scratch. Btw, how many labels do you have? What kind of labels are those?",0.0,False,1,6793 -2020-06-02 06:45:38.810,What is the most efficient way to push and pop a list in Python?,"In Python how do I write code which shifts off the last element of a list and adds a new one to the beginning - to run as fast as possible at execution? -There are good solutions involving the use of append, rotate etc but not all may translate to fast execution.","Don't use a list. -A list can do fast inserts and removals of items only at its end. You'd use pop(-1) and append, and you'd end up with a stack. -Instead, use collections.deque, which is designed for efficient addition and removal at both ends. 
Working on the ""front"" of a deque uses the popleft and appendleft methods. Note, ""deque"" means ""double ended queue"", and is pronounced ""deck"".",0.9950547536867304,False,1,6794 -2020-06-02 16:26:04.147,How to set tkinter Entry Border Radius,"This is my first question to here. I don't know how to set Border Radius for Tkinter Entry, Thanks for your Help!","There is no option to set a border radius on the tkinter or ttk Entry widgets, or any of the other widgets in those modules. Tkinter doesn't support the concept of a border radius.",1.2,True,1,6795 -2020-06-02 18:46:29.100,A new table for each user created,I am using Django 3.0 and I was wondering how to create a new database table linked to the creation of each user. In a practical sense: I want an app that lets users add certain stuff to a list but each user to have a different list where they can add their stuff. How should I approach this as I can't seem to find the right documentation... Thanks a lot !!!,"This is too long for a comment. -Creating a new table for each user is almost never the right way to solve a problem. Instead, you just have a userStuff table that maintains the lists. It would have columns like: - -userId -stuffId - -And, if you want the stuff for a given user, just use a where clause.",1.2,True,1,6796 -2020-06-02 19:12:03.813,How to enable PyCharm autocompletion for imported library (Discord.py),How do I enable method autocompletion for discord.py in PyCharm? Until now I've been doing it the hard way by looking at the documentation and I didn't even know that autocomplete for a library existed. So how do I enable it?,"The answer in my case was to first create a new interpreter as a new virtual environment, copy over all of the libraries I needed (there is an option to inherit all of the libraries from the previous interpreter while setting up the new one) and then follow method 3 from above. 
I hope this helps anyone in the future!",1.2,True,1,6797 -2020-06-03 18:20:40.193,How to install turicreate on windows 7?,"Can anyone tell me how to install turicreate on windows 7? I am using python version 3.7. I have tried using pip install -U turicreate to install it but it failed. -Thanks in advance","I am quoting from the Turicreate website: -Turi Create supports: - -macOS 10.12+ -Linux (with glibc 2.12+) -Windows 10 (via WSL) - -System Requirements - -Python 2.7, 3.5, or 3.6 -Python 3.7 macOS only -x86_64 architecture - -So Windows 7 is not supported in this case.",0.0,False,1,6798 -2020-06-04 04:50:55.740,Identify domain related important keywords from a given text,"I am relatively new to the field of NLP/text processing. I would like to know how to identify domain-related important keywords from a given text. -For example, if I have to build a Q&A chatbot that will be used in the Banking domain, the Q would be like: What is the maturity date for TRADE:12345 ? -From the Q, I would like to extract the keywords: maturity date & TRADE:12345. -From the extracted information, I would frame a SQL-like query, search the DB, retrieve the SQL output and provide the response back to the user. -Any help would be appreciated. -Thanks in advance.","So, this is where the work comes in. -Normally people start with a stop word list. There are several, choose wisely. But more than likely you'll experiment and/or use a base list and then add more words to that list. -Depending on the list it will take out - -""what, is, the, for, ?"" - -Since this is a pretty easy example, they'll all do that. But you'll notice that what is being done is just the opposite of what you wanted. You asked for domain-specific words but what is happening is the removal of all that other cruft (to the library). -From here it will depend on what you use. NLTK or Spacy are common choices. 
Regardless of what you pick, get a real understanding of concepts or it can bite you (like pretty much anything in Data Science). -Expect to start thinking in terms of linguistic patterns so, in your example: - -What is the maturity date for TRADE:12345 ? - -'What' is an interrogative, 'the' is a definite article, 'for' starts a prepositional phrase. -There may be other clues such as the ':' or that TRADE is in all caps. But, it might not be. -That should get you started but you might look at some of the other StackExchange sites for deeper expertise. -Finally, you want to break a question like this into more than one question (assuming that you've done the research and determined the question hasn't already been asked -- repeatedly). So, NLTK and NLP are decently new, but SQL queries are usually a Google search.",0.0,False,1,6799 -2020-06-04 12:37:35.410,Devpi REST API - How to retrieve versions of packages,"I'm trying to retrieve versions of all packages from a specific index. I'm trying to send a GET request with the /user/index/+api suffix but it does not respond with anything interesting. I can't find docs about the devpi rest api :( -Does anyone have an idea how I could do this? -Best regards, Matt.",Simply add the header Accept: application/json - it works!,1.2,True,1,6800 -2020-06-04 13:32:52.410,Use HTML interface to control a running python script on a lighttpd server,"I am trying to find out what the best tool is for my project. -I have a lighttpd server running on a raspberry pi (RPi) and a Python3 module which controls the camera. I need a lot of custom control of the camera, and I need to be able to change modes on the fly. -I would like to have a python script continuously running which waits for commands from the lighttpd server which will ultimately come from a user interacting with an HTML based webpage through an intranet (no outside connections). -I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. 
I would like to continue using the lighttpd server rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. I can't separate them into multiple functions because only one script can control the camera at a time. -Is the right solution to set up a Flask app and have the lighttpd send requests there, or is there a better tool for this?","You have several questions merged into one, and some of them are opinion based questions, as such I am going to avoid answering those. These are the opinion based questions. - -I am trying to find out what the best tool is for my project. -Is the right solution to set up a Flask app and have the lighttpd send requests there -Is there a better tool for this? - -The reason I point this out is not because your question isn't valid but because oftentimes questions like these will get flagged and/or closed. Take a look at this for future reference. -Now to answer this question: -"" I don't know how to interact with the script once it is actually running to execute individual functions"" -Try doing it this way: - -Modify your script to use threads and/or processes. -You will have, for example, a continuously running thread which would be the camera. -You would have another non-blocking thread listening for IO commands. -Your IO commands would be coming through command line arguments. -Your IO thread, upon receiving an IO command, would redirect your running camera thread to a specific function as needed. - -Hope that helps and good luck!!",0.0,False,2,6801 -2020-06-04 13:32:52.410,Use HTML interface to control a running python script on a lighttpd server,"I am trying to find out what the best tool is for my project. -I have a lighttpd server running on a raspberry pi (RPi) and a Python3 module which controls the camera. I need a lot of custom control of the camera, and I need to be able to change modes on the fly. 
-I would like to have a python script continuously running which waits for commands from the lighttpd server which will ultimately come from a user interacting with an HTML based webpage through an intranet (no outside connections). -I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. I can't separate them into multiple functions because only one script can control the camera at a time. -Is the right solution to set up a Flask app and have the lighttpd send requests there, or is there a better tool for this?","I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. - -Given your experience, one solution is to do what you know. lighttpd can execute your script via FastCGI. Python3 supports FastCGI with Flask (or other frameworks). A python3 app which serially processes requests will have one process issuing commands to the camera. - -I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. - -Configure your Flask app to run as a FastCGI app instead of as a standalone webserver.",1.2,True,2,6801 -2020-06-04 17:52:29.977,How to prevent direct access to cert files when connecting MQTT client with Python,"I am using the pho MQTT client library successfully to connect to AWS. After the mqtt client is created, providing the necessary keys and certificates is done with a call to client.tls_set() This method requires file paths to root certificate, own certificate and private key file. 
-All is well and life is good except that I now need to provide this code to external contractors whom should not have direct access to these cert and key files. The contractors have a mix of PC and macOS systems. On macOS we have keychain I am familiar with but do not know how to approach this with python - examples/library references would be great. On the PC I have no idea which is the prevalent mechanism to solve this. -To add to this, I have no control over the contractor PCs/Macs - i.e., I have no ability to revoke an item in their keychain. How do I solve this? -Sorry for being such a noob in security aspects. No need to provide complete examples, just references to articles to read, courses to follow and keywords to search would be great - though code examples will be happily accepted also of course.","Short answer: you don't. -Longer answer: -If you want them to be able connect then you have no choice but to give them the cert/private key that identifies that device/user. -The control you have is issue each contractor with their own unique key/cert and if you believe key/cert has been miss used, revoke the cert at the CA and have the broker check the revocation list. -You can protect the private key with a password, but again you have to either include this in the code or give it to the contractor. -Even if the contractors were using a device with a hardware keystore (secure element) that you could securely store the private key in, all that would do is stop the user from extracting the key and moving it to a different machine, they would still be able to make use of the private key for what ever they want on that machine. 
-The best mitigation is to make sure the certificate has a short life and control renewing the certificate, this means if a certificate is leaked then it will stop working quickly even if you don't notice and explicitly revoke it.",0.3869120172231254,False,1,6802 -2020-06-04 20:38:18.423,Importing module to VS code,"im very new in programming and i learn Python. -I'm coding on mac btw. -I'd like to know how can i import some modules in VS code. -For exemple, if i want to use the speedtest module i have to download it (what i did) and then import it to my code. But it never worked and i always have the error no module etc. -I used pip to install each package, i have them on my computer but i really don't know to import them on VS code. Even with the terminal of the IDE. -I know it must be something very common for u guys but i will help me a lot. -Thx","Quick Summary -This might not be an issue with VS Code. -Problem: The folder to which pip3 installs your packages is not on your $PATH. -Fix: Go to /Applications/Python 3.8 in Finder, and run the Update Shell Profile.command script. Also, if you are using pip install , instead of pip3 install that might be your problem. -Details -Your Mac looks for installed packages in several different folders on your Mac. The list of folders it searches is stored in an environment variable called $PATH. Paths like /Library/Frameworks/Python.framework/Versions/3.8/bin should be in the $PATH environment variable, since that's where pip3 installs all packages.",1.2,True,1,6803 -2020-06-05 09:05:11.073,How to install pip and python modules with a single batch file?,"I really don't understand how batch files work. But I made a python script for my father to use in his work. And I thought installing pip and necessary modules with a single batch file would make it a lot easier for him. So how can I do it? 
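The $PATH lookup described in the answer above can be inspected directly; a small diagnostic sketch:

```python
import os
import shutil

# PATH is an os.pathsep-separated list of directories searched for executables
for directory in os.environ["PATH"].split(os.pathsep):
    print(directory)

# shutil.which reports the first matching executable on PATH, or None if
# (as in the problem described above) the install folder is not on PATH
print(shutil.which("pip3"))
```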
The modules I'm using in script are: xlrd, xlsxwriter and tkinter.","You can create a requirements.txt file, then use pip install -r requirements.txt to install all the modules. If you are working in a virtual environment that contains only the modules your project uses, you can generate that file with pip3 freeze > requirements.txt. This is not a batch file, but it will work just fine and it is pretty easy",0.296905446847765,False,1,6804 -2020-06-05 12:25:21.517,Python Contour Plot/HeatMap,"I have x and y coordinates in a df from LoL matches and i want to create a contour plot or heat map to show where the player normally moves in a match. -Does any one know how can I do it?","A contour plot or heat map needs 3 values. You have to provide x, y and z values in order to plot a contour since x and y give the position and z gives the value of the variable you want to show the contour of as a variable of x and y. -If you want to show the movement of the players as a function of time you should look at matplotlib's animations. Or if you want to show the ""players density field"" you have to calculate it.",0.0,False,1,6805 -2020-06-06 13:00:36.307,Login required in django,"I am developing ecommerce website in django . -I have view ( addToCart) -I want sure before add to cart if user logged in or not -so that i use @login_required('login') before view -but when click login it show error (can't access to page ). -Note that: normal login is working","Please check the following -1. Add login url on settings -2. Add redirect url on login required decorator -3.
If you create a custom login view make sure to check next kwargs",0.0,False,1,6806 -2020-06-06 23:06:30.737,Running all Python scripts with the same name across many directories,"I have a file structure that looks something like this: -Master: - -First - - -train.py -other1.py - -Second - - -train.py -other2.py - -Third - - -train.py -other3.py - - -I want to be able to have one Python script that lives in the Master directory that will do the following when executed: - -Loop through all the subdirectories (and their subdirectories if they exist) -Run every Python script named train.py in each of them, in whatever order necessary - -I know how to execute a given python script from another file (given its name), but I want to create a script that will execute whatever train.py scripts it encounters. Because the train.py scripts are subject to being moved around and being duplicated/deleted, I want to create an adaptable script that will run all those that it finds. -How can I do this?","Which OS are you using? -If Ubuntu/CentOS try this combination: -import os -# run this from Master: find every file named train.py in the subdirectories and execute each one with python -os.system(""find . -type f -name train.py -exec python {} \;"")",0.1352210990936997,False,2,6807 -2020-06-06 23:06:30.737,Running all Python scripts with the same name across many directories,"I have a file structure that looks something like this: -Master: - -First - - -train.py -other1.py - -Second - - -train.py -other2.py - -Third - - -train.py -other3.py - - -I want to be able to have one Python script that lives in the Master directory that will do the following when executed: - -Loop through all the subdirectories (and their subdirectories if they exist) -Run every Python script named train.py in each of them, in whatever order necessary - -I know how to execute a given python script from another file (given its name), but I want to create a script that will execute whatever train.py scripts it encounters. Because the train.py scripts are subject to being moved around and being duplicated/deleted, I want to create an adaptable script that will run all those that it finds. -How can I do this?","If you are using Windows you could try running them from a PowerShell script. You can run two python scripts at once with just this: -python Test1.py -python Folder/Test1.py -And then add a loop and/or a function that goes searching for the files. Because it's Windows PowerShell, you have a lot of power when it comes to the filesystem and controlling Windows in general.",0.1352210990936997,False,2,6807 -2020-06-07 13:31:11.780,How to transfer data from Quantopian to Excel,"Anyone know how you get a dataframe from Quantopian to excel - I try - results.to_excel -results are the name of my dataframe","Try this: -Result.to_csv(""result.csv"") -where Result is the name of your DataFrame and to_csv() is a DataFrame method; the resulting .csv file opens directly in Excel",0.0,False,1,6808 -2020-06-07 15:39:09.570,How do i delete instances of a class from within it,"So in my case, I have a class Gnome for example and I want to destroy each object of this class when its variable health reaches 0.
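The to_csv suggestion above can be sketched end-to-end; the small frame here is made-up stand-in data for the Quantopian results:

```python
import pandas as pd

# made-up stand-in for the `results` dataframe from Quantopian
results = pd.DataFrame({"symbol": ["AAPL", "MSFT"], "returns": [0.01, 0.02]})

# .csv files open directly in Excel
results.to_csv("results.csv", index=False)

# for a native .xlsx file, pandas also offers to_excel (needs an engine
# such as openpyxl installed):
# results.to_excel("results.xlsx", index=False)
```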
Is there a way for me to delete each instance of Gnome when its hp is 0 or should I ""mark it for death"" and delete everything that was marked? Either way, how can I do this?","Unfortunately, there isn't a way to do what you're wanting. Every Python object maintains a record of how many references there are to it. Once the reference count reaches 0, the Python garbage collector will clean it up. -As long as you still have references to the instances, they will persist.",0.0,False,1,6809 -2020-06-07 19:41:32.177,Use Pycharm and Spyder Together,"Recently i read this comment -I like Spyder for interacting with my variables and PyCharm for editing my scripts. Alternative Solution: use both simultaneously. As I edit in PyCharm (on Mac OS), the script updates live in spyder. Best of both worlds! -i want to understand how to use them together and live update the script in Spyder ?","After some research, I find that there is no variable explorer like Sypder option in PyCharm. To work with PyCharm and Spyder together, we need to use the two IDEs parallelly i.e., to write the code we can use the PyCharm and to view the Spyder we can just alt tab to the Spyder window and re run the code in Spyder. It will not take much time to re run the code again. We just need to press Ctrl + A and Ctrl + Enter, then the variables will get updated in the variable explorer. Spyder variable view is amazing especially data frames. -Only thing we need to remember is, we need to install the packages in both PyCharm and Sypder. If we install the package in PyCharm, it will not reflect in Spyder. So we need to install through Conda Prompt.",0.0,False,1,6810 -2020-06-08 04:26:50.710,How can i make a button in my web that when it is being clicked. 
it can send a data in my python script,"I have a project and i was wondering how can i make a button in my web that when it is being clicked it can display a string in my python terminal -Thank you in advance","You need to create an http server in your Python, and call it with fetch in JavaScript. You can pass data in the query parameters.",0.0,False,1,6811 -2020-06-08 13:35:29.377,"Camcapture on Notebook with showing the video on Pepper. (Choreographe, Python)","I have a question about a programm with Python. -I must capture my Notebookcam with Pepper and show it on the Display from Pepper. -Now I have the Problem, the programming with Choreograph is a little bit different and I don't know how I can handle this Programm. I would be happy if you could answer. -Thanks.","You cannot use Choregraphe to retrieve the video remotely because the applications made using Choregraphe are run on the robot, not on your PC. -You need to write separate program on your PC to retrieve the video.",0.0,False,1,6812 -2020-06-08 20:28:02.807,Dunders no longer combined in Pycharm?,"I recently switched to the new Pycharm version and in the contrary to the previous versions it seems like two underscores are no longer combined like this: __ -Does someone know how to switch it back, so the IDE combines them?",Please try to enable: File - Settings - Editor - Font - Enable font ligatures,1.2,True,1,6813 -2020-06-09 14:36:13.823,How to bring a web browser with .ipybn link opened in my Jupyter?,"I have received a link with .ipynb link. I am new to Python and Jupyter and I need to open the link to work on the details inside. -The link opens in my internet browser and I couldn't properly see the contents and bring it in to a Jupyter notebook. -Could anyone please give me a tip how to handle such links for Python/Jupyter?","Adding to Vinzee's answer: jupyter notebook starts in your home folder and you can't move up from there; only down into subfolders. 
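The http-server-plus-fetch idea from the answer above can be sketched with only the standard library; the route, port and parameter names are assumptions:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class ClickHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # data sent by the page's fetch() call arrives as query parameters
        params = parse_qs(urlparse(self.path).query)
        print("button clicked, got:", params)  # shows up in the Python terminal
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(b"ok")

# serve_forever() blocks, so run it as your script's main loop:
# HTTPServer(("localhost", 8000), ClickHandler).serve_forever()
```

On the page side, the button's onclick handler would then call something like fetch('http://localhost:8000/?msg=hello').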
Open jupyter to see what folder it starts in, and make sure that you put the .ipynb file in that folder or one of its subfolders.",1.2,True,1,6814 -2020-06-09 22:36:41.577,Project directory accidentally in sys.path - how to remove it?,"I don't know how it happened, but my sys.path now apparently contains the path to my local Python project directory, let's call that /home/me/my_project. (Ubuntu). -echo $PATH does not contain that path and echo $PYTHONPATH is empty. -I am currently preparing distribution of the package and playing with setup.py, trying to always work in an virtualenv. Perhaps I messed something up while not having a virtualenv active. Though I trying to re-install using python3 setup.py --record (in case I did an accidental install) fails with insufficient privileges - so I probably didn't accidentally install it into the system python. -Does anyone have an idea how to track down how my module path got to the sys.path and how to remove that?","I had the same problem. I don't have the full understanding of my solution, but here it is nonetheless. -My solution -Remove my package from site-packages/easy-install.pth -(An attempt at) explanation -The first hurdle is to understand that PYTHONPATH only gets added to sys.path, but is not necessarily equal to it. We are thus after what adds the package into sys.path. -The variable sys.path is defined by site.py. -One of the things site.py does is automatically add packages from site-packages into sys.path. -In my case, I incorrectly installed my package as a site-package, causing it to get added to easy-install.pth in site-packages and thus its path into sys.path.",0.0,False,1,6815 -2020-06-10 14:46:18.217,Low-latecy response with Ray on large(isch) dataset,"TL;DR -What's the fasted way to get near-zero loading time for a pandas dataset I have in memory, using ray? 
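The site.py mechanism described in the sys.path answer above can be inspected directly with a couple of lines:

```python
import site
import sys

# every directory Python searches on import, in order
for entry in sys.path:
    print(entry)

# the directories whose .pth files (e.g. easy-install.pth) get expanded
# into sys.path by site.py at interpreter startup
if hasattr(site, "getsitepackages"):  # absent in some old virtualenvs
    print(site.getsitepackages())
```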
-Background -I'm making an application which uses semi-large datasets (pandas dataframes between 100MB to 700MB) and are trying to reduce each query time. For a lot of my queries the data loading is the majority of the response times. The datasets are optimized parquet files (categories instead of strings, etc) which only reads the columns it needs. -Currently I use a naive approach that per-requests loads the require dataset (reading the 10-20 columns out of 1000 I need from the dataset) and then filter out the rows I need. -A typical request: - -Read and parse the contract (~50-100ms) -Load the dataset (10-20 columns) (400-1200ms) -Execute pandas operations (~50-100ms) -Serialise the results (50-100ms) - -I'm now trying to speed this up (reduce or remove the load dataset step). -Things I have tried: - -Use Arrow's new row-level filtering on the dataset to only read the rows I need as well. This is probably a good way in the future, but for now the new Arrow Dataset API which is relies on is significantly slower than reading the full file using the legacy loader. -Optimize the hell out of the datasets. This works well to a point, where things are in categories, the data types is optimized. -Store the dataframe in Ray. Using ray.put and ray.get. However this doesn't actually improve the situation since the time consuming part is deserialization of the dataframe. -Put the dataset in ramfs. This doesn't actually improve the situation since the time consuming part is deserialization of the dataframe. -Store the object in another Plasma store (outside of ray.put) but obviously the speed is the same (even though I might get some other benefits) - -The datasets are parquet files, which is already pretty fast for serialization/deserialization. I typically select about 10-20 columns (out of 1000) and about 30-60% of the rows. -Any good ideas on how to speed up the loading? 
I haven't been able to find any near zero-copy operations for pandas dataframes (i.e without the serialization penalty). -Things that I am thinking about: - -Placing the dataset in an actor, and use one actor per thread. That would probably give the actor direct access to the dataframe without any serialization, but would require me to do a lot of handling of: - -Making sure I have an actor per thread -Distribute requests per threads -""Recycle"" the actors when the dataset gets updated - - -Regards, -Niklas","After talking to Simon on Slack we found the culprit: - -simon-mo: aha yes objects/strings are not zero copy. categorical or fixed length string works. for fixed length you can try convert them to np.array first - -Experimenting with this (categorical values, fixed length strings etc) allows me not quite get zero-copy but at least fairly low latency (~300ms or less) when using Ray Objects or Plasma store.",1.2,True,1,6816 -2020-06-10 15:08:46.340,linking web application's backend in python and frontend in flutter,I am making a CRM web application. I am planning to do its backend in python(because I only know that language better) and I have a friend who uses flutter for frontend. Is it possible to link these two things(flutter and python backend)? If yes how can it be done...and if no what are the alternatives I have?,I used $.ajax() method in HTML pages and then used request.POST['variable_name_used_in_ajax()'] in the views.py,1.2,True,2,6817 -2020-06-10 15:08:46.340,linking web application's backend in python and frontend in flutter,I am making a CRM web application. I am planning to do its backend in python(because I only know that language better) and I have a friend who uses flutter for frontend. Is it possible to link these two things(flutter and python backend)? If yes how can it be done...and if no what are the alternatives I have?,"Yes you both can access same Django rest framework Backend. 
Try searching for rest API using Django rest framework and you are good to go. -Other alternatives are Firebase or creating rest API with PHP. -You would need to define API endpoints for different functions of your app like login,register etc. -Django rest framework works well with Flutter. I have tried it. You could also host it in Heroku -Use http package in flutter to communicate with the Django server.",0.0,False,2,6817 -2020-06-10 16:24:39.130,Building Tensorflow 1.5,"I have an old Macbook Pro 3,1 running ubuntu 20.04 and python 3.8. The mac CPU doesn't have support for avx (Advanced Vector Extensions) which is needed for tensorflow 2.2 so whilst tensorflow installs, it fails to run with the error: - -illegal instruction (core dumped) - -I've surfed around and it seems that I need to use tensorflow 1.5 however there is no wheel for this for my configuration and I have the impression that I need to build one for myself. -So here's my question... how do I even start to do that? Does anyone have a URL to Building-Stuff-For-Dummies or something similar please? (Any other suggestions also welcome) -Thanks in advance for your help",Usually there are instructions for building in the repository's README.md. Isn't there such for TensorFlow? It would be odd.,0.0,False,2,6818 -2020-06-10 16:24:39.130,Building Tensorflow 1.5,"I have an old Macbook Pro 3,1 running ubuntu 20.04 and python 3.8. The mac CPU doesn't have support for avx (Advanced Vector Extensions) which is needed for tensorflow 2.2 so whilst tensorflow installs, it fails to run with the error: - -illegal instruction (core dumped) - -I've surfed around and it seems that I need to use tensorflow 1.5 however there is no wheel for this for my configuration and I have the impression that I need to build one for myself. -So here's my question... how do I even start to do that? Does anyone have a URL to Building-Stuff-For-Dummies or something similar please? 
(Any other suggestions also welcome) -Thanks in advance for your help",Update: I installed python 3.6 alongside the default 3.8 and then installed tensorflow 1.5 and it looks like it works now (albeit with a few 'future warnings'.),0.0,False,2,6818 -2020-06-10 17:22:00.360,xgboost how to copy model,"In the xgboost documentation they refer to a copy() method, but I can't figure out how to use it since if foo is my model, neither bar = foo.copy() nor bar=xgb.copy(foo) works (xgboost can't find a copy() attribute of either the module or the model). Any suggestions?","It turns out that copy() is a method of the Booster object, but a (say) XGBClassifier is not one, so if using the sklearn front end, you do -bar = foo.get_booster().copy()",0.2012947653214861,False,1,6819 -2020-06-11 02:21:57.910,Need help getting data using Selenium,"I'm trying to get Python and selenium to store the ""1292"" in the following html script and cant figure out why it won't work. I've tried using find_element_by_xpath as well as placing a wait before it and I keep getting this error ""Message: no such element: Unable to locate element:"" -Any ideas on how else I can accomplish this? Thanks - - 1292 - ","You can try: -driver.find_element_by_xpath(""//tspan[text()='1292']"").text -to obtain the string ""1292"".",0.0,False,1,6820 -2020-06-11 07:07:02.407,Alternatives for interaction between C# and Python application -- Pythonnet vs DLL vs shared memory vs messaging,"We have a big C# application, would like to include an application written in python and cython inside the C# -Operating system: Win 10 -Python: 2.7 -.NET: 4.5+ -I am looking at various options for implementation here. -(1) pythonnet - embed the python inside the C# application, if I have abc.py and inside the C#, while the abc.py has a line of ""import numpy"", does it know how to include all python's dependencies inside C#? 
-(2) Convert the python into .dll - Correct me if i am wrong, this seems to be an headache to include all python files and libraries inside clr.CompileModules. Is there any automatically solution? (and clr seems to be the only solution i have found so far for building dll from python. -(3) Convert .exe to .dll for C# - I do not know if i can do that, all i have is the abc.exe constructed by pyinstaller -(4) shared memory seems to be another option, but the setup will be more complicated and more unstable? (because one more component needs to be taken care of?) -(5) Messaging - zeromq may be a candidate for that. -Requirements: -Both C# and python have a lot of classes and objects and they need to be persistent -C# application need to interact with Python Application -They run in real-time, so performance for communication does matter, in milliseconds space. -I believe someone should have been through a similar situation and I am looking for advice to find the best suitable solution, as well as pros and cons for above solution. -Stability comes first, then the less complex solution the better it is.",For variant 1: in my TensorFlow binding I simply add the content of a conda environment to a NuGet package. Then you just have to point Python.NET to use that environment instead of the system Python installation.,0.0,False,1,6821 -2020-06-11 15:48:31.190,Test interaction between flask apps,"I have a flask app that is intended to be hosted on multiple host. That is, the same app is running on different hosts. Each host can then send a request to the others host to take some action on the it's respective system. -For example, assume that there is systems A and B both running this flask app. A knows the IP address of B and the port number that the app is hosted on B. A gets a request via a POST intended for B. A then needs to forward this request to B. 
-I have the forwarding being done in a route that simply checks the JSON attached to the POST to see if it is the intended host. If not, it uses python's requests library to make a POST request to the other host. -My issue is how do I simulate this environment (two different instances of the same app with different ports) in a python unittest so I can confirm that the forwarding is done correctly? -Right now I am using the app.test_client() to test most of the routes but as far as I can tell the app.test_client() does not contain a port number or IP address associated with it. So having the app POST to another app.test_client() seems unlikely. -I tried hosting the apps in different threads but there does not seem to be a clean and easy way to kill the thread once app.run() starts, can't join as app.run() never exits. In addition, the internal state of the app (app.config) would be hidden. This makes verifying that A does not do the request and B does hard. -Is there any way to run two flask apps simultaneously on different port numbers and still get access to both apps' app.config? Or am I stuck using the threads and finding some other way to make sure A does not execute the request and B does? -Note: these apps do not have any forms so there is no CSRF.","I ended up doing two things. One, I started using the patch decorator from the mock library to fake the response from system B. More specifically, I use @patch('requests.post') and then in my code I set the return value to ""< Response [200]>"". However this only makes sure that requests.post is called, not that the second system processed it correctly. The second thing I did was write a separate test that makes the request that should have been sent by A and sends it to the system to check if it processes it correctly. In this manner systems A and B are never running at the same time. Instead the tests just fake their responses/requests.
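The @patch technique described above can be sketched like this; forward_to_b is a hypothetical stand-in for system A's forwarding route, the target URL is made up, and this assumes the requests library is installed:

```python
from unittest.mock import MagicMock, patch

def forward_to_b(payload, target_url):
    """Hypothetical stand-in for system A forwarding a POST to system B."""
    import requests
    return requests.post(target_url, json=payload)

# fake system B's reply so no second server needs to run during the test
with patch("requests.post") as mock_post:
    mock_post.return_value = MagicMock(status_code=200)
    resp = forward_to_b({"host": "B"}, "http://system-b:5000/action")

print(mock_post.called)  # True: A attempted the forward
print(resp.status_code)  # 200: the faked reply from B
```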
-In summary, I needed to use @patch('requests.post') to fake the reply from B saying it got the request. Then, in a different test, I set up B and made a request to it.",0.0,False,1,6822 -2020-06-11 23:53:40.030,How do I perform crosscorelation between two time series and what transformations should I perform in python?,"I have two-time series datasets i.e. errors received and bookings received on a daily basis for three years (a few million rows). I wish to find if there is any relationship between them. As of now, I think that cross-correlation between these two series might help. In order to do so, should I perform any transformations like stationarity, detrending, deseasonality, etc. If this is correct, I'm thinking of using ""scipy.signal.correlate"" but really want to know how to interpret the result?","scipy.signal.correlate is for the correlation of time series. For series y1 and y2, correlate(y1, y2) returns a vector that represents the time-dependent correlation: the k-th value represents the correlation with a time lag of ""k - N + 1"", so that the N+1 th element is the similarity of the time series without time lag: close to one if y1 and y2 have similar trends (for normalized data), close to zero if the series are independent. -numpy.corrcoef takes two arrays and aggregates the correlation in a single value (the ""time 0"" of the other routine), the Pearson correlation, and does so for N rows, returning an NxN array of correlations. corrcoef normalizes the data (divides the results by their rms value), so that the diagonal is supposed to be 1 (average self correlation). -The questions about stationarity, detrending, and deseasonality depend on your specific problem. The routines above consider ""plain"" data without consideration for their signification.",1.2,True,1,6823 -2020-06-12 19:17:12.797,How to remove superuser on the system in Django?,"I was doing some project by using django -and I realized that I forgot to activate virtualenv.
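The two routines discussed in the cross-correlation answer above can be sketched with numpy alone, on made-up daily series where errors loosely track bookings:

```python
import numpy as np

rng = np.random.default_rng(0)
bookings = rng.normal(100, 10, 365)              # made-up daily bookings
errors = 0.5 * bookings + rng.normal(0, 5, 365)  # errors loosely track them

# Pearson correlation matrix: diagonal is 1 (self-correlation),
# off-diagonal well above 0 here because the series share a trend
c = np.corrcoef(errors, bookings)
print(c[0, 1])

# lag-dependent similarity, the same structure scipy.signal.correlate returns;
# subtracting the mean first is a crude detrend
lagged = np.correlate(errors - errors.mean(), bookings - bookings.mean(), mode="full")
print(lagged.argmax() - (len(bookings) - 1))  # best lag offset, near 0 for aligned series
```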
-I already made some changes and applied it not on the venv, -and created superuser on the system. - -How to find any changes on the system? -how to remove superuser that I made on the system -and what are the cmd commands for that?","If you haven't set up an additional database for your project and you have used django-admin startproject, you'll just have a standard Django setup, and you will be using sqlite. With this setup, your database is stored in a file in your root directory (for the project) called db.sqlite3. -This is where the superuser you have created will be stored. So it does not matter if the virtualenv was activated or not. Your superuser will have been created in the right place. -TLDR: No need to worry, the superuser you created will most likely be in the right place.",1.2,True,1,6824 -2020-06-12 19:21:05.707,How to get python to search for whole numbers in a string-not just digits,"Okay please do not close this and send me to a similar question because I have been looking for hours at similar questions with no luck. -Python can search for digits using re.search([0-9]) -However, I want to search for any whole number. It could be 547 or 2 or 16589425. I don't know how many digits there are going to be in each whole number. -Furthermore I need it to specifically find and match numbers that are going to take a form similar to this: 1005.2.15 or 100.25.1 or 5.5.72 or 1102.170.24 etc. -It may be that there isn't a way to do this using re.search but any info on what identifier I could use would be amazing.","Assuming that you're looking for whole numbers only, try re.search(r""[0-9]+"", your_string); the + makes the pattern match one or more digits, so numbers of any length are found",0.0,False,1,6825 -2020-06-12 20:05:50.403,Dynamic Select Statement In Python,"I'm using Python with cx_Oracle, and I'm trying to do an INSERT....SELECT. Some of the items in the SELECT portion are variable values. I'm not quite sure how to accomplish this. Do I bind those variables in the SELECT part, or just concatenate a string?
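The digit pattern from the answer above, plus one for the dotted numbers the question mentions, can be sketched as follows (note that re.search takes the string to scan as its second argument):

```python
import re

# one or more digits: whole numbers of any length
m = re.search(r"[0-9]+", "order 16589425 shipped")
print(m.group())  # -> 16589425

# dotted sequences such as 1005.2.15 or 100.25.1:
# digits followed by one or more ".digits" groups
dotted = re.findall(r"\d+(?:\.\d+)+", "builds 1005.2.15 and 5.5.72 shipped")
print(dotted)  # -> ['1005.2.15', '5.5.72']
```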
- - v_insert = (""""""\ - INSERT INTO editor_trades - SELECT "" + v_sequence + "", "" + issuer_id, UPPER("" + p_name + ""), "" + p_quarter + "", "" + p_year + - "", date_traded, action, action_xref, SYSDATE - FROM "" + p_broker.lower() + ""_tmp"") """""") - -Many thanks!","With Oracle DB, binding only works for data, not for SQL statement text (like column names) so you have to do concatenation. Make sure to allow-list or filter the variables (v_sequence etc) so there is no possibility of SQL injection security attacks. You probably don't need to use lower() on the table name, but that's not 100% clear to me since your quoting currently isn't valid.",0.0,False,1,6826 -2020-06-14 05:35:32.993,Heroku won't run latest python file,"I use Heroku to host my discord.py bot, and since I've started using sublime merge to push to GitHub (I use Heroku GitHub for it), Heroku hasn't been running the latest file. The newest release is on GitHub, but Heroku runs an older version. I don't think it's anything to do with sublime merge, but it might be. I've already tried making a new application, but same problem. Anyone know how to fix this? -Edit: I also tried running Heroku bash and running the python file again","1) Try to deploy branch (maybe another branch) -2) Enable automatic deploy",0.3869120172231254,False,1,6827 -2020-06-14 09:54:51.873,Is it faster and more memory efficient to manipulate data in Python or PostgreSQL?,"Say I had a PostgreSQL table with 5-6 columns and a few hundred rows. Would it be more effective to use psycopg2 to load the entire table into my Python program and use Python to select the rows I want and order the rows as I desire? Or would it be more effective to use SQL to select the required rows, order them, and only load those specific rows into my Python program. -By 'effective' I mean in terms of: - -Memory Usage. -Speed. - -Additionally, how would these factors start to vary as the size of the table increases? 
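The bind-values-but-not-identifiers rule from the answer above is easy to demonstrate; sqlite3 stands in for cx_Oracle here (the table name and columns are made up), but the principle is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE broker_tmp (issuer_id INTEGER, name TEXT)")
conn.execute("INSERT INTO broker_tmp VALUES (1, 'acme')")

# data values can be bound as parameters...
rows = conn.execute("SELECT name FROM broker_tmp WHERE issuer_id = ?", (1,)).fetchall()
print(rows)  # -> [('acme',)]

# ...but identifiers (table/column names) cannot, so they must be
# concatenated -- after an allow-list check to rule out SQL injection
table = "broker_tmp"
assert table in {"broker_tmp", "other_tmp"}, "unexpected table name"
rows2 = conn.execute("SELECT issuer_id FROM " + table).fetchall()
print(rows2)  # -> [(1,)]
```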
Say, the table now has a few million rows?","Actually, if you are comparing data that is already loaded into memory to data being retrieved from a database, then the in-memory operations are often going to be faster. Databases have overhead: - -They are in separate processes on the same server or on a different server, so data and commands need to move between them. -Queries need to be parsed and optimized. -Databases support multiple users, so other work may be going on using up resources. -Databases maintain ACID properties and data integrity, which can add additional overhead. - -The first two of these in particular add overhead compared to equivalent in-memory operations for every query. -That doesn't mean that databases do not have advantages, particularly for complex queries: - -They implement multiple different algorithms and have an optimizer to choose the best one. -They can take advantage of more resources -- particularly by running in parallel. -They can (sometimes) cache results saving lots of time. - -The advantage of databases is not that they provide the best performance all the time. The advantage is that they provide good performance across a very wide range of requests with a simple interface (even if you don't like SQL, I think you need to admit that it is simpler, more concise, and more flexible than writing code in a 3rd generation language). -In addition, databases protect data, via ACID properties and other mechanisms to support data integrity.",1.2,True,1,6828 -2020-06-15 04:21:10.657,Creating a stop in a While loop - Python,"I am working on a code that is supposed to use a while loop to determine if the number inputted by the user is the same as the variable secret_number = 777. -the following criteria are: -will ask the user to enter an integer number; -will use a while loop; -will check whether the number entered by the user is the same as the number picked by the magician.
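The exercise spelled out above can be sketched as a small loop. A minimal version, assuming secret_number = 777 as the question states; input() is replaced by a parameter here so the function can be exercised without a console:

```python
def magician_loop(guesses, secret_number=777):
    """Keep asking until a guess matches; return the messages that would be printed."""
    messages = []
    guesses = iter(guesses)
    while True:
        number = int(next(guesses))  # stands in for int(input("Enter a number: "))
        if number == secret_number:
            messages.append(str(number))
            messages.append("Well done, muggle! You are free now.")
            break
        messages.append("Ha ha! You're stuck in my loop!")
    return messages

print(magician_loop(["5", "123", "777"]))
```

The while True loop only exits through the break, which fires exactly when the guess equals the secret number.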
If the number chosen by the user is different than the magician's secret number, the user should see the message ""Ha ha! You're stuck in my loop!"" and be prompted to enter a number again. -If the number entered by the user matches the number picked by the magician, the number should be printed to the screen, and the magician should say the following words: ""Well done, muggle! You are free now."" -If you also have any tips on how to use the while loop, that would be really helpful. Thank you!","You can use while True: to create a while loop. -Inside, use an if/else to compare the input value and secret_number. If they match, print(""Well done, muggle! You are free now."") and break. Otherwise, print(""Ha ha! You're stuck in my loop!"") and continue",0.0,False,1,6829 -2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this. I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. -I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found.
-When i run which python on terminal, it returns -/Users/myname/anaconda3/bin/python -(when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) -When i run which idle on terminal, it returns -/usr/bin/idle (im not even sure how to find this directory from the terminal) -When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' -Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work on VSCode instead of IDLE. I cant seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","First: This would be a comment if I had enough reputation. -Second: I would just delete python. Everything. And reinstall it.",0.1352210990936997,False,3,6830 -2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this. I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. -I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found. 
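A quick way to diagnose this kind of interpreter mismatch is to print sys.executable from both the terminal and IDLE, and to always invoke pip through the interpreter you actually use. A small sketch:

```python
import subprocess
import sys

# The interpreter that is actually running this code:
print(sys.executable)

# Install or inspect packages for *this* interpreter, rather than for
# whichever "pip" happens to be found first on PATH:
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True,
)
print(result.stdout.strip())
```

Running the same two lines inside IDLE and comparing the printed paths shows immediately whether the terminal and IDLE point at the same Python.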
Though if i run python3 on terminal and type my code in, it works (so pandas has indeed been installed). -When i run which python on terminal, it returns -/Users/myname/anaconda3/bin/python -(when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) -When i run which idle on terminal, it returns -/usr/bin/idle (im not even sure how to find this directory from the terminal) -When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' -Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work on VSCode instead of IDLE. I cant seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","First of all, a quick description of anaconda: -Anaconda is meant to help you manage multiple python ""environments"", each one potentially having its own python version and installed packages (with their own respective versions). This is really useful in cases where you would like multiple python versions for different tasks or when there is some conflict in versions of packages, required by other ones. By default, anaconda creates a ""base"" environment with a specific python version, IDLE and pip. Also, anaconda provides an improved way (with respect to pip) of installing and managing packages via the command conda install . -For the rest, I will be using the word ""vanilla"" to refer to the python/installation that you manually set up, independent of anaconda. -Explanation of the problem: -Now, the problem arises since you also installed python independently. 
The details of the problem depend on how exactly you set up both python and anaconda, so I cannot tell you exactly what went wrong. Also, I am not an OSX user, so I have no idea how python is installed and what it downloads/sets alongside. -By your description however, it seems that the ""vanilla"" python installation did not overwrite neither your anaconda python nor anaconda's pip, but it did install IDLE and set it up to use this new python. -So right now, when you are downloading something via pip, only the python from anaconda is able to see that and not IDLE's python. -Possible solutions: -1. Quick fix: -Just run IDLE via /Users/myname/anaconda3/bin/idle3 every time. This one uses anaconda's python and should be able to see all packages installed via conda install of pip install (*). I get this is tiresome, but you don't have to delete anything. You can also set an ""alias"" in your ~/.bashrc file to make the command idle specifically linking you there. Let me know with a comment if you would like me to explain how to do that, as this answer will get too long and redundant. -2. Remove conda altogether (not recommended) -You can search google on how to uninstall anaconda along with everything that it has installed. What I do not know at this point is whether your ""vanilla"" python will become the default, whether you will need to also manually install pip again and whether there is the need to reinstall python in order for everything to work properly. -3. Remove your python ""vanilla"" installation and only use anaconda -Again, I do not know how python installation works in OSX, but it should be reasonably straightforward to uninstall it. The problem now is that probably you will not have a launcher for IDLE (since I am guessing anaconda doesn't provide one on OSX) but you will be able to use it via the terminal as described in 1.. -4. 
Last resort: -If everything fails, simply uninstall both your vanilla python (which I presume will also uninstall IDLE) and anaconda which will uninstall its own python, pip and idle versions. The relevant documentation should not be difficult to follow. Then, reinstall whichever you want anew. -Finally: -When you solve your problems, any IDE you choose, be it VSCode (I haven't used that either), PyCharm or something else, will probably be able to integrate with your installed python. There is no need to install a new python ""bundle"" with every IDE. - -(*): Since you said that after typing pip install pandas your anaconda's python can import pandas while IDLE cannot, I am assuming in my answer that pip is also the one that comes with anaconda. You can make sure this is the case by typing which pip which should point to an anaconda directory, probably /Users/myname/anaconda3/bin/pip",1.2,True,3,6830 -2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this. I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. -I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found. Though if i run python3 on terminal and type my code in, it works (so pandas has indeed been installed).
-When i run which python on terminal, it returns -/Users/myname/anaconda3/bin/python -(when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) -When i run which idle on terminal, it returns -/usr/bin/idle (im not even sure how to find this directory from the terminal) -When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' -Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work on VSCode instead of IDLE. I cant seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","To repeat and summarize what has been said in various other question answers: -1a. 3rd party packages are installed for a particular python(3).exe binary. -1b. To install multiple packages to multiple binaries, see the option from python -m pip -h. - -2. To find out which python binary is running, execute import sys; print(sys.executable). - -3a. For 3rd party package xyz usually installed in some_python/Lib/site-packages, IDLE itself has nothing to do with whether import xyz works. It only matters whether xyz is installed for 'somepython' (see 1a). -3b. To run IDLE with 'somepython', run somepython -m idlelib in a terminal or console. -somepython can be a name recognized by the OS or a path to a python executable.",0.0,False,3,6830 -2020-06-15 16:46:12.930,Why does os.system('cls') print 0,"hello, before I say anything I would like to let you know that I tried searching for the answer but I found nothing. -whenever I use os.system('cls') it clears the screen but it prints out a zero.
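The stray zero comes from the interactive interpreter echoing the return value of os.system, which is the command's exit status. A small demonstration, using a portable no-op command in place of cls:

```python
import os

status = os.system("exit 0")  # like os.system("cls"), but works everywhere
print(status)  # 0 because the command succeeded

# In a script nothing is echoed; in the REPL, assigning the result
# (or discarding it) suppresses the printed 0:
_ = os.system("exit 0")
```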
-is this normal, and if not, how do I stop it from doing that?","I guess you are running it inside an interpreter. -os.system will return: - -a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero) - -So it just prints the value it got, the return value of the cls command, which is 0 because the command ran successfully",0.2012947653214861,False,1,6831 -2020-06-15 22:24:45.037,VS Code - pylint is not running,"I have a workspace setup in VS Code where I do python development. I have linting enabled, pylint enabled as the provider, and lint on save enabled, but I continue to see no errors in the Problems panel. When I run pylint via the command line in the virtual environment I see a bunch of issues - so I know pylint works. I am also using black formatting (on save) which works without issue. I have tried using both the default pylint path as well as updating it manually to the exact location and still no results. When I look at the Output panel for python it looks like pylint is never even running (i.e. I see the commands for black running there but nothing for pylint). -My pylint version is 2.4.4 and VS Code version 1.46 -Any idea how to get this working?","Uninstall the Python extension -Reinstall the Python extension -And with that there will be one more extension, named ""Pylance""; don't forget to install that too. -Reload VS Code - -DONE !!",0.0,False,1,6832 -2020-06-16 06:08:53.517,Saving a File in an Atom Text Editor Folder,"This is my first time on stack overflow. I am a beginner python coder and I use the Atom text editor. I am currently learning from a book called Python Crash Course by Eric Matthes (second edition) and am developing a practice project called Alien Invasion. I am currently stuck on saving a file of a spaceship image into a folder named ""images"" within my text editor.
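If you would rather do the move programmatically than through the file manager, the standard library's shutil can do it. A small sketch; the paths are made-up examples and a placeholder file stands in for the real downloaded image:

```python
import shutil
from pathlib import Path

project = Path("alien_invasion_demo")      # your project folder
images = project / "images"
images.mkdir(parents=True, exist_ok=True)  # create images/ if it is missing

downloaded = project / "ship.bmp"          # pretend this was just downloaded
downloaded.write_bytes(b"BM")              # placeholder contents for the demo

shutil.move(str(downloaded), str(images / "ship.bmp"))
print((images / "ship.bmp").exists())  # True
```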
The file I am trying to save is called ship.bmp and the book instructions say ""Make a folder called images inside your main alien_invasion project folder. Save the file ship.bmp in the images folder."" I have the ship.bmp file saved but I just don't know how to transport it into a file within my text editor ""images"" folder. I have been stuck on this for quite a while and I would really appreciate it if someone could give me some advice. Thanks!","First of all you need to have the ship.bmp file downloaded somewhere on your computer. You then would need to move it into your project folder. I think that the easiest way for you to navigate through the files you have is to go to your ""Files"" app in the Chromebook. You should look through your Downloads folder for the ship.bmp after you download it and manually move it into the project folder that you are working on. You should be able to open your project folder and place the ship.bmp file inside the ""images"" folder.",0.0,False,1,6833 -2020-06-16 10:20:24.730,How does Python compare two lists of unequal length?,"I am aware of the following: - -[1,2,3]<[1,2,4] is True because Python does an element-wise comparison from left to right and 3 < 4 -[1,2,3]<[1,3,4] is True because 2 < 3 so Python never even bothers to compare 3 and 4 - -My question is how does Python's behavior change when I compare two lists of unequal length? - -[1,2,3]<[1,2,3,0] is True -[1,2,3]<[1,2,3,4] is True - -This led me to believe that the longer list is always greater than the shorter list. But then: - -[1,2,3]<[0,0,0,0] is False - -Can someone please explain how these comparisons are being done by Python? -My hunch is that element-wise comparisons are first attempted and only if the first n elements are the same in both lists (where n is the number of elements in the shorter list) does Python consider the longer list to be greater. 
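That hunch is easy to check directly in the interpreter:

```python
# Element-wise comparison decides first...
assert [1, 2, 3] < [1, 2, 4]           # 3 < 4
assert [1, 2, 3] < [1, 3, 4]           # 2 < 3, so the third elements are never compared
assert not ([1, 2, 3] < [0, 0, 0, 0])  # 1 > 0 decides immediately; length is irrelevant

# ...and length only breaks the tie when one list is a prefix of the other:
assert [1, 2, 3] < [1, 2, 3, 0]
assert [1, 2, 3] < [1, 2, 3, 4]
print("all comparisons behave as described")
```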
If someone could kindly confirm this or shed some light on the reason for this behavior, I'd be grateful.","The standard comparisons (<, <=, >, >=, ==, !=, in, not in) work exactly the same among lists, tuples and strings. -Lists are compared element by element. -If they are of unequal length, the comparison runs up to the last element of the shorter list. -If they are equal up to the length of the shorter one, the lengths are compared, i.e. the shorter list is the smaller one.",1.2,True,1,6834 -2020-06-16 18:37:38.617,Cannot install older versions of tensorflow: No matching distribution found for tensorflow==1.9.0,"I need to install older versions of tensorflow to get the deepface library to work properly, however whenever I run pip install tensorflow==1.9.0, I get: ERROR: Could not find a version that satisfies the requirement tensorflow==1.9.0 (from versions: 2.2.0rc1, 2.2.0rc2, 2.2.0rc3, 2.2.0rc4, 2.2.0) -Anyone else run into this issue/know how to fix it? Thanks!",You can install TensorFlow 1.9.0 with the following Python versions: 2.7 and 3.4 to 3.6.,0.6730655149877884,False,1,6835 -2020-06-17 20:14:07.813,Remove character '\xa0' while reading CSV file in python,I want to remove the non-ASCII Character '\xa0' while reading my CSV file using read_csv into a dataframe with python. Can someone tell me how to achieve this?,"You can use x = txt.replace(u'\xa0', u'') for text you're reading.",1.2,True,1,6836 -2020-06-17 21:33:19.697,"How to scrape over 50,000 data points from dynamically loading webpage in under 24 hours?","I am using selenium python and was wondering how one effectively scrapes over 50,000 data points in under 24 hours. For example, when I search for products on the webpage 'insight.com' it takes about 3.5 seconds for the scraper to search for the product and grab its price, meaning that with large amounts of data it takes the scraper several days.
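The threading idea mentioned in the question can be sketched with the standard library's concurrent.futures; here fetch_price is a made-up placeholder for the real per-product request:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_price(product_id):
    # Placeholder: a real version would issue an HTTP request here and
    # parse the price out of the response.
    return product_id, 9.99

product_ids = range(20)
with ThreadPoolExecutor(max_workers=8) as pool:
    prices = dict(pool.map(fetch_price, product_ids))

print(len(prices))  # 20
```

Because the workers spend most of their time waiting on the network, threads overlap that waiting; the pool size caps how many requests are in flight at once.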
Apart from using threads to simultaneously look up several products at the same time, how else can I speed up this process? -I only have one laptop and will have to simultaneously scrape six other similar websites, so I do not want too many threads, or the speed at which the computer operates will slow down significantly. -How do people manage to scrape large amounts of data in such short periods of time?","If you stop using the selenium module, and rather work with a much more sleek and elegant module, like requests, you could get the job done in a matter of mere minutes. -If you manage to reverse engineer the requests being handled, and send them yourself, you could pair this with threading to scrape at some 50 'data points' per second, more or less (depending on some factors, like processing and internet connection speed).",0.3869120172231254,False,2,6837 -2020-06-17 21:33:19.697,"How to scrape over 50,000 data points from dynamically loading webpage in under 24 hours?","I am using selenium python and was wondering how one effectively scrapes over 50,000 data points in under 24 hours. For example, when I search for products on the webpage 'insight.com' it takes about 3.5 seconds for the scraper to search for the product and grab its price, meaning that with large amounts of data it takes the scraper several days. Apart from using threads to simultaneously look up several products at the same time, how else can I speed up this process? -I only have one laptop and will have to simultaneously scrape six other similar websites, so I do not want too many threads, or the speed at which the computer operates will slow down significantly. -How do people manage to scrape large amounts of data in such short periods of time?","Find an API and use that!!! The goal of both web scraping and APIs is to access web data. -Web scraping allows you to extract data from any website through the use of web scraping software.
On the other hand, APIs give you direct access to the data you’d want. -As a result, you might find yourself in a scenario where there might not be an API to access the data you want, or the access to the API might be too limited or expensive. -In these scenarios, web scraping would allow you to access the data as long as it is available on a website. -For example, you could use a web scraper to extract product data information from Amazon since they do not provide an API for you to access this data. However, if you had access to an API, you could grab all the data you want, super, super, super fast!!! It's analogous to doing a query in a database on prem, which is very fast and very efficient, vs. refreshing a webpage, waiting for ALL elements to load, and you can't use the data until all elements have been loaded, and then.....do what you need to do.",0.2012947653214861,False,2,6837 -2020-06-18 02:49:23.653,How to efficiently query a large database on a hourly basis?,"Background: -I have multiple asset tables stored in a redshift database for each city, 8 cities in total. These asset tables display status updates on an hourly basis. 8 SQL tables and about 500 mil rows of data in a year. -(I also have access to the server that updates this data every minute.) - -Example: One market can have 20k assets displaying 480k (20k*24 hrs) status updates a day. - -These status updates are in a raw format and need to undergo a transformation process that is currently written in a SQL view. The end state is going into our BI tool (Tableau) for external stakeholders to look at. -Problem: -The current way the data is processed is slow and inefficient, and probably not realistic to run this job on an hourly basis in Tableau. The status transformation requires that I look back at 30 days of data, so I do need to look back at the history throughout the query. 
-Possible Solutions: -Here are some solutions that I think might work, I would like to get feedback on what makes the most sense in my situation. - -Run a python script as a cron job that looks at the most recent update and queries the last 30 days of the large history table, and send the result to a table in the redshift database. -Materialize the SQL view and run an incremental refresh every hour -Put the view in Tableau as a datasource and run an incremental refresh every hour - -Please let me know how you would approach this problem. My knowledge is in SQL, limited Data Engineering experience, Tableau (Prep & Desktop) and scripting in Python or R.","So first things first - you say that the data processing is ""slow and inefficient"" and ask how to efficiently query a large database. First I'd look at how to improve this process. You indicate that the process is based on the past 30 days of data - are the large tables time sorted, vacuumed and analyzed? It is important to take maximum advantage of metadata when working with large tables. Make sure your where clauses are effective at eliminating fact table blocks - don't rely on dimension table where clauses to select the date range. -Next look at your distribution keys and how these are impacting the need for your critical query to move large amounts of data across the network. The internode network has the lowest bandwidth in a Redshift cluster and needlessly pushing lots of data across it will make things slow and inefficient. Using EVEN distribution can be a performance killer depending on your query pattern. -Now let me get to your question and let me paraphrase - ""is it better to use summary tables, materialized views, or external storage (tableau datasource) to store summary data updated hourly?"" All 3 work and each has its own pros and cons.
- -Summary tables are good because you can select the distribution of the data storage and if this data needs to be combined with other database tables it can be done most efficiently. However, there is more data management to be performed to keep this data up to date and in sync. -Materialized views are nice as there is a lot less management action to worry about - when the data changes, just refresh the view. The data is still in the database so it is easy to combine with other data tables but since you don't have control over storage of the data these actions may not be the most efficient. -External storage is good in that the data is in your BI tool so if you need to refetch the results during the hour the data is local. However, it is now locked into your BI tool and far less efficient to combine with other database tables. - -Summary data usually isn't that large so how it is stored isn't a huge concern and I'm a bit lazy so I'd go with a materialized view. Like I said at the beginning, I'd first look at the ""slow and inefficient"" queries I'm running every hour. -Hope this helps",1.2,True,1,6838 -2020-06-18 14:48:27.287,How to send a HTML file as a table through outlook?,"I now have an HTML file and I want to send it as a table, not an attachment, by using outlook. The code that I found online only sends the file as an attachment.
Can anyone give me ideas on how to do it properly?",You can use the HTMLBody property of the MailItem class to set up the message body.,1.2,True,1,6839 -2020-06-18 03:51:03.357,Python idle to python.exe,"So I've made a script/code in python idle and want to run it on python.exe but whenever I do this you can see the python window pop up briefly for a second before closing, and I want to run my code using python instead of idle, how can I do this?","Since I can't comment yet: -go to the command line, open the file's directory, and type: python filename.py",1.2,True,1,6840 -2020-06-18 04:35:58.813,Using Selenium without using any browser,"I have been trying to do web automation using selenium. Is there any way to use a browser like Chrome or Firefox without actually installing them, like using some alternate options, or having portable versions of them? If I can use portable versions, how do I tell selenium to use them?","If you install pip install selenium -it comes with the portable chrome browser, no need to install any browser for this. -The browser shows the tag ""Chrome is being controlled by automated test software"" near the search bar",0.0,False,1,6841 -2020-06-18 06:04:06.180,Tkinter: How do I handle menus with many items?,"If I have a menu with too many items to fit on the screen, how do I get one of those 'more' buttons with a downward arrow at the bottom of the menu? Is that supported?","I solved my problem with cascading menus. I already had some, but I didn't want to use more for these particular menu items—but after closer inspection, I think it's better this way. -I'm still interested in other solutions, for scenarios where cascading menus are not a practical option, however (like if the screen is too narrow to cascade that far or something).
So, I don't plan to mark this as the accepted answer anytime soon (even though in most circumstances, it's probably the best solution).",-0.2012947653214861,False,1,6842 -2020-06-18 10:18:20.563,How to check if a QThread is alive or killed and restart it if it is killed in PyQt5?,"I have a PyQt5 application that updates database collections one by one using a QThread and sends an update signal to the main thread as each collection gets updated, to reflect it on the GUI. It runs continuously 24X7. But somehow the data stops getting updated and the GUI also stops getting signals. But the application is still running, as other parts are accessible and functioning properly. Also, no errors are found in the log file. -Mostly the application runs fine, but after some random period this problem arises (first time after approximately a month, then after 2 weeks and now after 23 days). However, restarting the application solves the problem. -I tried using the isRunning() method and the isFinished() method but no change was found. -Can anyone tell what is the problem?? Thank you in advance. -Also, how do I check whether the QThread is stuck or killed?","If any exception occurs in the thread, the thread can terminate early. -So you should set a timeout when calling any third-party library (the data update) in the thread. -That should solve your problem.",0.0,False,1,6843 -2020-06-18 12:22:55.510,Ngrok hostname SSL Certificate,"I am running a Flask API application, and I have an SSL Certificate. -When I run the flask server on localhost the certificate is applied by Flask successfully. -But when I use Ngrok to deploy localhost on a custom domain, the certificate is changed to *.ngrok.com; how can I change that to my certificate? -EDIT #1: -I already have a certificate for the new hostname and I have already applied it on Flask, but ngrok is changing it.","You expose your service through the URL *.ngrok.com. A browser or other client will make a request to *.ngrok.com.
The certificate presented there must be valid for *.ngrok.com. If *.ngrok.com presents a certificate for example.com, any valid HTTPS client would reject it because the names do not match, which by definition makes it an invalid certificate and is a flag for a potential security problem, exactly what HTTPS is designed to mitigate. -If you want to present your certificate for example.com to the client, you need to actually host your site at example.com",0.0,False,1,6844 -2020-06-18 14:48:27.287,Record sound without blocking Pygame UI,"I am making a simple Python utility that shows the tempo of a song (BPM) that is playing. I record short fragments of a few seconds to calculate the tempo over. The problem is that now I want to show this on a display using a Pygame UI, but when I'm recording sound, the UI does not respond. I want to make it so that the UI will stay responsive during the recording of the sound, and then update the value on the screen once the tempo over a new fragment has been calculated. How can I implement this? -I have looked at threading but I'm not sure this is the appropriate solution for this.","I'd use the python threading library. -Use the pygame module in the main thread (just the normal python shell, effectively) an create a separate thread for the function that determines BPM. -This BPM can then be saved to a global variable that can be accessed by PyGame for displaying.",1.2,True,1,6845 -2020-06-18 18:48:51.503,Text classification using Word2Vec,"I am in trouble to understand Word2Vec. I need to do a help desk text classification, based on what users complain in the help desk system. Each sentence has its own class. -I've seen some pre-trained word2vec files in the internet, but I don't know if is the best way to work since my problem is very specific. And my dataset is in Portuguese. -I'm considering that I will have to create my own model and I am in doubt on how to do that. 
Do I have to do it with the same words as the dataset I have with my sentences and classes? -In the first line, the column titles. Below the first line, I have the sentence and the class. Could anyone help me? I saw Gensim can create vector models, and it sounds good to me. But I am completely lost. - -: chamado,classe 'Prezados não estou conseguindo gerar uma nota fiscal - do módulo de estoque e custos.','ERP GESTÃO', 'Não consigo acessar o - ERP com meu usuário e senha.','ERP GESTÃO', 'Médico não consegue gerar - receituário no módulo de Medicina e segurança do trabalho.','ERP - GESTÃO', 'O produto 4589658 tinta holográfica não está disponível no - EIC e não consigo gerar a PO.','ERP GESTÃO',","Your inquiry is very general, and normally StackOverflow will be more able to help when you've tried specific things, and hit specific problems - so that you can provide exact code, errors, or shortfalls to ask about. -But in general: - -You might not need word2vec at all: there are many text-classification approaches that, with sufficient training data, may assign your texts to helpful classes without using word-vectors. You will likely want to try those first, then consider word-vectors as a later improvement. -For word-vectors to be helpful, they need to be based on your actual language, and also ideally your particular domain-of-concern. Generic word-vectors from news articles or even Wikipedia may not include the important lingo, and word-senses for your problem. But it's not too hard to train your own word-vectors – you just need a lot of varied, relevant texts that use the words in realistic, relevant contexts. So yes, you'd ideally train your word-vectors on the same texts you eventually want to classify. - -But mostly, if you're ""totally lost"", start with simpler text-classification examples. As you're using Python, examples based on scikit-learn may be most relevant.
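To make the ""start with simpler text-classification examples"" advice concrete, here is a toy bag-of-words classifier built only from the standard library (a real project would use scikit-learn instead; the training sentences below are invented English stand-ins for the Portuguese help-desk data):

```python
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (sentence, label) pairs; returns per-label word counts."""
    model = defaultdict(Counter)
    for sentence, label in samples:
        model[label].update(sentence.lower().split())
    return model

def classify(model, sentence):
    words = sentence.lower().split()
    # Pick the label whose training vocabulary overlaps the sentence the most.
    return max(model, key=lambda label: sum(model[label][w] for w in words))

samples = [
    ("cannot generate an invoice in the stock module", "ERP"),
    ("cannot log into the ERP with my user and password", "ERP"),
    ("printer on the third floor is out of toner", "HARDWARE"),
    ("my laptop screen is flickering", "HARDWARE"),
]
model = train(samples)
print(classify(model, "I cannot generate the invoice"))  # ERP
```

Once a baseline like this (or its scikit-learn equivalent) works end to end, swapping in word-vectors becomes an incremental improvement rather than the first hurdle.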
Adapt those to your data & goals, to familiarize yourself with all the steps & the ways of evaluating whether your changes are improving your end results or not. Then investigate techniques like word-vectors.",0.0,False,1,6846 -2020-06-19 18:05:21.573,Pyqt5 widget style similar to tkinter style,"I want to create a QWidget with a raised/sunken/groove/ridge relief similar to tkinter. I know how to do this in tkinter, but I don't know the style sheet option in PyQt5 for each one. Here is the tkinter option: -Widget = Tkinter.Button(top, text =""FLAT"", relief=raised ). Hope you can help me translate this to PyQt5.",You can do this with QFrame: you can call QFrame.setFrameShadow(QFrame.Sunken). But I couldn't find an equivalent for a plain QWidget.,0.0,False,1,6847 -2020-06-20 13:21:05.770,How to program NVIDIA's tensor cores in RTX GPU with python and numba?,"I am interested in using the tensor cores from NVIDIA RTX GPUs in Python to benefit from their speed-up in some scientific computations. Numba is a great library that allows programming kernels for CUDA, but I have not found how to use the tensor cores. Can it be done with Numba? If not, what should I use?",".... I have not found how to use the tensor cores. Can it be done with Numba? - -No. Numba presently doesn't have half precision support or tensor core intrinsic functions available in device code. - -If not, what should I use? - -I think you are going to be stuck with writing kernel code in the native C++ dialect and then using something like PyCUDA to run device code compiled from that C++ dialect.",1.2,True,1,6848 -2020-06-20 19:06:57.617,is it possible to run multiple http servers on one machine?,"Can I run multiple Python HTTP servers on one machine to receive HTTP POST requests from a webpage? 
-Currently I am running an HTTP server on port 80, and on the web page there is an HTML form which sends the HTTP POST request to the Python server. In the HTML form I am using my server's address like this: ""http://123.123.123.123"", and I am receiving the requests. -But I want to run multiple servers on the same machine, with a different port for each server. -If I run 2 more servers on ports 21200 and 21300, how do I send the POST request from the HTML form to a specified port, so that it is received and processed by the correct server? -Do I need to define the server address like this: ""http://123.123.123.123:21200"" and ""http://123.123.123.123:21300""?","Yes, you can run multiple webservers on one machine. -Use the following command to run one on a different port: -python3 -m http.server 4000 -Here 4000 is the port number; you can replace it with any port number.",1.2,True,1,6849 -2020-06-21 01:52:26.577,How to change API level when using buildozer?,"I just finished my app and made a release version with buildozer and signed it, but when I tried to upload my apk file to Google Play Console, it said that the API level of the app was 27 and it should be level 28. How can I fix this? -Thanks in advance",Find the line that says android.api = 27 in your buildozer.spec file and change it to 28.,0.0,False,2,6850 -2020-06-21 01:52:26.577,How to change API level when using buildozer?,"I just finished my app and made a release version with buildozer and signed it, but when I tried to upload my apk file to Google Play Console, it said that the API level of the app was 27 and it should be level 28. How can I fix this? -Thanks in advance","It should be edited in the buildozer.spec file. -If you scroll down, the default is 27; change it to the required level.",1.2,True,2,6850 -2020-06-21 11:00:41.357,Is there a plugin similar to gitlens for pycharm or other products?,"My question is very simple: as you can read in the title, I want a plugin similar to GitLens, which I found in VS Code. 
As you know, with GitLens you can easily see the difference between two or more commits. I searched and found GitToolBox, but I don't know how to install it, and I don't think it's quite like GitLens...","Open Settings in your JetBrains IDE. -Go to Plugins and look for GitToolBox. -Install it, and that's it!",0.0,False,1,6851 -2020-06-21 14:24:22.560,Sending Information from one Python file to another,"I would like to know how to perform the task described below. -I want to upload a CSV file to Python script 1, then send the file's path to another Python script in the same folder, which will perform the task and send the results back to script 1. -Working code would be very helpful, but any suggestion is also welcome.","You can import the script that edits the CSV into the other Python file, and then write a loop that edits the CSV file with your script 1 and then does whatever else you want with script 2. -This is an advantage of OOP: it makes these sorts of tasks very easy, because you define functions in a module file and then create a main file that runs those functions to edit the CSV files.",0.0,False,1,6852 -2020-06-21 14:56:48.953,I'm trying to figure out how to install this lib on python (time),"I'm new to Python and I was trying to install the ""time"" library. I typed -pip install time -but pip said this: -C:\Users\Giuseppe\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Python 3.6>pip install time ERROR: Could not find a version that satisfies the requirement time (from versions: none) ERROR: No matching distribution found for time -I don't know how to resolve this; can anyone help me? Please keep it as simple as you can, because I'm not very good at Python yet; as I said, I'm new. Thanks to everyone! -P.S. 
-the Python version is 3.6 -thanks everyone!","time is a module that comes built in with Python, so there is no need to install anything; just import it: -import time",0.1352210990936997,False,1,6853 -2020-06-21 15:07:21.757,How can I use a Chrome extension in my Selenium Python program?,"I'm trying to use a VPN extension with Selenium. I have the extension running, but I need to click the button to enable the VPN so it works. Is there a way to do that with Selenium? I'm thinking of using another similar option like Scrapy or pyautogui...","No, there is no way to enable the VPN in your extension directly. -If you want to use your VPN extension you have to set a profile (otherwise Selenium will create a new profile without the installed extension).",1.2,True,1,6854 -2020-06-21 15:10:15.180,I have completely messed up my Python Env and need help to start fresh,"Long story short, I messed with my Python environment too much (moving files around, creating new folders, trying to reinstall packages, deleting files, etc.). My google package doesn't work anymore. Every time I try to import the package, it says it can't find the module, even though I did a pip install. -I was wondering how I could do a hard reset: delete Python off my computer and reinstall it. -Thanks.","I figured it out. My pip was installing to a site-packages folder inside a local folder, while my Jupyter notebook was trying to pull from the Anaconda site-packages folder.",1.2,True,1,6855 -2020-06-22 19:54:41.410,Getting back cells after being deleted in Colab,"I often delete code in Colab by accident, and for some reason when I try to undo the deletion it does not work. When this happens I want to get my cells back somehow. Is there any way to do this, like taking a look at the code that Colab is running, because my cells are probably still there? Another option would be to somehow see cells that have previously been deleted. Please help me. 
Any other solutions would be nice.",You can undo deleting a cell in Google Colab simply by typing Ctrl+M Z,0.2012947653214861,False,1,6856 -2020-06-22 21:33:39.547,"Replace string with quotes, brackets, braces, and slashes in python","I have a string where I am trying to replace [""{\"" with [{"" and all \"" with "". -I am struggling to find the right syntax in order to do this; does anyone have a solid understanding of how to do this? -I am working with JSON, and I am inserting a string into the JSON properties. This caused it to put single quotes around the data inserted from my variable, and I need those single quotes gone. I tried to do json.dumps() on the data and do a string replace, but it does not work. -Any help is appreciated. Thank you.","I would recommend posting more of your code so we can suggest a better answer. Just based on the information you have provided, I would say that what you are looking for are escape characters. I may be able to help more once you provide us with more info!",0.0,False,2,6857 -2020-06-22 21:33:39.547,"Replace string with quotes, brackets, braces, and slashes in python","I have a string where I am trying to replace [""{\"" with [{"" and all \"" with "". -I am struggling to find the right syntax in order to do this; does anyone have a solid understanding of how to do this? -I am working with JSON, and I am inserting a string into the JSON properties. This caused it to put single quotes around the data inserted from my variable, and I need those single quotes gone. I tried to do json.dumps() on the data and do a string replace, but it does not work. -Any help is appreciated. Thank you.","If it is a two-character sequence you want to replace, you have to scan for the first character and then check that the second follows immediately after it (and so on for longer sequences); whenever the condition is satisfied, you shift the rest of the string left, shortening it by three characters in the first case and deleting the \ in the second case. -You can also find the particular substring using a built-in method and then use the replace() method to insert the string you want in its place",0.0,False,2,6857 -2020-06-23 15:32:04.937,How to calculate percentage in Python with very simple formula,"I've seen similar questions but it's shocking that I didn't see the answer I was, in fact, looking for. So here they are, both the question and the answer: -Q: -How to calculate a percentage simply in Python. -Say you need a tax calculator. To put it very simply, the tax is 18% of earnings. -So how much tax do I have to pay if I earn, say, 18342? The answer in math is that you divide by 100 and multiply the result by 18 (or multiply by 18 divided by 100). But how do you put that in code? -tax = earnings / 100 * 18 -Would that be quite right?","The answer that best fitted me, especially as it implied no import, was this: -tax = earnings * 0.18 -so if I earned 18342, and the tax was 18%, I should write: -tax = 18342 * 0.18 -which would result in 3301.56 -This seems trivial, I know, and probably some code was expected; moreover, this form is applicable not only in Python. But again, I didn't see the answer anywhere and I thought that it is, after all, the simplest.",0.0,False,1,6858 -2020-06-23 17:34:09.277,"In P4, how do i check if a change submitted to one branch is also submitted to another branch using command","I want to find out whether there is a p4 command that can find a CL submitted in one depot branch from a CL submitted in another depot branch. -For example: -if CL 123 was submitted to branch //code/v1.0/files/... -and the same code changes were also submitted to another branch //code/v5.0/files/... -can I find the CL in the 2nd branch from CL 123?","There are a few different methods; which one is easiest will depend on the exact context/requirements of what you're doing. -If you're interested in the specific lines of code rather than the metadata, p4 annotate is the best way. 
Use p4 describe 123 to see the lines of code changed in 123, and then p4 annotate -c v5.0/(file) to locate the same lines of code in v5.0 and see which changelist(s) introduced them into that branch. This method will work even if the changes were copied over manually instead of using Perforce's merge commands. -If you want to track the integration history (i.e. the metadata) rather than the exact lines of code (which may have been edited in the course of being merged between codelines, making the annotate method not work), the easiest method is to use the Revision Graph tool in P4V, which lets you visually inspect a file's branching history; you can select the revision from change 123 and use the ""highlight ancestors and descendants"" tool to see which revisions/changelists it is connected to in other codelines. This makes it easy to see the context of how many integration steps were involved, who did them, when they happened, whether there were edits in between, etc. -If you want to use the metadata but you're trying for a more automated solution, changes -i is a good tool. This will show you which changelists are included in another changelist via integration, so you can do p4 changes -i @123,123 to see the list of all the changes that contributed to change 123. On the other side (finding changelists in v5.0 that 123 contributed to), you could do this iteratively; run p4 changes -i @N,N for each changelist N in the v5.0 codeline, and see which of them include 123 in the output (it may be more than one).",0.6730655149877884,False,1,6859 -2020-06-24 01:35:15.063,Alpha_Vantage ts.get_daily ending with [0],"I am learning how to use Alpha_Vantage api and came across this line of code. I do not understand what is the purpose of [0]. -SATS = ts.get_daily('S58.SI', outputsize = ""full"")[0]","ts.get_daily() appears to return an array. 
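As a miniature stand-in (not the real alpha_vantage API), many client calls return a pair of (data, metadata), and the trailing [0] keeps only the data part:

```python
def get_daily(symbol):
    # Stand-in that mimics the (data, metadata) return shape of such clients.
    data = {'2020-06-23': 4.1, '2020-06-24': 4.2}
    meta = {'symbol': symbol, 'timezone': 'UTC'}
    return data, meta

SATS = get_daily('S58.SI')[0]   # index 0 selects the data, dropping the metadata
print(SATS['2020-06-24'])
```

Using get_daily('S58.SI')[1] instead would give you the metadata dictionary. 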
-SATS is getting the 0 index of the array (the first item in the array).",0.0,False,1,6860 -2020-06-24 06:47:05.090,how do I run two separate deep learning based model together?,"I trained a deep learning-based detection network to detect and locate some objects. I also trained a deep learning-based classification network to classify the color of the detected objects. Now I want to combine these two networks to detect the object and also classify its color. I have some problems with combining these two networks and running them together. How do I call classification while running detection? -They are in two different frameworks: the classifier is based on the Keras and TensorFlow backend, the detection is based on the OpenCV DNN module.","I have read your question, and from it I infer that your classification network takes its input from the output of your first network (the object locator), i.e. the located object from your first network is passed to the second network, which in turn classifies it into different colors. The entire pipeline you are using seems to be a sequential one. Your best bet is to first supply input to the first network, get its output, apply some trigger to activate the second network, feed the output of the first net into the second net, and lastly get the output of the second net. You can run these two networks on separate GPUs. -The trigger that calls the second function can be something as simple as saving the cropped object to local storage and having a function running that checks for any changes in the file structure (a new file being added). -If this function returns true you can grab that cropped object and run the second network with this image as input.",0.0,False,1,6861 -2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: -ModuleNotFoundError: No module named 'pandas' -[11084] Failed to execute script test1 -Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.",Have you tried pip install pandas?,0.2655860252697744,False,3,6862 -2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: -ModuleNotFoundError: No module named 'pandas' -[11084] Failed to execute script test1 -Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.","A script that runs in your IDE but not outside of it may mean you are actually working in a virtual environment. Pandas is probably not installed globally on your system. Try to remember whether you created a virtual environment and then installed pandas inside that virtual environment. 
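A quick way to confirm which interpreter and import path a script actually uses is to print them and compare the output between your IDE and the plain command line:

```python
import sys

print(sys.executable)   # the interpreter this script runs under
print(sys.prefix)       # differs from sys.base_prefix inside a venv
for p in sys.path:      # where imports (including site-packages) are searched
    print(p)
```

If the two runs print different paths, pip installed pandas into one environment while the executable was built from the other. 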
-Hope it helped, -Vijay.",1.2,True,3,6862 -2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: -ModuleNotFoundError: No module named 'pandas' -[11084] Failed to execute script test1 -Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.","For cx_Freeze, include pandas explicitly in the packages, like in the example below: -build_exe_options = {'packages': ['os', 'tkinter', 'pandas']} -This should include the pandas module in your build.",0.1352210990936997,False,3,6862 -2020-06-25 05:00:30.313,Is there a python code that I can add to my program that will add it to start in windows 10?,"Currently, I have been scouring the internet for code that will either add this program (something.exe) to the Windows Task Scheduler or, if that is not an option, add it to the Windows registry key for startup. I cannot find anything in terms of Python 3, and I really hope it is not an answer that is right in front of my face. Thanks!","Open the Windows scheduler -> select ""create basic task"" -> fill out the desired times -> input the path to the script you want to execute.",0.0,False,1,6863 -2020-06-25 06:15:17.920,How do I run a downloaded repository's config in Python?,"I am trying to use sunnyportal-py. Relatively new to Python, I do not understand step 2 in the README: -How to run - -Clone or download the repository. -Enter the directory and run: -PYTHONPATH=. 
./bin/sunnyportal2pvoutput --dry-run sunnyportal.config -Enter the requested information and verify that the script is able to connect to Sunny Portal. -The information is saved in sunnyportal.config and can be edited/deleted if you misstype anything. -Once it works, replace --dry-run with e.g. --output to upload the last seven days output data to pvoutput or --status to upload data for the current day. -Add --quiet to silence the output. - -Could anyone help me? I have gone into a cmd.exe in the folder I have downloaded, I don't know how to correctly write the python path in the correct location. What should I paste into the command line? Thanks! -Edit : I would like to be able to do this on Windows, do tell me if this is possible.","The command at bullet 2 is to be typed at the commandline (You need to be in windows: cmd or powershell, Linux: bash, etc.. to be able to do this). - -PYTHONPATH=. ./bin/sunnyportal2pvoutput --dry-run sunnyportal.config - -The first part of the command code above indicates where your program is located. Go to the specific folder via commandline (windows: cd:... ; where .. is your foldername) and type the command. -The second part is the command to be executed. Its behind the ""--"" dashes. The program knows what to do. In this case: - ---dry-run sunnyportal.config - -running a validation/config file to see if the program code itself works; as indicated by ""dry run"". -In your case type at the location (while in cmd): - -""sunnyportal2pvoutput --dry-run sunnyportal.config"" - -or - -""sunnyportal2pvoutput.py --dry-run sunnyportal.config"" (without the environment variables (python path) set). - -Note: the pythonpath is an environment variable. This can be added via: Control Panel\All Control Panel Items\System\ --> bullet Advanced System Settings --> button ""environment variables"". Then you can select to add it to ""Variables for user ""username"""" or ""system variables"". 
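To verify that a PYTHONPATH entry is actually picked up by Python, you can spawn a fresh interpreter with the variable set and inspect its sys.path; the directory name below is only a placeholder:

```python
import os
import subprocess
import sys

# Run a child interpreter with PYTHONPATH set to a placeholder directory.
env = dict(os.environ, PYTHONPATH='placeholder_dir')
out = subprocess.check_output(
    [sys.executable, '-c', 'import sys; print(sys.path)'],
    env=env,
)
# Entries from PYTHONPATH are prepended to the child interpreter's sys.path.
print('placeholder_dir' in out.decode())
```

If this prints True after you set the environment variable, the interpreter sees your folder and the script can be found. 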
Remember to reboot thereafter so the change takes effect. -Update 1 (pip install sunnyportal): - -go to cmd. -type ""pip search sunnyportal"" - -Result: - -Microsoft Windows [Version 10.0.18363.836] (c) 2019 Microsoft -Corporation. All rights reserved. -C:\Windows\System32>pip search -sunnyportal -sunnyportal-py (0.0.4) - A Python client for SMA sunny portal -C:\Windows\System32> - -If found, then type: -""pip install sunnyportal""",0.0,False,1,6864 -2020-06-25 08:51:15.257,Run one file among multiple files in azure webjobs,"I am trying to run a continuous Azure WebJob for Python. -I have 6 files, where main.py is the main file; the other files import each other internally, and everything is ultimately called from main.py. When I run the WebJob, only the first Python file gets executed, but I want only main.py to be executed when the WebJob starts. How do I achieve that?","This is quite simple. In an Azure WebJob, if a file name starts with run, that file has the highest priority to execute. -So the easiest way is just renaming main.py to run.py. -Or add a run.py, then call main.py from within it.",1.2,True,1,6865 -2020-06-25 10:32:25.647,How do you download online libraries on python?,I am trying to download YouTube videos using Python and for the code to work I need to install the pytube3 library but I am very new to coding so I am not sure how to do it.,"You could use: -python3 -m pip install pytube3",0.1352210990936997,False,1,6866 -2020-06-25 16:48:51.563,How to check if image contains text or not?,"Given any image of a scanned document, I want to check that it's not an empty page. -I know I can send it to AWS Textract - but that costs money for nothing. -I know I can use pytesseract, but maybe there is a more elegant and simple solution? 
-Or given a .html file that represents the text of the image - how do I check that it shows a blank page?","We can use pytesseract for this application by thresholding the image and passing it to tesseract. However, if you have a .html file that represents the text of the image, you can use beautifulsoup to extract the text from it and check whether it is empty. Still, this is a roundabout approach.",0.2012947653214861,False,1,6867 -2020-06-26 15:06:57.583,How to profile my APIs for concurrent requests?,"Scenario -Hi, I have a collection of APIs that I run on Postman using POST requests. The Flask and Redis servers are set up using Docker. -What I'm trying to do -I need to profile my setup/APIs in a high traffic environment. So, - -I need to create concurrent requests calling these APIs - -The profiling aims to get the system conditions with respect to memory (total memory consumed by the application), time (total execution time taken to create and execute the requests) and CPU time (or the percentage of CPU consumption) - - -What I have tried -I am familiar with some memory profilers like mprof and time profilers like line_profiler. But I could not find a profiler for CPU consumption. I have run the above two profilers (mprof and line_profiler) on a single execution to get line-by-line profiling results for my code. But this focuses on function-wise results. I have also created parallel requests earlier using asyncio, etc., but that was for some simple API-like programs without Postman. My current APIs work with a lot of data in the body section of Postman. -Where did I get stuck -With Docker, this problem gets trickier for me. 
-Firstly, I am unable to get concurrent requests - -I do not know how to profile my APIs when using Postman (perhaps there is an option to do it without Postman) with respect to the three parameters: time, memory and CPU consumption.","I suppose that you've been using the embedded Flask server (the dev server), which is NOT production ready and, by default, handles only one request at a time. For concurrent requests you should be looking at gunicorn or some other WSGI server like uWSGI. -Postman is only a client of your API; I don't see its importance here. If you want to do a stress test or something like that, you can write your own script or use known tools like JMeter. -Hope it helps!",0.0,False,1,6868 -2020-06-26 17:16:07.260,How to send clickable link and Mail in Chatterbot flask app,"I am using chatterbot and I want to send a clickable link and Mail depending on the message sent by the user. I can't find any link or reference on how to do this",Try using linkify: pip install autolink and then linkify(bot.get_response(usr_text)),1.2,True,1,6869 -2020-06-26 17:58:13.033,How to train a model for recognizing two objects?,"I've got two separate models, one for mask recognition and another for face recognition. The problem now is: how do I combine both models so that they perform in unison as a single model which is able to: - -Recognize whether or not a person is wearing a mask -Simultaneously recognize who that person is if he isn't wearing a mask, apart from warning about no mask. - -What are the possibilities I have to solve this problem?",You don't have to combine the two models and train them together; you have to train them separately. 
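The two-stage decision can be sketched in a few lines; every function here is a placeholder for a trained model, and the threshold is an assumed value to tune on validation data:

```python
MASK_THRESHOLD = 0.5  # assumed cut-off; tune on validation data

def mask_probability(image):
    # placeholder for the trained mask-detection model
    return 0.42

def recognise_face(image):
    # placeholder for the trained face-recognition model
    return 'person_17'

def process(image):
    p = mask_probability(image)
    if p >= MASK_THRESHOLD:
        return ('mask', None)
    # low mask confidence: fall back to identifying the person
    return ('no_mask', recognise_face(image))

print(process('frame.jpg'))  # ('no_mask', 'person_17')
```

The two models stay separate; only the inference code chains them. 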
And after training you first have to check with the mask detection model what the probability/confidence score is that a mask is detected; if the probability is low (say 40%-45%) then you have to use the other model that recognises the person.,0.2012947653214861,False,1,6870 -2020-06-26 20:38:05.160,model for hand written text recognition,"I have been attempting to create a model that, given an image, can read the text from it. I am attempting to do this by implementing a CNN, RNN, and CTC. I am doing this with TensorFlow and Keras. There are a couple of things I am confused about. For reading single digits, I understand that the last layer in the model should have 10 nodes, one per digit, since those are the options. However, for reading words, aren't there infinitely many options, so how many nodes should I have in my last layer? Also, I am confused as to how I should add CTC to my Keras model. Is it as a loss function?","I see two options here: - -You can construct your model to recognize the separate letters of those words; then there are as many nodes in the last layer as there are letters and symbols in the alphabet that your model will read. -You can make the output of your model a vector and then ""decode"" this vector using some other tool that can encode/decode words as vectors. One such tool I can think of is word2vec. Or there's the option to download some database of possible words and create such a tool yourself. -The description of your model is very vague. If you want more specific help, you should provide more info, e.g. some model architecture.",0.0,False,1,6871 -2020-06-27 04:24:24.573,creating an api to update postgres db with a json or yaml payload,"I decided to ask this here after googling for hours. I want to create my own API endpoint on my own server. -Essentially I want to be able to just send a yaml payload to my server, and when it is received I want to kick off my python scripts to parse the data and update the database. 
I'd also like to be able to retrieve data with a different call. I can code the back-end stuff; I just don't know how to make the bridge between hitting the server from outside and having the server do the things in the back-end in Python. -Is Django the right way? I've spent a couple of days doing Django tutorials, really cool stuff, but I don't really need a website right now. Whenever I search for web and Python together, Django pretty much always comes up. I don't need any Python code help, just some direction on how to create that bridge. -Thanks.",DRF was what I was looking for; as suggested.,1.2,True,1,6872 -2020-06-28 12:56:23.870,PySimpleGui: how to remove event-delay in Listboxes?,"When reading events from a simple button in PySimpleGui, spamming this button with mouse clicks will generate an event for each of the clicks. -When you try to do the same with Listboxes (by setting enable_events to True for this element), it seems like there is a timeout after each generated event. If you click once every second, it will generate all the events. But if you spam-click it like before, it will only generate the first event. -I'm not sure if this behavior is intended (I only started learning PySimpleGui today), but is there a way to get rid of this delay? I tried checking the docs but can't find it mentioned anywhere.","I think the reason is that a Listbox reacts to click events, but also to double-click events. A Button does not. This behavior seems consistent with that.",0.0,False,1,6873 -2020-06-28 19:56:59.520,How to start multiple py files (2 discord bots) from one file at once,"I'm wondering how I would run my 2 discord bots at once from the main app.py file. -And after I kill that process (the main file's process), they should both stop. -Tried os.system, didn't work. Tried multiple subprocess.Popen, didn't work. -Am I doing something wrong? -How would I do that?","I think the good design is to have one bot per .py file. 
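That said, if you do want a single entry point that starts both bots and stops them together, a rough sketch with subprocess (the bot file names are placeholders):

```python
import subprocess
import sys

def launch(script):
    # start a bot script with the same interpreter as the launcher
    return subprocess.Popen([sys.executable, script])

if __name__ == '__main__':
    procs = [launch('bot1.py'), launch('bot2.py')]
    try:
        for p in procs:
            p.wait()
    except KeyboardInterrupt:
        # killing the launcher stops both children
        for p in procs:
            p.terminate()
```

Because the bots run as child processes, terminating the launcher process lets you terminate both of them. 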
If they both need code that is in app.py, they should import the common code. That way you can just run both bot1.py and bot2.py.",0.0,False,1,6874 -2020-06-28 21:34:07.527,pip3 install of Jupyter and Notebook problem when running,"I have tried all of the things here on stack and on other sites with no joy... -I'd appreciate any suggestions please. -I have installed Jupyter and Notebook using pip3 - please note that I have updated pip3 before doing so. -However when trying to check the version of both jupyter --version and notebook --version my terminal is returning no command found. I have also tried to run jupyter, notebook and jupyter notebook and I am still getting the same message. -I have spent nearly two days now trying to sort this out... I'm on the verge of giving up. -I have a feeling it has something to do with my PATH variable maybe not pointing to where the jupyter executable is stored but I don't know how to find out where notebook and jupyter are stored on my system. -many thanks in advance -Bobby","Have you tried locate jupyter? It may tell you where jupyter is on your system. -Also, why not try installing Jupyter via Anaconda to avoid the hassle?",0.0814518047658113,False,2,6875 -2020-06-28 21:34:07.527,pip3 install of Jupyter and Notebook problem when running,"I have tried all of the things here on stack and on other sites with no joy... -I'd appreciate any suggestions please. -I have installed Jupyter and Notebook using pip3 - please note that I have updated pip3 before doing so. -However when trying to check the version of both jupyter --version and notebook --version my terminal is returning no command found. I have also tried to run jupyter, notebook and jupyter notebook and I am still getting the same message. -I have spent nearly two days now trying to sort this out... I'm on the verge of giving up. 
-I have a feeling it has something to do with my PATH variable maybe not pointing to where the jupyter executable is stored but I don't know how to find out where notebook and jupyter are stored on my system. -many thanks in advance -Bobby","So to summarise this is what I have found on this issue (in my experience): -to run the jupyter app you can use the jupyter-notebook command and this works, but why? This is because, the jupyter-notebook is stored in usr/local/bin which is normally always stored in the PATH variable. -I then discovered that the jupyter notebook or jupyter --version command will now work if I did the following: - -open my ./bash_profile file -add the following to the bottom of the file: export PATH=$PATH:/Users/your-home-directory/Library/Python/3.7/bin - -this should add the location of where jupyter is located to your path variable. -Alternatively, as suggested by @HackLab we can also do the following: - -python3 -m jupyter notebook - -Hopefully, this will give anyone else having the same issues I had an easier time resolving this issue.",1.2,True,2,6875 -2020-06-30 01:16:27.903,How do I use a cron job in order to insert events into google calendar?,"I wrote a Python script that allows me to retrieve calendar events from an externally connected source and insert them into my Google Calendar thanks to the Google Calendar's API. It works locally when I execute the script from my command line, but I would like to make it happen automatically so that the externally added events pop up in my Google Calendar automatically. -It appears that a cron job is the best way to do this, and given I used Google Calendar's API, I thought it might be helpful to use Cloud Functions with Cloud Scheduler in order to make it happen. 
However, I really don't know where to start and if this is even possible because accessing the API requires OAuth with Google to my personal Google account, which is something I don't think a service account (which I think I need) can do on my behalf. -What are the steps I need to take in order to allow the script, which I manually run and which authenticates me with Google Calendar, to run every 60 seconds, ideally in the cloud, so that I don't need to have my computer on at all times? -Things I’ve tried to do: -I created a service account with full permissions and tried to create an http-trigger event that would theoretically run the script when the created URL is hit. However, it just returns an HTTP 500 Error. -I tried doing Pub/Sub event targets to listen and execute the script, but that doesn’t work either. -Something I’m confused about: -with either account, there needs to be a credentials.json file in order to login; how does this file get “deployed” alongside the main function? Along with the token.pickle file that gets created when the authentication happens for the first time.","The way a service account works is that it needs to be preauthorized. You would take the service account email address and share a calendar with it like you would with any other user. The catch here is that you should only be doing this with calendars you the developer control. If these are calendars owned by others you shouldn't be using a service account. -The way OAuth2 works is that a user is displayed a consent screen to grant your application access to their data. Once the user has granted you access, and assuming you requested offline access, you should have a refresh token for that user's account. Using the refresh token you can request a new access token at any time. So the trick here would be storing the user's refresh tokens in a place that your script can access; then, when the cron job runs, the first thing it needs to do is request a new access token using its refresh token. 
-So the only way you will be able to do this as a cron job is if you have a refresh token stored for the account you want to access. Otherwise it will require opening a web browser to request the user's consent, and you can't do that with a cron job.",0.6730655149877884,False,1,6876 -2020-06-30 08:51:32.650,Python FBX SDK – How to enable auto-complete?,"I am using Pycharm to code with Python FBX SDK, but I don't know how to enable auto-complete. I have to look at the documentation for function members. It's very tedious. So, does anyone know how to enable auto-complete for Python FBX SDK in an editor? -Thanks!","Copy these two files -[PATH_TO_YOUR_MOBU]\bin\config\Python\pyfbsdk_gen_doc.py -[PATH_TO_YOUR_MOBU]\bin\config\Python\pyfbsdk_additions.py -to another folder like -d:\pyfbsdk_autocomplete for instance. -Rename the file pyfbsdk_gen_doc.py to pyfbsdk.py. -Add the folder to your interpreter paths in PyCharm. (Interpreter Settings, Show All, Show paths for interpreter)",1.2,True,1,6877 -2020-07-01 02:37:30.927,Must I install django for every single project I make?,"I am new to the Python programming language and Django. I am learning about web development with Django, however, each time I create a new project in PyCharm, it doesn't recognize the django module, so I have to install it again. Is this normal? Because I've installed django like 5 times. It doesn't seem correct to me, there must be a way to install Django once and for all and not have the necessity of using 'pip install django' for each new project I create. I am sure there must be a way but I totally ignore it, I think I have to add django to PATH but I really don't know how (just guessing). I will be thankful if anyone can help me :)","PyCharm runs in a venv. A venv is an isolated duplicate (sort of) of python (the interpreter) and other scripts. To use your main interpreter, change your interpreter location. The three folders (where your project is, along with your other files) are just that. 
I think there is an option to inherit packages. I like to create a file called requirements.txt and put all my modules there. Comment for further help. -In conclusion, this is normal.",1.2,True,1,6878 -2020-07-01 22:53:41.403,How to show messages in Python?,"I am new to Django and trying to create an Application. -My scenario is: -I have a form on which there are many items and the user can click on Add to Cart to add those items to the Cart. I am validating that if the user is logged in, then only should the item be added to the Cart; else a message or dialogue box must appear saying please login or sign up first. -I was able to verify the authentication but was somehow not able to show the message if the user is not logged in. -For now I tried the below things: - -Using session messages, but somehow it needs so many places to take care of when to delete or when to show the message -Tried using the Django Messages Framework; I checked all the configuration in settings.py and everything seems correct but somehow it is not showing up on the HTML form - -Can anyone help me here? -I want to know an approach where I can authenticate the user and, if the user is not logged in, a dialogue box or message should appear saying Please login or Signup. It should go when the user refreshes the page.","If you are using render() for views.py you could add a boolean value to the context, -i.e. render(request, ""template_name.html"", {""is_auth"": True}) -Presumably you are doing auth on the server side, so you could tackle it this way. -Not a great fix but might help.",0.0,False,1,6879 -2020-07-02 20:11:13.097,installing Opencv on Mac Catalina,"I have successfully installed opencv 4.3.0 on my Mac OS Catalina, python 3.8 is installed also, but when I try to import cv2, I get the Module not found error. -Please how do I fix this? 
-thanks in advance.",Can you try pip install opencv-python?,0.0,False,2,6880 -2020-07-02 20:11:13.097,installing Opencv on Mac Catalina,"I have successfully installed opencv 4.3.0 on my Mac OS Catalina, python 3.8 is installed also, but when I try to import cv2, I get the Module not found error. -Please how do I fix this? -thanks in advance.","I was having issue with installing opencv in my Macbook - python version 3.6 ( i downgraded it for TF 2.0) and MacOs Mojave 10.14. Brew , conda and pip - none of the three seemed to work for me. So i went to [https://pypi.org/project/opencv-python/#files] and downloaded the .whl that was suitable for my combo of python and MacOs versions. Post this navigated to the folder where it was downloaded and executed pip install ./opencv_python-4.3.0.36-cp36-cp36m-macosx_10_9_x86_64.whl",0.0,False,2,6880 -2020-07-02 22:37:28.507,DIY HPC cluster to run Jupyter/Python notebooks,"I recently migrated my Python / Jupyter work from a macbook to a refurbrished Gen 8 HP rackmounted server (192GB DDR3 2 x 8C Xeon E5-2600), which I got off amazon for $400. The extra CPU cores have dramatically improved the speed of fitting my models particularly for decision tree ensembles that I tend to use a lot. I am now thinking of buying additional servers from that era (early-mid 2010s) (either dual or quad-socket intel xeon E5, E7 v1/v2) and wiring them up as a small HPC cluster in my apartment. Here's what I need help deciding: - -Is this a bad idea? Am I better off buying a GPU (like a gtx 1080). The reason I am reluctant to go the GPU route is that I tend to rely on sklearn a lot (that's pretty much the only thing I know and use). And from what I understand model training on gpus is not currently a part of the sklearn ecosystem. All my code is written in numpy/pandas/sklearn. So, there will be a steep learning curve and backward compatibility issues. Am I wrong about this? - -Assuming (1) is true and CPUs are indeed better for me in the short term. 
How do I build the cluster and run Jupyter notebooks on it. Is it as simple as buying an additional server. Designating one of the servers as the head node. Connecting the servers through ethernet. Installing Centos / Rocks on both machines. And starting the Jupyter server with IPython Parallel (?). - -Assuming (2) is true, or at least partly true. What other hardware / software do I need to get? Do I need an ethernet switch? Or if I am connecting only two machines, there's no need for it? Or do I need a minimum of three machines to utilize the extra CPU cores and thus need a switch? Do I need to install Centos / Rocks? Or are there better, more modern alternatives for the software layer. For context, right now I use openSUSE on the HP server, and I am pretty much a rookie when it comes to operating systems and networking. - -How homogeneous should my hardware be? Can I mix and match different frequency CPUs and memory across the machines? For example, having 1600 MHz DDR3 memory in one machine, 1333 MHz DDR3 in another? Or using 2.9 GHz E5-2600v1 and 2.6 GHz E5-2600v2 CPUs? - -Should I be worried about power? I.e. can I safely plug three rackmounted servers in the same power strip in my apartment? There's one outlet that I know if I plug my hairdryer in, the lights go out. So I should probably avoid that one :) Seriously, how do I run 2-3 multi-CPU machines under load and avoid tripping the circuit breaker? - - -Thank you.","Nvidia's rapids.ai implements a fair bit of sklearn on gpus. Whether that is the part you use, only you can say. - -Using Jupiter notebooks for production is known to be a mistake. - -You don't need a switch unless latency is a serious issue, it rarely is. - -Completely irrelevant. - -For old hardware of the sort you are considering, you will be having VERY high power bills. 
But worse, since you will have many not-so-new machines, the probability of some component failing at any given time is high, so unless you seek a future in computer maintenance, this is not a great idea. A better idea is: develop your idea on your macbook/existing cluster, then rent an AWS spot instance (or two or three) for a couple of days. Cheaper, no muss, no fuss. Everything just works.",1.2,True,1,6881 -2020-07-03 10:03:02.433,How to reformat the date text in each individual box of a column?,"I recently converted a list of roughly 1200 items (1200 rows) and a problem arose when I looked at the date of each individual item and realised that the day and month were before the year, which meant that ordering them by date would be useless. Is there any way I can reorder over 1200 dates so that they can be formatted correctly without me having to manually do it? Would I have to use python? I am very new to that and I don't know how to use it really. -Here's an example of what I get: -September 9 2016 -And this is what I want: -2016 September 9 -I am also using Microsoft Excel if anyone was asking.","It must be the date format. -You can split the date parts into other cells and re-merge them in your preferred format...",0.0,False,1,6882 -2020-07-03 15:06:50.723,How to convert py file to apk?,"I have created a calculator in Python using the Tkinter module. Though I converted it to exe, I am not able to convert it to apk. Please tell me how to do so?",I personally haven't seen anyone do that. I think it would be best to try and re-make your calculator in the Kivy framework if you want to later turn it into an APK using buildozer. Tkinter is decent for beginners, but if you want to have nice desktop UIs use PyQt5, and if you're interested in making mobile apps use Kivy. Tkinter is just a way to dip into using GUIs in python.,0.3869120172231254,False,1,6883 -2020-07-04 03:40:27.593,How to diagnose inconsistent S3 permission errors,"I'm running a Python script in an AWS Lambda function. 
It is triggered by SQS messages that tell the script certain objects to load from an S3 bucket for further processing. -The permissions seem to be set up correctly, with a bucket policy that allows the Lambda's execution role to do any action on any object in the bucket. And the Lambda can access everything most of the time. The objects are being loaded via pandas and s3fs: pandas.read_csv(f's3://{s3_bucket}/{object_key}'). -However, when a new object is uploaded to the S3 bucket, the Lambda can't access it at first. The botocore SDK throws An error occurred (403) when calling the HeadObject operation: Forbidden when trying to access the object. Repeated invocations (even 50+) of the Lambda over several minutes (via SQS) give the same error. However, when invoking the Lambda with a different SQS message (that loads different objects from S3), and then re-invoking with the original message, the Lambda can suddenly access the S3 object (that previously failed every time). All subsequent attempts to access this object from the Lambda then succeed. -I'm at a loss for what could cause this. This repeatable 3-step process (1) fail on newly-uploaded object, 2) run with other objects 3) succeed on the original objects) can happen all on one Lambda container (they're all in one CloudWatch log stream, which seems to correlate with Lambda containers). So, it doesn't seem to be from needing a fresh Lambda container/instance. -Thoughts or ideas on how to further debug this?","Amazon S3 is an object storage system, not a filesystem. It is accessible via API calls that perform actions like GetObject, PutObject and ListBucket. -Utilities like s3fs allow an Amazon S3 bucket to be 'mounted' as a file system. However, behind the scenes s3fs makes normal API calls like any other program would. -This can sometimes (often?) lead to problems, especially where files are being quickly created, updated and deleted. 
It can take some time for s3fs to update S3 to match what is expected from a local filesystem. -Therefore, it is not recommended to use tools like s3fs to 'mount' S3 as a filesystem, especially for Production use. It is better to call the AWS API directly.",1.2,True,1,6884 -2020-07-06 20:18:01.003,Spyder - how to execute python script in the current console?,"I've updated conda and spyder to the latest versions. -I want to execute python scripts (using F5 hotkey) in the current console. -However, the new spyder behaves unexpectedly, for example, if I enter in a console a=5 and then run test.py script that only contains a command print(a), there is an error: NameError: name 'a' is not defined. -In the configuration options (command+F6) I've checked ""Execute in current console"" option. -I am wondering why is this happening? -Conda 4.8.2, Spyder 4.0.1","In the preferences, run settings, there is a ""General settings"", in which you can (hopefully still) deactivate ""Remove all variables before execution"". -I even think to remember that this is new, so it makes sense.",0.0,False,2,6885 -2020-07-06 20:18:01.003,Spyder - how to execute python script in the current console?,"I've updated conda and spyder to the latest versions. -I want to execute python scripts (using F5 hotkey) in the current console. -However, the new spyder behaves unexpectedly, for example, if I enter in a console a=5 and then run test.py script that only contains a command print(a), there is an error: NameError: name 'a' is not defined. -In the configuration options (command+F6) I've checked ""Execute in current console"" option. -I am wondering why is this happening? 
-Conda 4.8.2, Spyder 4.0.1","I figured out the answer: -In run configuration (command+F6) there is another option that needs to be checked: ""Run in console's namespace instead of empty one""",1.2,True,2,6885 -2020-07-06 20:45:20.950,Resampling data from 1280 Hz to 240 Hz in python,"I have a python list of force data that was sampled at 1280 Hz, and I have to get it to exactly 240 Hz in order to match it exactly with a video that was filmed at 240 Hz. I was thinking about downsampling to 160 Hz and then upsampling through interpolation to 240 Hz. Does anyone have any ideas on how to go about doing this? Exact answers not needed, just an idea of where to look to find out how.","Don't downsample and then upsample again; that would lead to unnecessary information loss. -Use np.fft.rfft for a discrete Fourier transform; zero-pad in the frequency domain so that you oversample 3x to a sampling frequency of 3840 Hz. (Keep in mind that rfft will return an odd number of frequencies for an even number of input samples.) You can apply a low-pass filter in the frequency domain, making sure you block everything at or above 120 Hz (the Nyquist frequency for a 240 Hz sampling rate). Now use np.fft.irfft to transform back to a time-domain signal at 3840 Hz sampling rate. Because 240 Hz is exactly 16x lower than 3840 Hz and because the low-pass filter guarantees that there is no content above the Nyquist frequency, you can safely take every 16th sample.",1.2,True,1,6886 -2020-07-07 09:52:29.370,how does one normalize a TensorFlow `Dataset` pipeline?,"I have my dataset in a TensorFlow Dataset pipeline and I am wondering how I can normalize it. The problem is that in order to normalize you need to load your entire dataset, which is the exact opposite of what the TensorFlow Dataset is for. -So how exactly does one normalize a TensorFlow Dataset pipeline? And how do I apply it to new data? (i.e. data used to make a new prediction)","You do not need to normalise the entire dataset at once. 
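To make that concrete, here is a rough sketch of the per-element idea using plain Python generators as stand-ins for Dataset.map and Dataset.batch (the real pipeline would use tf.data; dividing by 255.0 assumes 8-bit image data):

```python
def map_fn(dataset, fn):
    # Stand-in for tf.data.Dataset.map: transform one element at a time,
    # so the whole dataset never has to sit in memory.
    for element in dataset:
        yield fn(element)

def batch(dataset, batch_size):
    # Stand-in for tf.data.Dataset.batch: group already-normalised elements.
    chunk = []
    for element in dataset:
        chunk.append(element)
        if len(chunk) == batch_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

pixels = [0, 51, 102, 255]  # pretend image data, one pixel value per element
pipeline = batch(map_fn(pixels, lambda p: p / 255.0), 2)
print(list(pipeline))  # [[0.0, 0.2], [0.4, 1.0]]
```

In tf.data itself this corresponds to something like dataset.map(lambda x: x / 255.0).batch(batch_size), and you apply the same map function to new prediction data so training and inference see identically scaled inputs.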
-Depending on the type of data you work with, you can use a .map() function whose sole purpose is to normalise that specific batch of data you are working with (for instance divide by 255.0 each pixel within an image. -You can use, for instance, map(preprocess_function_1).map(preprocess_function_2).batch(batch_size), where preprocess_function_1 and preprocess_function_2 are two different functions that preprocess a Tensor. If you use .batch(batch_size) then the preprocessing functions are applied sequentially on batch_size number of elements, you do not need to alter the entire dataset prior to using tf.data.Dataset()",0.2012947653214861,False,1,6887 -2020-07-07 11:19:47.523,Python Selenium bot to view Instagram stories | How can i click the profiles of people that have active stories?,"I have this Instagram bot that is made using Python and Selenium, It log into Instagram, goes to a profile, select the last post and select the ""other x people liked this photo"" to show the complete list of the people that liked the post(it can be done with the follower of the page too). -Now I am stuck because I don't know how can i make the bot click only the profiles that have active stories and how to make it scroll down (the problem is that the way that i found to click on the profiles works just with the first one profile because when I click on the profile it opens the stories and closes the post, so when i reopen the post and the list of like on this post it will reclick on the same profile that I have already seen the stories of). -Does someone know how to do that or a similar thing maybe something even better that I didn't thinked of? -I don't think code is needed but if you need I will post it, just let me know.","Have you tried to use the ""back"" button on your browser window? 
Or open the page in a new tab, so you have still the old one to go back to.",0.3869120172231254,False,1,6888 -2020-07-08 04:22:54.717,How do we get the output when 1 filter convolutes over 3 images?,"Imagine, that I have a 28 x 28 size grayscale image.. Now if I apply a Keras Convolutional layer with 3 filters and 3X3 size with 1X1 stride, I will get 3 images as output. Now if I again apply a Keras Convolutional layer with only 1 filter and 3X3 size and 1X1 stride, so how will this one 3X3 filter convolute over these 3 images and then how will we get one image.. -What I think is that, the one filter will convolute over each of the 3 images resulting in 3 images, then it adds all of the three images to get the one output image. -I am using tensorflow backend of keras. please excuse my grammar, And Please Help me.","Answering my own question: -I figured out that the one filter convolutes over 3 images, it results in 3 images, but then these these images pixel values are added together to get one resultant image.. -You can indeed check by outputting 3 images for 3 filters on 1 image. when you add these 3 images yourself (matrix addition), and plot it, the resultant image makes a lot of sense.",1.2,True,1,6889 -2020-07-08 09:52:48.397,How to rank images based on pairs of comparisons with SVM?,"I'm working on a neural network to predict scores on how ""good"" the images are. The images are the inputs to another machine learning algorithm, and the app needs to tell the user how good the image they are taking is for that algorithm. -I have a training dataset, and I need to rank these images so I can have a score for each one for the regression neural network to train. -I created a program that gives me 2 images from the training set at a time and I will decide which one wins (or ties). I heard that the full rank can be obtained from these comparisons using SVM Ranking. However, I haven't really worked with SVMs before. I only know the very basics of SVMs. 
I read a few articles on SVM Ranking and it seems like the algorithm turns the ranking problem to a classification problem, but the maths really confuses me. -Can anyone explain how it works in simple terms and how to implement it in Python?","I did some more poking around on the internet, and found the solution. -The problem was how to transform this ranking problem to a classification problem. This is actually very simple. -If you have images (don't have to be images though, can be anything) A and B, and A is better than B, then we can have (A, B, 1). If B is better, then we have (A, B, -1) -And we just need a normal SVM to take the names of the 2 images in and classify 1 or -1. That's it. -After we train this model, we can give it all the possible pairs of images from the dataset and generating the full rank will be simple.",1.2,True,1,6890 -2020-07-08 11:14:08.523,Efficient way to remove half of the duplicate items in a list,"If I have a list say l = [1, 8, 8, 8, 1, 3, 3, 8] and it's guaranteed that every element occurs an even number of times, how do I make a list with all elements of l now occurring n/2 times. So since 1 occurred 2 times, it should now occur once. Since 8 occurs 4 times, it should now occur twice. Since 3 occurred twice, it should occur once. -So the new list will be something like k=[1,8,8,3] -What is the fastest way to do this? -I did list.count() for every element but it was very slow.","Instead of using a counter, which keeps track of an integer for each possible element of the list, try mapping elements to booleans using a dictionary. 
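For instance, a minimal sketch of that dictionary idea (single O(n) pass, order preserved):

```python
def halve_duplicates(items):
    keep = {}    # element -> True if this occurrence should be kept
    result = []
    for x in items:
        keep[x] = not keep.get(x, False)  # flip the bit on every occurrence
        if keep[x]:                       # keep the 1st, 3rd, 5th, ... sighting
            result.append(x)
    return result

print(halve_duplicates([1, 8, 8, 8, 1, 3, 3, 8]))  # [1, 8, 8, 3]
```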
Map to true the first time they're seen, and then every time after that flip the bit, and if it's true skip the element.",0.2336958171850616,False,2,6891 -2020-07-08 11:14:08.523,Efficient way to remove half of the duplicate items in a list,"If I have a list say l = [1, 8, 8, 8, 1, 3, 3, 8] and it's guaranteed that every element occurs an even number of times, how do I make a list with all elements of l now occurring n/2 times. So since 1 occurred 2 times, it should now occur once. Since 8 occurs 4 times, it should now occur twice. Since 3 occurred twice, it should occur once. -So the new list will be something like k=[1,8,8,3] -What is the fastest way to do this? -I did list.count() for every element but it was very slow.","I like using a trie set, as you need to detect duplicates to remove them, or a big hash set (lots of buckets). The trie does not go unbalanced and you do not need to know the size of the final set. An alternative is a very parallel sort -- brute force.",0.0340004944420038,False,2,6891 -2020-07-08 16:42:47.570,how to get position of thumb (in pixels) inside of vertical scale widget relatively upper right corner?,"Is there a way to get a position of thumb in pixels in vertical scale widget relative to upper right corner of widget? I want a label with scale value to pop up next to thumb when mouse pointer hovering over it, for this I need thumb coordinates.","The coords method returns the location along the trough corresponding to a particular value. -This is from the canonical documentation for the coords method: - -Returns a list whose elements are the x and y coordinates of the point along the centerline of the trough that corresponds to value. If value is omitted then the scale's current value is used. - -Note: you asked for coordinates relative to upper-right corner. These coordinates are relative to the upper-left. 
You can get the width of the widget with winfo_width() and do a simple transformation.",1.2,True,1,6892 -2020-07-09 10:59:20.653,user interaction with django,"I'm working on a question and answer system with django. My problem: I want the app to get a question from an ontology and, according to the user's answer, get the next question. How can I have all the questions and the user's answers displayed? I'm new to django, I don't know if I can use sessions with unauthenticated users and if I need to use websockets with the django channels library.","Given that you want to work with anonymous users, the simplest way to go is to add a hidden field on the page and use it to track the user's progress. The field can contain a virtual session id that will point at a model record in the backend, or the entire Q/A session (ugly but fast and easy). Using REST or sockets would require a similar approach. -I can't tell off the top of my head if you can step on top of the built-in session system. It will work for registered users, but I do believe that for anonymous users it gets reset on refresh (may be wrong here).",0.3869120172231254,False,1,6893 -2020-07-09 22:14:52.293,How do I use external applications to scrape data from a mobile app?,"I am trying to scrape data from a mobile application (Pokemon HOME). The app shows usage statistics and other useful statistics that I want to scrape. I want to scrape this on my computer using python. -I am having trouble determining how to scrape data from a mobile application. I tried using Fiddler and an Android emulator to intercept server data but I am unfamiliar with the software to be able to understand what exactly to do. -Any help would be very beneficial. Even just suggestions for resources where I can learn how to do this on my own. Thank you!","It's possible but it's really a hard nut to crack. There's a huge difference between a mobile app and a web app. -A web app is accessible through a WAN, viz. a wide area network. 
Scraping it is comparatively easy. -In Python, you can use bs4 to do it. -But a mobile app is essentially more about the LAN. -It's installed locally. -Install an app to remote control your device from another device (usually requires root). -However, the whole data might not be available.",0.0,False,1,6894 -2020-07-09 23:28:48.100,How does python collections accept multiple data types?,"The most popular python version is CPython, written in C. What I want to know is how is it possible to write a python collection using C when C arrays can only store one type of data at the same time?","This is not how python does it in C, but I've written a small interpreted language in Java (which also only allows arrays/lists with 1 data type) and implemented mixed-type lists. I had a Value interface and a class for each type of value, and those classes implemented the Value interface. I had a FunctionValue class, a StringValue class, a BooleanValue class, and a ListValue class, all of which implemented the Value interface. The ListValue class has a field of type List which contains the list's elements. All methods on the Value interface and its implementing classes which do stuff like numeric addition, string appending, list access, function calling, etc. initially take in Value objects and do different things based on which actual kind of Value it is. -You could do something similar in C, albeit at a lower level since it doesn't have interfaces and classes to help you manage your types.",0.0,False,1,6895 -2020-07-10 20:37:35.990,Python same Network Card Game,"So I'm doing this python basics course and my final project is to create a card game. At the bottom of the instructions I get this - -For extra credit, allow 2 players to play on two different computers that are on the same network. Two people should be able to start identical versions of your program, and enter the internal IP address of the user on the network who they want to play against. 
The two applications should communicate with each other, across the network using simple HTTP requests. Try this library to send requests: - - -http://docs.python-requests.org/en/master/ - - -http://docs.python-requests.org/en/master/user/quickstart/ - - -And try Flask to receive them: - - -http://flask.pocoo.org/ - - -The 2-player game should only start if one person has challenged the other (by entering their internal IP address), and the 2nd person has accepted the challenge. The exact flow of the challenge mechanism is up to you. - -I already investigated how flask works and kind of understand how python-requests works too. I just can't figure out how to make those two work together. If somebody could explain what should I do or tell me what to watch or read I would really appreciate it.","it would be nice to see how far you've come before answer (as hmm suggested you in a comment), but i can tell you something theorical about this. -What you are talking about is a client-server application, where server need to elaborate the result of clients actions. -What i can suggest is to learn about REST API, that you can use to let client and server to communicate in a easy way. Your clients will send http requests to server exposed APIs. -From what you wrote, you have a basically constraints that should be respected during client and server communication, here reasumed: - -Someone search for your ip and send you a challenge request - -You have received a challenge that you refuse or accept; only if you accept the challenge you can start the game - - -As you can see from the project specifications the entire challenge mechanism is up to you, so you can decide the best for you. -I would begin start thinking to a possible protocol that make use of REST API to start initial communication between client and server and let you define a basic challenge mechanism. 
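To make the challenge flow concrete, here is a rough stdlib-only sketch of a challenge endpoint (the route name and reply text are invented; in the assignment you would use Flask on the receiving side and requests on the sending side, but the shape is the same):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real game would inspect the challenger's IP and ask the local
        # player to accept; here every challenge is auto-accepted.
        if self.path.startswith('/challenge'):
            body, status = b'accepted', 200
        else:
            body, status = b'unknown', 404
        self.send_response(status)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Each player runs a server like this so the other player can reach them.
server = HTTPServer(('127.0.0.1', 0), ChallengeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The challenger's side: send the challenge as a plain HTTP request.
url = 'http://127.0.0.1:%d/challenge?from=192.168.0.7' % server.server_port
reply = urllib.request.urlopen(url).read()
print(reply.decode())  # the 2-player game starts only on an accepted reply
server.shutdown()
```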
-Enjoy programming :).",0.0,False,1,6896 -2020-07-11 14:03:16.807,Putting .exe file in windows autorun with python,"I'm writing an installer for my program with python. -When everything is extracted, how can I make my program's .exe file run at Windows startup? -I want to make it fully automatic, without any user input. -Thanks.","You don't need to use Python for this. You can copy your .exe file and paste it in this directory: - -C:\Users\YourUsername\AppData\Roaming\Microsoft\Windows\Start -Menu\Programs\Startup - -It will run automatically when your computer starts.",0.0,False,1,6897 -2020-07-12 16:23:15.963,"What's the difference between calling pip as a command line command, and calling it as a module of the python command?","When installing python modules, I seem to have two possible command line commands to do so. -pip install {module} -and -py -{version} -m pip install {module} -I suppose this can be helpful for selecting which version of python has installed which modules? But there's rarely a case where I wouldn't want a module installed for all possible versions. -Also the former method seems to have a pesky habit of being out-of-date no matter how many times I call: -pip install pip --upgrade -So are these separate? Does the former just call the latest version of the latter?","So pip install {module} is callable if you have already installed pip. pip install pip --upgrade upgrades pip itself, and if you replace pip with a module name it will upgrade that module to the most recent version. 
The py -{version} -m pip install {module} form is useful if you have installed many versions of python - for example, most Linux servers come with python 2 installed, so when you install Python 3 and want to install a module for version 3, you will have to call that command.",0.0,False,2,6898 -2020-07-12 16:23:15.963,"What's the difference between calling pip as a command line command, and calling it as a module of the python command?","When installing python modules, I seem to have two possible command line commands to do so. -pip install {module} -and -py -{version} -m pip install {module} -I suppose this can be helpful for selecting which version of python has installed which modules? But there's rarely a case where I wouldn't want a module installed for all possible versions. -Also the former method seems to have a pesky habit of being out-of-date no matter how many times I call: -pip install pip --upgrade -So are these separate? Does the former just call the latest version of the latter?","TLDR: Prefer ... -m pip to always install modules for a specific Python version/environment. - -The pip command executes the equivalent of ... -m pip. However, bare pip does not let you select which Python version/environment to install to – the first match in your executable search path is selected. This may be the most recent Python installation, a virtual environment, or any other Python installation. -Use the ... -m pip variant in order to select the Python version/environment for which to install a module.",0.5457054096481145,False,2,6898 -2020-07-13 03:40:35.067,how to get names of all detected models from existing tensorflow lite instance?,"I'm looking to build a system that alerts me when there's a package at my front door. I already have a solution for detecting when there's a package (tflite), but I don't know how to get the array of detected objects from the existing tflite process and then pull out an object's title through the array. 
Is this even possible, or am I doing this wrong? -Also, the tflite model Google gives does not know how to detect packages, but I'll train my own for that",I've figured out a solution. I can just use the same array that the label-drawing function uses - labels[int(classes[i])] - to get the name of the object at place i of the array (not sure if I'm using the correct terminology). Hopefully this will help someone,0.0,False,1,6899 -2020-07-13 04:19:48.737,Upgrading pycharm venv python version,"I have Python 3.6 in my venv on PyCharm. However, I want to change that to Python 3.8. I have already installed 3.8, so how do I change my venv Python version? -I am on Windows 10. -Changing the version in the project interpreter settings seems to run using the new venv, not my existing venv with all the packages I have installed. Attempting to add a new interpreter also results in the ""OK"" button being greyed out, possibly due to the current venv being not empty.","In PyCharm you can do the following steps: - -Go to File-->Settings-->Python Interpreter -Select a different Python environment from the drop down if one is already available; if not, click on ""Add"". -Select the New Environment option, then in Base interpreter you can select the 3.8 version",0.2012947653214861,False,1,6900 -2020-07-13 11:13:34.620,How to embed my python chatbot to a website,"I am very new to Python, and I am trying to create a chatbot with Python for a school project. -I am almost done with creating my chatbot, but I don't know how to create a website to display it. I know how to create a website with Flask, but how can I embed the chatbot code into the website?","In your Flask code you can embed the chatbot's predict functions into specific routes of your Flask app. This requires the following steps: -Just before you start the Flask server, you train the chatbot to ensure its predict function works properly. -After that you can specify some more route-functions for your Flask app. 
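A minimal sketch of such a route (chatbot_predict here is an illustrative placeholder for your model's predict function, not a real library call):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def chatbot_predict(text):
    # stand-in for the trained chatbot's predict function
    return "You said: " + text

@app.route("/chat")
def chat():
    # grab the user's message from a query parameter
    user_input = request.args.get("message", "")
    # run it through the chatbot and return the reply as JSON
    return jsonify(reply=chatbot_predict(user_input))
```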
-In those functions you grab input from the user (from for example route parameters), send it through the chatbots predict function and then send the respons (probably with postprocessing if you wish) back to the requester. -Sending to the requester can be done through many different ways. -Two examples just of my head would be via display (render_template) to the webpage (if the request came in over GET-Request via usual browser site-opening request) or by sending a request to the users ip itself. -As a first hand experience i coupled the later mechanism to a telegram bot on my home-automation via post-request which itself then sends the response to me via telegram.",0.0,False,1,6901 -2020-07-13 12:20:28.610,two versions of python installed at two places,"I had uninstalled python 3.8 from my system and installed 3.7.x -But after running the command where python and where python3 in the cmd I get two different locations. -I was facing issues regarding having two versions of python. So I would like to know how i can completely remove python3 located files.","To delete a specific python version, you can use which python and remove the python folder using sudo rm -rf . You might also have to modify the PATH env variable to the location which contains the python executables of the version you want. -Or you can install Anaconda [https://www.anaconda.com/products/individual] which helps to manage multiple versions of python for you.",0.0,False,1,6902 -2020-07-14 20:41:18.337,How to encrypt data using the password from User,"I have a flask site. It's specifically a note app. At the moment I am storing the user notes as plaintext. That means that anyone with access to the server which is me has access to the notes. I want to encrypt the data with the user password, so that only the user can access it using their password, but that would require the user to input his/her password each time they save their notes, retrive the notes or even updates them. 
I am hashing the password obviously. -Anyone has any idea how this could be done?","Use session to store user information, the Flask-Login extension would be a good choice for you.",-0.2012947653214861,False,1,6903 -2020-07-15 03:10:47.947,I have a visual studio code terminal problem how do i fix it so that i have the integrated one instead of external?,"I'm using VS Code on Windows 10. I had no problems until a few hours ago (at the time of post), whenever I want to run a python program, it opens terminals outside of VS Code like Win32 and Git Bash. How do I change it back to the integrated terminal I usually had?","With your Python file open in VS Code: - -Go to Run > Open Configurations, if you get prompted select ""Python File"" -In the launch.json file, change the value of ""console"" to ""integratedTerminal""",0.3869120172231254,False,1,6904 -2020-07-15 12:26:42.943,How can I remove/delete a virtual python environment created with virtualenv in Windows 10?,"I want to learn how to remove a virtual environment using the windows command prompt, I know that I can easily remove the folder of the environment. But I want to know if there is a more professional way to do it.","There is no command to remove virtualenv, you can deactivate it or remove the folder but unfortunately virtualenv library doesn't contain any kind of removal functionality.",1.2,True,1,6905 -2020-07-16 07:00:18.590,"In NumPy, how to use a float that is larger than float64's max value?","I have a calculation that may result in very, very large numbers, that won fit into a float64. I thought about using np.longdouble but that may not be large enough either. -I'm not so interested in precision (just 8 digits would do for me). It's the decimal part that won't fit. And I need to have an array of those. -Is there a way to represent / hold an unlimited size number, say, only limited by the available memory? 
Or if not, what is the absolute max value I can place in an numpy array?","Can you rework the calculation so it works with the logarithms of the numbers instead? -That's pretty much how the built-in floats work in any case... -You would only convert the number back to linear for display, at which point you'd separate the integer and fractional parts; the fractional part gets exponentiated as normal to give the 8 digits of precision, and the integer part goes into the ""×10ⁿ"" or ""×eⁿ"" or ""×2ⁿ"" part of the output (depending on what base logarithm you use).",1.2,True,1,6906 -2020-07-16 15:46:39.480,Why does the dimensions of Kivy app changes after deployment?,"As mentioned in the question, I build a kivy app and deploy it to my android phone. The app works perfectly on my laptop but after deploying it the font size changes all of a sudden and become very small. -I can't debug this since everything works fine. The only problem is this design or rather the UI. -Does anyone had this issue before? Do you have a suggestion how to deal with it? -PS: I can't provide a reproducible code here since everything works fine. I assume it is a limitation of the framework but I'm not sure.","It sounds like you coded everything in terms of pixel sizes (the default units for most things). The difference on the phone is probably just that the pixels are smaller. -Use the kivy.metrics.dp helper function to apply a rough scaling according to pixel density. You'll probably find that if you currently have e.g. width: 50, on the desktop then width: dp(50) will look the same while on the phone it will be twice as big as before. - -PS: I can't provide a reproducible code here since everything works fine. 
- -Providing a minimal runnable example would, in fact, have let the reader verify whether you were attempting to compensate for pixel density.",1.2,True,1,6907 -2020-07-16 16:58:29.950,Adding files to gitignore in Visual Studio Code,"In Visual Studio Code, with git extensions installed, how do you add files or complete folders to the .gitignore file so the files do not show up in untracked changes. Specifically, using Python projects, how do you add the pycache folder and its contents to the .gitignore. I have tried right-clicking in the folder in explorer panel but the pop-menu has no git ignore menu option. Thanks in advance. -Edit: I know how to do it from the command line. Yes, just edit the .gitignore file. I was just asking how it can be done from within VS Code IDE using the git extension for VS Code.","So after further investigation, it is possible to add files from the pycache folder to the .gitignore file from within VS Code by using the list of untracked changed files in the 'source control' panel. You right-click a file and select add to .gitignore from the pop-up menu. You can't add folders but just the individual files.",1.2,True,1,6908 -2020-07-17 06:35:43.907,how to get proper formatted string?,"if I print the string in command prompt I I'm getting it i proper structure -""connectionstring""."""".""OT"".""ORDERS"".""SALESMAN_ID"" -but when I write it to json, I'm getting it in below format -\""connectionstring\"".\""\"".\""OT\"".\""ORDERS\"".\""SALESMAN_ID\"" -how to remove those escape characters? -when It's happening?","What is happening? -Json serialization and de-serialization is happening. -From wikipedia: -In the context of data storage, serialization (or serialisation) is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later. [...] 
-The opposite operation, extracting a data structure from a series of bytes, is deserialization. -In console, you de-serialize the json but when storing in file, you serialize the json.",0.0,False,1,6909 -2020-07-17 11:57:33.973,how do we check similarity between hash values of two audio files in python?,"About the data : -we have 2 video files which are same and audio of these files is also same but they differ in quality. -that is one is in 128kbps and 320kbps respectively. -we have used ffmpeg to extract the audio from video, and generated the hash values for both the audio file using the code : ffmpeg -loglevel error -i 320kbps.wav -map 0 -f hash - -the output was : SHA256=4c77a4a73f9fa99ee219f0019e99a367c4ab72242623f10d1dc35d12f3be726c -similarly we did it for another audio file to which we have to compare , -C:\FFMPEG>ffmpeg -loglevel error -i 128kbps.wav -map 0 -f hash - -SHA256=f8ca7622da40473d375765e1d4337bdf035441bbd01187b69e4d059514b2d69a -Now we know that these audio files and hash values are different but we want to know how much different/similar they are actually , for eg: like some distance in a-b is say 3 -can someone help with this?","You cannot use a SHA256 hash for this. This is intentional. It would weaken the security of the hash if you could. what you suggest is akin to differential cryptoanalysis. SHA256 is a modern cryptographic hash, and designed to be safe against such attacks.",0.2012947653214861,False,1,6910 -2020-07-17 19:42:44.647,Add Kivy Widgets Gradually,"I would like to ask how could I add dynamically some widgets in my application one by one and not all at once. Those widgets are added in a for loop which contains the add_widget() command, and is triggered by a button. -So I would like to know if there is a way for the output to be shown gradually, and not all at once, in the end of the execution. Initially I tried to add a delay inside the for loop, but I'm afraid it has to do with the way the output is built each time. 
-EDIT: Well, it seems that I hadn't understood well the use of Clock.schedule_interval and Clock.schedule_once, so what I had tried with them (or with time.sleep) didn't succeed at all. But obviously, this was the solution to my problem.",Use Clock.schedule_interval or Clock.schedule_once to schedule each iteration of the loop at your desired time spacing.,1.2,True,1,6911 -2020-07-18 01:31:21.407,Why isn't lst.sort().reverse() valid?,"Per title. I do not understand why it is not valid. I understand that they mutate the object, but if you call the sort method, after it's done then you'd call the reverse method so it should be fine. Why is it then that I need to type lst.sort() then on the line below, lst.reverse()? -Edit: Well, when it's pointed out like that, it's a bit embarrassing how I didn't get it before. I literally recognize that it mutated the object and thus returns a None, but I suppose it didn't register that also meant that you can't reverse a None-type object.","When you call lst.sort(), it does not return anything, it changes the list itself. -So the result of lst.sort() is None, thus you try to reverse None which is impossible.",1.2,True,1,6912 -2020-07-18 05:52:32.897,Converting numpy boolean array to binary array,"I have a boolean numpy array which I need to convert it to binary, therefore where there is true it should be 255 and where it is false it should be 0. -Can someone point me out how to write the code?","Let x be your data in numpy array Boolean format. -Try -np.where(x,255,0)",0.0,False,1,6913 -2020-07-18 16:00:43.153,"df['colimn_name'] vs df.loc[:, 'colimn_name']","I would like more info. on the answer to the following question: - -df[‘Name’] and 2. 
df.loc[:, ‘Name’], where: - -df = pd.DataFrame(['aa', 'bb', 'xx', 'uu'], [21, 16, 50, 33], columns = ['Name', 'Age']) -Choose the correct option: - -1 is the view of original dataframe and 2 is a copy of original -dataframe -2 is the view of original dataframe and 1 is a copy of -original dataframe -Both are copies of original dataframe -Both are views of original dataframe - -I found more than one answer online but not sure. I think the answer is number 2 but when i tried x = df['name'] then x[0] = 'cc' then print(df) I saw that the change appeared in the original dataframe. So how the changed appeared in the original dataframe although I also got this warining: -A value is trying to be set on a copy of a slice from a DataFrame -I just want to know more about the difference between the two and weather one is really a copy of the original dataframe or not. Thank you.","Both are the views of original dataframe -One can be used to add more columns in dataframe and one is used for specifically getting a view of a cell or row or column in dataframe.",0.0,False,1,6914 -2020-07-19 11:57:34.290,In-memory database and programming language memory management / garbage collection,"I've been reading about in-memory databases and how they use RAM instead of disk-storage. -I'm trying to understand the pros and cons of building an in-memory database with different programming languages, particularly Java and Python. What would each implementation offer in terms of speed, efficiency, memory management and garbage collection? -I think I could write a program in Python faster, but I'm not sure what additional benefits it would generate. -I would imagine the language with a faster or more efficient memory management / garbage collection algorithm would be a better system to use because that would free up resources for my in-memory database. From my basic understanding I think Java's algorithm might be more efficient that Python's at freeing up memory. 
Would this be a correct assumption? -Cheers","You choose an in-memory database for performance, right? An in-memory database written in C/C++ that provides an API for Java and/or Python won't have GC issues. Many (most?) financial systems are sensitive to latency and 'jitter'. GC exacerbates jitter.",0.0,False,1,6915 -2020-07-20 08:27:36.160,How to know the response data type of API using requests,"I have one simple question: is there an easy way to know the type of an API's response? -For example: -Using the requests post method to send API requests, some APIs will return data in .xml format and some in .json format, -so how can I know the response type so I can choose not to convert to .json using json() when the response type is .xml?",Use r.headers.get('content-type') to get the response type,1.2,True,1,6916 -2020-07-20 14:58:08.290,Calculating how much area of an ellipsis is covered by a certain pixel in Python,"I am working with Python and currently trying to figure out the following: If I place an ellipse of which the semi-axes, the centre's location and the orientation are known, on a pixel map, and the ellipse is large enough to cover multiple pixels, how do I figure out which pixel covers which percentage of the total area of the ellipse? As an example, let's take a map of 10*10 pixels (i.e. interval of [0,9]) and an ellipse with the centre at (6.5, 6.5), semi-axes of (0.5, 1.5) and an orientation angle of 30° between the horizontal and the semi-major axis. I have honestly no idea, so any help is appreciated. -edit: To clarify, the pixels (or cells) have an area. I know the area of the ellipse, its position and its orientation, and I want to find out how much of its area is located within pixel 1, how much is within pixel 2, etc.","This is a math problem; try Mathematics Stack Exchange rather than Stack Overflow. -I suggest you transform the plane: a translation to get the centre in the middle, a rotation to get the ellipse's axes onto the x-y ones, and a dilation on x to get a circle. 
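If you'd rather stay numerical, here is a brute-force sketch using the numbers from the question. It sub-samples each pixel rather than doing the transform exactly, and assumes pixel (i, j) covers the unit square [i, i+1) x [j, j+1) - adjust if your convention centres pixels on integer coordinates:

```python
import numpy as np

cx, cy = 6.5, 6.5        # ellipse centre (from the question)
a, b = 0.5, 1.5          # semi-axes
theta = np.deg2rad(30)   # angle between the horizontal and the semi-major axis

n = 50                   # sub-samples per pixel edge
coverage = np.zeros((10, 10))
for px in range(10):
    for py in range(10):
        # sub-sample the pixel [px, px+1) x [py, py+1)
        xs = px + (np.arange(n) + 0.5) / n
        ys = py + (np.arange(n) + 0.5) / n
        X, Y = np.meshgrid(xs, ys)
        # rotate the samples into the ellipse's own frame
        u = (X - cx) * np.cos(theta) + (Y - cy) * np.sin(theta)
        v = -(X - cx) * np.sin(theta) + (Y - cy) * np.cos(theta)
        inside = (u / a) ** 2 + (v / b) ** 2 <= 1.0
        # fraction of this pixel lying inside the ellipse
        coverage[py, px] = inside.mean()

# share of the ellipse's total area falling in each pixel
fractions = coverage / coverage.sum()
```

coverage gives the fraction of each pixel covered; fractions normalises that into the percentage of the ellipse's area per pixel.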
-And then work with a circle on rhombus tiles. -Your problem won't be less or more tractable in the new formulation, but the math and code you have to work on will be slightly lighter.",0.0,False,1,6917 -2020-07-20 17:32:32.860,How to dynamically inject HTML code in Django,"In a project of mine I need to create an online encyclopedia. In order to do so, I need to create a page for each entry file, which are all written in Markdown, so I have to convert them to HTML before sending them to the website. I didn't want to use external libraries for this, so I wrote my own Python code that receives a Markdown file and returns a list with all the lines already formatted in HTML. The problem now is that I don't know how to inject this code into the template I have in Django; when I pass the list to it, the lines are just printed like normal text. I know I could make my function write to an .html file, but I don't think it's a great solution thinking about scalability. -Is there a way to dynamically inject HTML in Django? Is there a ""better"" approach to my problem?","You could use the safe filter in your template! So it would look like this: -Assuming you have your HTML in a string variable called my_html, then in your template just write -{{ my_html | safe }} -(safe is a built-in template filter, so no extra import is needed)",1.2,True,1,6918 -2020-07-21 09:12:16.213,EnvironmentNotWritableError on Windows 10,"I am trying to get the python-utils package and utils module to work in my anaconda3. However, whenever I open my Anaconda Powershell and try to install the package it fails with the message - -EnvironmentNotWritableError: The current user does not have write permissions to the target environment. -environment location: C:\ProgramData\Anaconda3 - -I searched for solutions and was advised that I update conda. -However, when I ran the command below - -conda update -n base -c defaults conda - -it also failed with EnvironmentNotWritableError showing. 
-Then I found a comment that says maybe my conda isn't installed at some places, so I tried - -conda install conda - -which got the same error. -Then I tried - -conda install -c conda-forge python-utils - -which also failed with the same error. -Maybe it's the problem with setting paths? but I don't know how to set them. All I know about paths is that I can type - -sys.path - -and get where Anaconda3 is running.","I have got the same non writable error in anaconda prompt for downloading pandas,then sorted the the error by running anaconda prompt as administrator. it worked for me since i already had that path variable in environment path",0.3869120172231254,False,2,6919 -2020-07-21 09:12:16.213,EnvironmentNotWritableError on Windows 10,"I am trying to get python-utils package and utils module work in my anaconda3. However, whenever I open my Anaconda Powershell and try to install the package it fails with the comment - -EnvironmentNotWritableError: The current user does not have write permissions to the target environment. -environment location: C:\ProgramData\Anaconda3 - -I searched for solutions and was advised that I update conda. -However, when I ran the comment below - -conda update -n base -c defaults conda - -it also failed with EnvironmentNotWritableError showing. -Then I found a comment that says maybe my conda isn't installed at some places, so I tried - -conda install conda - -which got the same error. -Then I tried - -conda install -c conda-forge python-utils - -which also failed with the same error. -Maybe it's the problem with setting paths? but I don't know how to set them. All I know about paths is that I can type - -sys.path - -and get where Anaconda3 is running.",Run the PowerShell as Administrator. Right Click on the PowerShell -> Choose to Run as Administrator. 
Then you'll be able to install the required packages.,1.2,True,2,6919 -2020-07-21 19:42:40.367,"Selenium(Python): After clicking button, wait until all the new elements (which can have different attributes) are loaded","How do I wait for all the new elements that appear on the screen to load after clicking a specific button? I know that I can use the presence_of_elements_located function to wait for specific elements, but how do I wait until all the new elements have loaded on the page? Note that these elements might not necessarily have one attribute value like class name or id.","Well in reality you can't, but you can run a script to check for that. -However, be wary that this will not work on javascript/AJAX elements. -self.driver.execute_script(""return document.readyState"") == ""complete""",1.2,True,1,6920 -2020-07-22 10:14:37.227,Scipy Differential Evolution initial solution(s) input,"Does anyone know how to feed an initial solution or a matrix of initial solutions into the differential evolution function from the Scipy library? -The documentation doesn't explain if it's possible, but I know that initial solution implementation is not unusual. Scipy is so widely used I would expect it to have that type of functionality.","Ok, after review and testing I believe I now understand it. -Among the parameters that the scipy.optimize.differential_evolution(...) function accepts is the init parameter, which allows you to pass in an array of solutions. Personally I was looking at a set of coordinates, so I enumerated them into an array, created 99 other variations of it (100 different solutions) and fed this matrix into the init parameter. I believe it needs to have more than 4 solutions or you are going to get a tuple error. 
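For example (the quadratic objective and the mostly-random population here are just illustrative):

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    # simple quadratic bowl as a stand-in objective
    return np.sum(x ** 2)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
rng = np.random.default_rng(42)
# 10 candidate solutions (rows) for 2 parameters (columns)
init_pop = rng.uniform(-5.0, 5.0, size=(10, 2))
# overwrite one row with a known starting solution
init_pop[0] = [0.5, -0.5]

result = differential_evolution(objective, bounds, init=init_pop, seed=42)
print(result.x, result.fun)
```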
-I probably didn't need to ask/answer the question though it may help others that got equally confused.",1.2,True,1,6921 -2020-07-22 18:39:12.457,How do i check if it should be an or a in python?,"so im making a generator (doesn't really matter what one it is) -and im trying to make the a/ans appear before nouns correctly. -for example: -""an apple plays rock paper scissors with a banana"" -and not: -""a apple plays rock paper scissors with an banana"" -the default thing for the not-yet determined a/an is ""
"" -so i need to replace the """" with either a or an depending on if the letter after it is a vowel or not. -how would i do this?","Pseudo code - -first find letter 'a' or 'an' in string and keep track of it -then find first word after it -if word starts with vowel: make it 'an' -Else: make it 'a' -this rules breaks with words like 'hour' or 'university' so also make exception rule(find a list of words if u can)",0.0,False,1,6922 -2020-07-23 02:51:14.593,Schoology API understanding,I can get to the user information using the API but I cannot access course information. Can someone explain what I need to do to make the correct call for course information?,The easiest way to answer these questions is to try it in Postman. Highly recommended.,0.0,False,1,6923 -2020-07-23 08:31:12.210,Is an abstract class without any implementation and variables effectively interface?,"I'm reviewing the concepts of OOP, reading . -Here the book defines interface as - -The set of all signatures defined by an object’s operations is called the interface to the object. (p.39) - -And the abstract class as - -An abstract class is one whose main purpose is to define a common interface for its subclasses. An abstract class will defer some or all of its implementation to operations defined in subclasses; hence an abstract class cannot be instantiated. The operations that an abstract class declares but doesn’t implement are called abstract operations. Classes that aren’t abstract are called concrete classes. (p.43) - -And I wonder, if I define an abstract class without any internal data (variables) and concrete operations, just some abstract operations, isn't it effectively just a set of signatures? Isn't it then just an interface? -So this is my first question: - -Can I say an abstract class with only abstract functions is ""effectively (or theoretically)"" an interface? - -Then I thought, the book also says something about types and classes. 
- -An object’s class defines how the object is implemented. The class defines the object’s internal state and the implementation of its operations. In contrast, an object’s type only refers to its interface—the set of requests to which it can respond. An object can have many types, and objects of different classes can have the same type. (p.44) - -Then I remembered that some languages, like Java, does not allow multiple inheritance while it allows multiple implementation. So I guess for some languages (like Java), abstract class with only abstract operations != interfaces. -So this is my second question: - -Can I say an abstract class with only abstract functions is ""generally equivalent to"" an interface in languages that support multiple inheritance? - -My first question was like checking definitions, and the second one is about how other languages work. I mainly use Java and Kotlin so I'm not so sure about other languages that support multiple inheritance. I do not expect a general, comprehensive review on current OOP languages, but just a little hint on single language (maybe python?) will be very helpful.","No. - -In Java, every class is a subclass of Object, so you can't make an abstract class with only abstract methods. It will always have the method implementations inherited from Object: hashCode(), equals(), toString(), etc. - -Yes, pretty much. - -In C++, for example, there is no specific interface keyword, and an interface is just a class with no implementations. There is no universal base class in C++, so you can really make a class with no implementations. -Multiple inheritance is not really the deciding feature. Java has multiple inheritance of a sort, with special classes called ""interfaces"" that can even have default methods. -It's really the universal base class Object that makes the difference. 
interface is the way you make a class that doesn't inherit from Object.",1.2,True,1,6924 -2020-07-23 11:53:33.000,How to control Django with Javascript?,"I am building a web application with Django and I show the graphs in the website. The graphs are obtained from real time websites and is updated daily. I want to know how can I send graphs using matplotlib to template and add refresh option with javascript which will perform the web scraping script which I have written. The main question is which framework should I use? AJAX, Django REST, or what?","You're better off using a frontend framework and calling the backend for the data via JS. separating the front and backend is a more contemporary approach and has some advantages over doing it all in the backend. -From personal experience, it gets really messy mixing Python and JS in the same system. -Use Django as a Rest-ful backend, and try not to use AJAX in the frontend, then pick a frontend of your choice to deliver the web app.",0.3869120172231254,False,1,6925 -2020-07-23 15:56:17.107,How can I deploy a streamlit application in repl.it?,"I installed/imported streamlit, numpy, and pandas but I do not know how I can see the charts I have made. How do I deploy it on repl.it?","You can not deploy streamlit application within repl.it because - -In order to protect against CSRF attacks, we send a cookie with each request. -To do so, we must specify allowable origins, which places a restriction on -cross-origin resource sharing. - -One solution is push your code from repl.it to GitHub. Then deploy from GitHub on share.streamlit.io.",0.2012947653214861,False,1,6926 -2020-07-23 17:07:46.247,How to get jupyter notebook theme in vscode,I am a data scientist use jupyter notebook a lot and also have started to do lot of development work and use Vscode for development. so how can I get Jupyter notebook theme in vscode as well? 
I know how to open a Jupyter notebook in vscode by installing an extension but I wanted to know how to get Jupyter notebook theme for vs code. so it gets easier to switch between both ide without training eyes,"You can edit your VScode's settings by: -1- Go to your Jupyter extension => Extension settings => and check ""Ignore Vscode Theme"". -2- Click on File => preference=> color Theme -3- Select the theme you need. -You can download the theme extension from VSCode's extension store, for example: Markdown Theme Kit; Material Theme Kit. -Note: -You need to restart or reload VSCode to see the changes.",0.296905446847765,False,1,6927 -2020-07-24 18:18:58.150,KivyMD MDFlatButton not clickable & Kivy ScreenManager not working,"So I'm making this game with Kivy and it's a game where there's a start screen with an MDToolbar, an MDNavigationDrawer, two Images, three MDLabels and a OneLineIconListItem that says 'Start Game' and when you click on it the game is supposed to start. -The game screen contains: - -Viruses -Masked man -Soap which you use to hit the viruses -Current score in an MDLabel -A button to go back to the start screen - -Issues: - -The background music for the game starts playing before the game screen is shown (When the start screen is shown) - ScreenManager issue -When I click the button to go back to the start screen, the button doesn't get clicked - MDFlatButton issue - -I used on_touch_down, on_touch_move, and on_touch_up for this game and I know that's what's causing the MDFlatButton issue. So does anyone know how I'm supposed to have the on_touch_* methods defined AND have clickable buttons? -And I don't know how to fix the ScreenManager issue either. -I know I haven't provided any code here, but that's because this post is getting too long. I already got a post deleted because people thought the post was too long and I was providing too much code and too less details. And I don't want that to happen again. 
If anyone needs to view the code of my project, I will leave a Google Docs link to it. -Thanks in advance!","I fixed my app. -Just in case anyone had the same question, I'm gonna post the answer here. - -To get a clickable button, you have to create a new Screen or Widget and add the actual screen as a widget to the new class. Then, you can add buttons to the new class. This works because the button is on top of the actual screen. So when you click anywhere in the button's area, the button gets clicked and the on_touch_* methods of the actual screen don't get called. - - -And to fix the ScreenManager issue, you just have to expirement.",1.2,True,1,6928 -2020-07-25 22:12:31.897,Tkinter pickle save and load,help me please how can I use the pickle save if I have a lot of entry and I want to save all in one file and load form the file for each entry separately?,"You can't pickle tkinter widgets. You will have to extract the data and save just the data. Then, on restart you will have to unpickle the data and insert it back into the widgets.",0.0,False,1,6929 -2020-07-26 07:50:11.350,Windows desktop application read session data from browser,"I'm writing a desktop and web app, Just need to know how can i authorize this desktop application with same open web app browser after installed?","if you mean to authorize your desktop app via the login of user from any web browser, you can use TCP/UDP socket or also for example , call an api every 2 seconds to check is user is loged in or not. in web browser , if user had be loged in , you can set login state with its ip or other data in database to authorize the user from desktop app.",0.0,False,1,6930 -2020-07-26 13:19:22.760,How to add a python matplotlib interactive figure to vue.js web app?,"I have a plot made using Python matplotlib that updates every time new sensor data is acquired. I also have a web GUI using vue. 
I'd like to incorporate the matplotlib figure into the web GUI and have it update as it does when running it independently. This therefore means not just saving the plot and loading it as an image. -Can anyone advise how to achieve this?","In my opinion that's not a reasonable way to do it. There are very good visualization tools powered by JavaScript, for example Chart.js. -You can do your computation with Python in the back-end, pass the data to the front-end through an API, and plot whatever interactive diagrams you want using JavaScript.",1.2,True,1,6931 -2020-07-27 06:36:07.150,How to install python packages for Spyder,"I am using the IDE called Spyder for learning Python. -I would like to know how to go about installing Python packages for Spyder? -Thank you","I have not checked if the ways described by people here before me work or not. -I am running Spyder 5.0.5, and for me the steps below worked: - -Step 1: Open the Anaconda prompt (I had my Spyder open in parallel) -Step 2: write - ""pip install package-name"" - -Note: I got my Spyder 5.0.5 up and running after installing the whole Anaconda Navigator 2.0.3.",0.0,False,2,6932 -2020-07-27 06:36:07.150,How to install python packages for Spyder,"I am using the IDE called Spyder for learning Python. -I would like to know how to go about installing Python packages for Spyder? -Thank you","Spyder is a package too; you can install packages using pip or conda, and Spyder will access them through the Python path of your environment. -Spyder is not a package manager like conda, but an IDE like Jupyter Notebook and VS Code.",0.1618299653758019,False,2,6932 -2020-07-28 16:08:13.623,What is the difference between sys.stdin.read() and sys.stdin.readline(),"Specifically, I would like to know how to give input in the case of read(). I tried everywhere but couldn't find the differences anywhere.","read() consumes the whole stream, up to EOF, and returns it as a single string. 
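A quick way to see the difference without typing EOF by hand is to substitute io.StringIO for sys.stdin (a sketch):

```python
import io

# Stand-in for sys.stdin: a small in-memory text stream.
stream = io.StringIO('asd\n123\n')
everything = stream.read()        # consumes the whole stream at once
assert everything == 'asd\n123\n'

stream = io.StringIO('asd\n123\n')
first_line = stream.readline()    # consumes only up to the first newline
assert first_line == 'asd\n'
```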
-But readline() recognizes the object line by line and prints it out.",0.2012947653214861,False,2,6933 -2020-07-28 16:08:13.623,What is the difference between sys.stdin.read() and sys.stdin.readline(),"Specifically, I would like to know how to give input in the case of read(). I tried everywhere but couldn't find the differences anywhere.",">>> help(sys.stdin.read) -Help on built-in function read: - -read(size=-1, /) method of _io.TextIOWrapper instance - Read at most n characters from stream. - - Read from underlying buffer until we have n characters or we hit EOF. - If n is negative or omitted, read until EOF. -(END) - -So you need to send EOF when you are done (*nix: Ctrl-D, Windows: Ctrl-Z+Return): - ->>> sys.stdin.read() -asd -123 -'asd\n123\n' - -The readline is obvious. It will read until newline or EOF. So you can just press Enter when you are done.",0.3869120172231254,False,2,6933 -2020-07-28 17:13:22.017,"Is there any simple way to pass arguments based on their position, rather than kwargs. Like a positional version of kwargs?","Is there a generic python way to pass arguments to arbitrary functions based on specified positions? While it would be straightforward to make a wrapper that allows positional argument passing, it would be incredibly tedious for me considering how frequently I find myself needing to pass arguments based on their position. -Some examples when such would be useful: - -when using functools.partial, to partially set specific positional arguments -passing arguments with respect to a bijective argument sorting key, where 2 functions take the same type of arguments, but where their defined argument names are different - -An alternative for me would be if I could have every function in my code automatically wrapped with a wrapper that enables positional argument passing. 
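For instance, a small helper like this sketch is the behavior I am after (partial_at is a hypothetical name, just to show what I mean):

```python
def partial_at(func, positions, *bound):
    # Bind *bound to the given 0-based positions of func; the
    # remaining call-time arguments fill the free positions in order.
    pos_to_val = dict(zip(positions, bound))
    def wrapper(*args):
        free = iter(args)
        total = len(bound) + len(args)
        merged = [pos_to_val[i] if i in pos_to_val else next(free)
                  for i in range(total)]
        return func(*merged)
    return wrapper

def tag(a, b, c):
    return (a, b, c)

g = partial_at(tag, (1,), 'mid')   # like functools.partial, but at position 1
assert g('first', 'last') == ('first', 'mid', 'last')
```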
I know several ways this could be done, such as running my script through another script which modifies it, but before resorting to that I'd like to consider simpler pythonic solutions.",For key arguments use **kwargs but for positional arguments use *args.,0.0,False,1,6934 -2020-07-28 22:24:48.747,NaN values with Pandas Spearman and Kendall correlations,"I am attempting to calculate Kendall's tau for a large matrix of data stored in a Pandas dataframe. Using the corr function, with method='kendall', I am receiving NaN for a row that has only one value (repeated for the length of the array). Is there a way to resolve it? The same issue happened with Spearman's correlation as well, presumably because Python doesn't know how to rank an array that has a single repeated value, which leaves me with Pearson's correlation -- which I am hesitant to use due to its normality and linearity assumptions. -Any advice is greatly appreciated!","I decided to abandon the complicated mathematics in favor of intuition. Because the NaN values arose only on arrays with constant values, it occurred to me that there is no relationship between it and the other data, so I set its Spearman and Kendall correlations to zero.",0.0,False,1,6935 -2020-07-28 23:02:11.343,Cannot find Python 3.8.2 path on Windows 10,"I have Windows 10 on my computer and when I use the cmd and check python --version, I get python 3.8.2. But when I try to find the path for it, I am unable to find it through searching on my PC in hidden files as well as through start menu. I don't seem to have a python 3.8 folder on my machine. Anybody have any ideas how to find it?","If you're using cmd (ie Command Prompt), and typing python works, then you can get the path for it by doing where python. 
It will list all the pythons it finds, but the first one is what it'll be using.",0.1352210990936997,False,1,6936 -2020-07-29 02:33:18.637,Pygame how to let balls collide,I want to make a script in pygame where two balls fly towards each other and when they collide they should bounce off from each other but I don't know how to do this so can you help me?,"It's pretty easy: treat each ball as a center point plus a radius, and on every frame check whether the distance between the two centers is less than or equal to the sum of the radii. If it is, the balls are colliding, and you can swap or reverse their velocities to make them bounce. Avoid checking for an exact coordinate (such as if x == 250), because a ball moving several pixels per frame can jump over that exact value and the collision will never be detected.",0.0,False,1,6937 -2020-07-29 08:54:18.833,How to set intervals between multiple requests AWS Lambda API,"I have created an API using AWS Lambda function (using Python). Now my react js code hits this API whenever an event fires. So users can request the API as many times as the events are fired. Now the problem is we are not getting the responses from the Lambda API sequentially. Sometimes we are getting the response to our last request faster than the response to the previous request. -So we need to handle our responses in the Lambda function sequentially, maybe by adding some delay between 2 requests or maybe by implementing throttling. So how can I do that?","Did you check the concurrency setting on Lambda? You can throttle the lambda there. -But if you throttle the lambda and the requests being sent are not being received, the application sending the requests might be receiving an error unless you are storing the requests somewhere on AWS for being processed later. -I think putting an SQS in front of lambda might help. 
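As a local sketch of the queue idea, with queue.Queue standing in for SQS and a single worker standing in for a concurrency-limited Lambda, items are then handled strictly in arrival order:

```python
import queue
import threading

q = queue.Queue()
processed = []

def worker():
    # Single consumer: items are handled strictly in the order they
    # were enqueued, which restores sequential processing.
    while True:
        item = q.get()
        if item is None:
            break
        processed.append(item)
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    q.put(i)          # requests arriving
q.put(None)           # sentinel: shut down the worker
t.join()
assert processed == [0, 1, 2, 3, 4]
```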
You will be hitting API gateway, the requests get sent to SQS, lambda polls requests concurrently (you can control the concurrency) and then send the response back.",0.1352210990936997,False,2,6938 -2020-07-29 08:54:18.833,How to set intervals between multiple requests AWS Lambda API,"I have created an API using AWS Lambda function (using Python). Now my react js code hits this API whenever an event fire. So user can request API as many times the events are fired. Now the problem is we are not getting the response from lambda API sequentially. Sometime we are getting the response of our last request faster than the previous response of previous request. -So we need to handle our response in Lambda function sequentially, may be adding some delay between 2 request or may be implementing throttling. So how can I do that.","You can use SQS FIFO Queue as a trigger on the Lambda function, set Batch size to 1, and the Reserved Concurrency on the Function to 1. The messages will always be processed in order and will not concurrently poll the next message until the previous one is complete. -SQS triggers do not support Batch Window - which will 'wait' until polling the next message. This is a feature for Stream based Lambda triggers (Kinesis and DynamoDB Streams) -If you want to streamlined process, Step Function will let you manage states using state machines and supports automatic retry based off the outputs of individual states.",1.2,True,2,6938 -2020-07-29 11:03:18.770,"Is it possible to store an image with a value in a way similar to an array, in a database (Firebase or any other)?","Would it be possible to store an image and a value together in a database? Like in a array? -So it would be like [image, value]. I’m just trying to be able to access the image to print that and then access the value later (for example a image if a multi-choice question and its answer is the value). -Also how would I implement and access this? 
I’m using Firebase with the pyrebase wrapper for python but if another database is more suitable I’m open to suggestions.","you can set your computer as a server and in database you can store like [image_path, value].",0.0,False,1,6939 -2020-07-29 11:45:40.760,How to change the Anaconda environment of a jupyter notebook?,"I have created a new Anaconda environnement for Python. I managed to add it has an optional environnement you can choose when you create a new Notebook. Hovewer, I'd like to know how can I change the environnement of an already existing Notebook.","open your .ipynb file on your browser. On top, there is Kernel tab. You can find your environments under Change Kernel part.",0.2012947653214861,False,1,6940 -2020-07-29 13:58:51.300,"'pychattr' library in Python, 'n_simulations' parameter","Does anyone know if it is possible to use n_simulation = None in 'MarkovModel' algorithm in 'pychhatr' library in Python? -It throws me an error it must be an integer, but in docsting i have information like that: -'n_simulations : one of {int, None}; default=10000' -I`d like to do something like nsim = NULL in 'markov_model' in 'ChannelAttribution' package in R, these two algorithms are similarly implemented. -I don`t know how does it works exactly, how many simulations from a transition matrix I have using NULL. -Could anyone help with this case? -Regards, -Sylwia","Out of curiosity I spent some minutes staring intensely at the source code of both pychattr module and ChannelAttribution package. -I'm not really familiar with the model, but are you really able to call this in R with ""nsim=NULL""? Unless I missed something if you omit this parameter it will use value 100000 as the default and if parameter exists, the R wrapper will complain if it's not a positive number. 
-Regards, -Maciej",0.0,False,2,6941 -2020-07-29 13:58:51.300,"'pychattr' library in Python, 'n_simulations' parameter","Does anyone know if it is possible to use n_simulation = None in 'MarkovModel' algorithm in 'pychhatr' library in Python? -It throws me an error it must be an integer, but in docsting i have information like that: -'n_simulations : one of {int, None}; default=10000' -I`d like to do something like nsim = NULL in 'markov_model' in 'ChannelAttribution' package in R, these two algorithms are similarly implemented. -I don`t know how does it works exactly, how many simulations from a transition matrix I have using NULL. -Could anyone help with this case? -Regards, -Sylwia","I checked that 'pychattr' (Python) doesn`t support value None but it supports n_simulations = 0 and it sets n_simulations to 1e6 (1 000 000). -'ChannelAttribution' (R) replaces nsim = NULL and nsim = 0 to nsim = 1e6 (1 000 000) too. -In latest version of 'ChannelAttribution' (27.07.2020) we have nsim_start parameter instead of nsim and it doesn`t support 0 or NULL value anymore. -Important: default value of nsim_start is 1e5 (100 000) and from my experience it`s not enough in many cases. -Regards, -Sylwia",0.0,False,2,6941 -2020-07-29 16:10:55.583,How to know the alpha or critical value of your t test analysis?,"How do you decide the critical values(alpha) and analyze with the p value -example: stats.ttest_ind(early['assignment1_grade'], late['assignment1_grade']) -(2 series with score of their assignments) -I understand the concept that if the p value is greater than the alpha value then the null hypothesis cant be neglected. -Im doing a course and instructor said that the alpha value here is 0.05 but how do you determine it.","The alpha value cannot be determined in the sense that there were a formula to calculate it. Instead, it is arbitrarily chosen, ideally before the study is conducted. 
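Once alpha is fixed, applying it is mechanical; as a sketch, here is a two-tailed z-test decision using the normal approximation via math.erf (an illustration of the rule, not scipy's exact t-test):

```python
import math

def two_tailed_p(z):
    # P(|Z| >= |z|) for a standard normal Z, via the error function.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

alpha = 0.05                       # chosen before looking at the data
z = 2.5                            # test statistic computed from your samples
p = two_tailed_p(z)
reject_null = p < alpha
assert reject_null                 # p is about 0.012, below 0.05
assert two_tailed_p(1.0) > alpha   # |z| = 1 is not significant at 0.05
```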
-The value alpha = 0.05 is a common choice that goes back to a suggestion by Ronald Fisher in his influential book Statistical Methods for Research Workers (first published in 1925). The only particular reason for this value is that if the test statistic has a normal distribution under the null hypothesis, then for a two-tailed test with alpha = 0.05 the critical values of the test statistic will be its mean plus/minus 2 (more exactly, 1.96) times its standard deviation. -In fact, you don't need alpha when you calculate the p value, because you can just publish the p value and then every reader can decide whether to consider it low enough for any given purpose or not.",0.0,False,1,6942 -2020-07-31 14:50:10.383,Giving interactive control of a Python program to the user,"I need my Python program to do some stuff, and at a certain point give control to the user (like a normal Python shell when you run python3 or whatever) so that he can interact with it via command line. I was thinking of using pwntools's interactive() method but I' m not sure how I would use that for the local program instead of a remote. -How would I do that? -Any idea is accepted, if pwntools is not needed, even better.","Use IPython -If you haven't already, add the package IPython using pip, anaconda, etc. -Add to your code: -from IPython import embed -Then where you want a ""breakpoint"", add: -embed() -I find this mode, even while coding to be very efficient.",0.3869120172231254,False,1,6943 -2020-07-31 15:51:48.670,Python Coverage how to generate Unittest report,"In python I can get test coverage by coverage run -m unittest and the do coverage report -m / coverage html to get html report. -However, it does not show the actual unit test report. The unit test result is in the logs, but I would like to capture it in a xml or html, so I can integrate it with Jenkins and publish on each build. This way user does not have to dig into logs. 
-I tried to find solution to this but could not find any, please let me know, how we can get this using coverage tool. -I can get this using nose2 - nose2 --html-report --with-coverage --coverage-report html - this will generate two html report - one for unit test and other for coverage. But for some reason this fails when I run with actual project (no coverage data collected / reported)","Ok for those who end up here , I solved it with - -nose2 --html-report --with-coverage --coverage-report html --coverage ./ -The issue I was having earlier with 'no coverage data' was fixed by specifying the the directory where the coverage should be reported, in the command above its with --coverage ./",1.2,True,1,6944 -2020-08-01 13:20:07.317,Rename hundred or more column names in pandas dataframe,"I am working with the John Hopkins Covid data for personal use to create charts. The data shows cumulative deaths by country, I want deaths per day. Seems to me the easiest way is to create two dataframes and subtract one from the other. But the file has column names as dates and the code, e.g. df3 = df2 - df1 subtracts the columns with the matching dates. So I want to rename all the columns with some easy index, for example, 1, 2, 3, .... -I cannot figure out how to do this?","Thanks for the time and effort but I figured out a simple way. -for i, row in enumerate(df): -df.rename(columns = { row : str(i)}, inplace = True) -to change the columns names and then -for i, row in enumerate(df): -df.rename(columns = { row : str( i + 43853)}, inplace = True) -to change them back to the dates I want.",0.0,False,1,6945 -2020-08-02 09:58:49.600,JWT authorization and token leaks,"I need help understanding the security of JWT tokens used for login functionality. Specifically, how does it prevent an attack from an attacker who can see the user's packets? 
My understanding is that, encrypted or not, if an attacker gains access to a token, they'll be able to copy the token and use it to login themselves and access a protected resource. I have read that this is why the time-to-live of a token should be short. But how much does that actually help? It doesn't take long to grab a resource. And if the attacker could steal a token once, can't they do it again after the refressh? -Is there no way to verify that a token being sent by a client is being sent from the same client that you sent it to? Or am I missing the point?","how does it prevent an attack from an attacker who can see the user's packets? - -Just because you can see someone's packets doesn't mean that you can see the contents. HTTPS encrypts the traffic so even if someone manages to capture your traffic, they will no be able to extract JWT out of it. Every website that is using authentication should only run through HTTPS. If someone is able to perform man-in-the-middle attack then that is a different story. - -they'll be able to copy the token and use it to login themselves and access a protected resource - -Yes but only as the user they stole the token from. JWT are signed which means that you can't modify their content without breaking the signature which will be detected by the server (at least it is computationally infeasible to find the hash collision such that you could modify the content of the JWT). For highly sensitive access (bank accounts, medical data, enterprise cloud admin accounts...) you will need at least 2-factor authentication. - -And if the attacker could steal a token once, can't they do it again after the refressh? - -Possibly but that depends on how the token has been exposed. 
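To make the earlier point about signatures concrete, here is a minimal HMAC sketch using the stdlib hmac module (real JWTs wrap the same idea in the JWS format, typically via a library such as PyJWT):

```python
import hashlib
import hmac

secret = b'server-side-secret'          # known only to the server
payload = b'user=alice;admin=0'         # claims, analogous to a JWT body

# Server signs the payload when issuing the token.
signature = hmac.new(secret, payload, hashlib.sha256).digest()

def verify(body, sig):
    expected = hmac.new(secret, body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

assert verify(payload, signature)                     # untouched token passes
tampered = payload.replace(b'admin=0', b'admin=1')    # attacker edits a claim
assert not verify(tampered, signature)                # signature check fails
```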
If the attacked sits on the unencrypted channel between you and the server then sure they can repeat the same process but this exposure might be a result of a temporary glitch/human mistake which might be soon repaired which will prevent attack to use the token once it expires. - -Is there no way to verify that a token being sent by a client is being sent from the same client that you sent it to? - -If the attacker successfully performs man-in-the-middle attack, they can forge any information that you might use to verify the client so the answer is no, there is no 100% reliable way to verify the client. - -The biggest issue I see with JWTs is not JWTs themselves but the way they are handled by some people (stored in an unencrypted browser local storage, containing PII, no HTTPS, no 2-factor authentication where necessary, etc...)",1.2,True,1,6946 -2020-08-02 12:15:56.920,Python runs in Docker but not in Kubernetes hosted in Raspberry Pi cluster running Ubuntu 20,"Here is the situation. -Trying to run a Python Flask API in Kubernetes hosted in Raspberry Pi cluster, nodes are running Ubuntu 20. The API is containerized into a Docker container on the Raspberry Pi control node to account for architecture differences (ARM). -When the API and Mongo are ran outside K8s on the Raspberry Pi, just using Docker run command, the API works correctly; however, when the API is applied as a Deployment on Kubernetes the pod for the API fails with a CrashLoopBackoff and logs show 'standard_init_linux.go:211: exec user process caused ""exec format error""' -Investigations show that the exec format error might be associated with problems related to building against different CPU architectures. However, having build the Docker image on a Raspberry Pi, and are successfully running the API on the architecture, I am unsure this could the source of the problem. -It has been two days and all attempts have failed. Can anyone help?","Fixed; however, something doesn't seem right. 
-The Kubernetes Deployment was always deployed onto the same node. I connected to that node and ran the Docker container and it wouldn't run; the ""exec format error"" would occur. So, it looks like it was a node-specific problem. -I copied the API and Dockerfile onto the node and ran Docker build to create the image. It now runs. That does not make sense, as the Docker image should have everything it needs to run. -Maybe it's because a previous image built against x86 (the development machine) remained in that node's Docker cache/repository. Maybe the image on the node is not overwritten with newer images that have the same name and version number (the version number didn't increment). That would seem to be the case, as the spin-up time of the image on the remote node is fast, suggesting the new image isn't copied to the remote node. That is likely to be what it is. -I will post this anyway as it might be useful. - -Edit: allow me to clarify some more: the root of this problem was ultimately that there was no shared image repository in the cluster. Images were being manually copied onto each RPI (running ARM64) from a laptop (not running ARM64), and this manual process caused the problem. -An image built on the laptop was based on a base image incompatible with ARM64; this was manually copied to all RPIs in the cluster. This caused the Exec Format error. -Building the image on the RPI pulled a base image that supported ARM64; however, this build had to be done on every RPI because there was no central repository in the cluster from which Kubernetes could pull newly built ARM64-compatible images to the other RPI nodes. -Solution: a shared repository -Hope this helps.",0.6730655149877884,False,1,6947 -2020-08-02 12:29:32.010,Getting json from html with same name,"I have issue with scraping page and getting json from it. On Windows, with Python >= 3.8, DLLs are no longer imported from the PATH. 
If gdalXXX.dll is in the PATH, then set the -USE_PATH_FOR_GDAL_PYTHON=YES environment variable to feed the PATH -into os.add_dll_directory(). - -I've been looking for a solution to this but can't seem to figure out how to fix this. Anybody has a solution?","use: -from osgeo import gdal -instead of: -import gdal",0.0,False,1,7107 -2020-11-06 04:17:49.740,How to Get coordinates of detected area in opencv using python,"I have been able to successfully detect an object(face and eye) using haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get coordinates of mid point of the two eyes. and want to store them in a array. Can any one help me? how can i do this. any guide","So you already detected the eye? You also have a bounding box around the eye? -So your question comes down to calculatiing the distance between 2 bounding boxes and then dividing it by 2? -Or do I misunderstand? -If you need exact the center between the two eyes a good way to go about that would be to take the center of the 2 boxes bounding the 2 eyes. -Calculate the distance between those two points and divide it by 2. -If you're willing to post your code I'm willing to help more with writing code.",0.0,False,2,7108 -2020-11-06 04:17:49.740,How to Get coordinates of detected area in opencv using python,"I have been able to successfully detect an object(face and eye) using haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get coordinates of mid point of the two eyes. and want to store them in a array. Can any one help me? how can i do this. any guide","I suppose you have the coordinates for the bounding boxes of both eyes. -Something like X1:X2 Y1:Y2 for both boxes. 
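In code, with each box as an (x1, y1, x2, y2) tuple, the arithmetic is a short sketch (the coordinates below are made-up examples):

```python
def center(box):
    # box is (x1, y1, x2, y2): top-left and bottom-right corners
    x1, y1, x2, y2 = box
    return ((x2 - x1) / 2 + x1, (y2 - y1) / 2 + y1)

def midpoint(p, q):
    # Point halfway between two (x, y) points.
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

left_eye = (100, 120, 140, 150)    # example bounding boxes
right_eye = (180, 120, 220, 150)
between_eyes = midpoint(center(left_eye), center(right_eye))
assert between_eyes == (160.0, 135.0)
```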
You just have to find the center of these boxes: (X2-X1)/2+X1 and (Y2-Y1)/2+Y1 -You'll get two XY coordinates from this; basically just do the above again with these two coordinates, and you'll get the center point.",0.0,False,2,7108 -2020-11-06 12:51:43.300,How to search on Google with Selenium in Python?,I'm really new to web scraping. Is there anyone that could tell me how to search on google.com with Selenium in Python?,Selenium probably isn't the best fit; other libraries/tools would work better. BeautifulSoup is the first one that comes to mind.,0.1352210990936997,False,1,7109 -2020-11-06 18:15:31.933,Download cloudtrail event,"I need some advice on one of my use cases regarding CloudTrail and Python boto3. -I have some CloudTrail events configured, and I need to send a report of all those events manually by downloading the events file. -I am planning to automate this using Python boto3. Can you please advise how I can use boto3 to get the CloudTrail events for a specific date passed at runtime, with the CSV or JSON files downloaded and sent over email? As of now I have created a Python script which shows the CloudTrail events but is not able to download the files. Please advise.","My suggestion is to simply configure delivery of those events to an S3 bucket, and you have the events file there. This configuration is part of your trail configuration and doesn't need boto3. -You can then access the events files stored on S3 using boto3 (personally the best way to interact with AWS resources) and manipulate those files as you prefer.",0.0,False,1,7110 -2020-11-07 02:37:34.713,Saving Tensorflow models with custom layers,"I read through the documentation, but something wasn't clear for me: if I coded a custom layer and then used it in a model, can I just save the model as SavedModel and the custom layer automatically goes within it, or do I have to save the custom layer too? -I tried saving just the model in H5 format and not the custom layer. 
When I tried to load the model, I had an error on the custom layer not being recognized or something like this. Reading through the documentation, I saw that saving to custom objects to H5 format is a bit more involved. But how does it work with SavedModels?","If I understand your question, you should simply use tf.keras.models.save_model(,'file_name',save_format='tf'). -My understanding is that the 'tf' format automatically saves the custom layers, so loading doesn't require all libraries be present. This doesn't extend to all custom objects, but I don't know where that distinction lies. If you want to load a model that uses non-layer custom objects you have to use the custom_objects parameter in tf.keras.models.load_model(). This is only necessary if you want to train immediately after loading. If you don't intend to train the model immediately, you should be able to forego custom_objects and just set compile=False in load_model. -If you want to use the 'h5' format, you supposedly have to have all libraries/modules/packages that the custom object utilizes present and loaded in order for the 'h5' load to work. I know I've done this with an intializer before. This might not matter for layers, but I assume that it does. -You also need to implement get_config() and save_config() functions in the custom object definition in order for 'h5' to save and load properly.",0.0,False,1,7111 -2020-11-07 06:19:25.707,How to determine whether function returns an iterable object which calculates results on demand?,"How can one surelly tell that function retuns an iterable object, which calculates results on demand, and not an iterator, which returns already calculated results? -For e.g. 
function filter() from python's documentation says: - -Construct an iterator from those elements of iterable for which function returns true - -Reading that I cat tell that this function returns an object which implements iterable protocol but I can't be sure it won't eat up all my memory if use it with generator which reads values from 16gb file untill I read further and see the Note: - -Note that filter(function, iterable) is equivalent to the generator expression (item for item in iterable if function(item)) - -So, how does one can tell that function calculates returned results on demand and not just iterating over temporary lists which holds already calculated values? I have to inspect sources?","If the doc says that a function returns an iterator, it's pretty safe to assume it calculates items on the fly to save memory. If it did calculate all its items at once, it would almost certainly return a list.",1.2,True,1,7112 -2020-11-07 12:40:31.890,How to get only the whole number without rounding-off?,"how do you get only the whole number of a non-integer value without the use of rounding-off? I have searched for it and I seem to be having a hard time. -For example: - -w = 2.20 -w = 2.00 - -x = 2.50 -x = 2.00 - -y = 3.70 -y = 3.00 - -z = 4.50 -z = 4.00 - -Is it as simple as this or that might get wrong in some values? -x = 2.6 or x = 2.5 or x = 2.4 -x = int(x) -x = 2 - -Is it really simple as that? Thanks for answering this stewpid question.","you can just divided it into (1) -but use (//) like this: -x = x // 1",0.6730655149877884,False,1,7113 -2020-11-08 15:39:33.647,How to install OpenCV in Docker (CentOs)?,"I am trying to install OpenCV in a docker container (CentOS). -I tried installing python first and then tried yum install opencv-contrib but it doesn't work. 
-Can someone help me out as to how to install OpenCV in Docker (CentOS)?","To install OpenCV use the command: sudo yum install opencv opencv-devel opencv-python -And when the installation is completed, use this command to verify: pkg-config --modversion opencv",0.0,False,1,7114 -2020-11-10 12:41:13.300,How can I bypass the 429-error from www.instagram.com?,"I'm soliciting you today because I have a problem with Selenium. -My goal is to make a fully automated bot that creates an account with parsed details (mail, pass, birth date...). So far, I've managed to almost create the bot (I just need to access Gmail and get the confirmation code). -My problem is here: I've tried a lot of things, and I get a Failed to load resource: the server responded with a status of 429 () -So, I guess, Instagram is blocking me. -How could I bypass this?","The answer is in the description of the HTTP error code. You are being blocked because you made too many requests in a short time. -Reduce the rate at which your bot makes requests and see if that helps. As far as I know there's no way to ""bypass"" this check by the server. -Check if the response header has a Retry-After value to tell you when you can try again.",0.0,False,2,7115 -2020-11-10 12:41:13.300,How can I bypass the 429-error from www.instagram.com?,"I'm soliciting you today because I have a problem with Selenium. -My goal is to make a fully automated bot that creates an account with parsed details (mail, pass, birth date...). So far, I've managed to almost create the bot (I just need to access Gmail and get the confirmation code). -My problem is here: I've tried a lot of things, and I get a Failed to load resource: the server responded with a status of 429 () -So, I guess, Instagram is blocking me. -How could I bypass this?","Status code 429 means that you've bombarded Instagram's server too many times, and that is why Instagram has blocked your IP. -This is done mainly to prevent DDoS attacks. 
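If you retry at all, back off between attempts; here is a sketch of a wait-time rule that honors a Retry-After header when the server sends one (delay_before_retry is a hypothetical helper name):

```python
def delay_before_retry(attempt, retry_after=None, base=1.0, cap=60.0):
    # A Retry-After value from the server wins; otherwise use
    # exponential backoff, capped so waits do not grow unbounded.
    if retry_after is not None:
        return float(retry_after)
    return min(cap, base * (2 ** attempt))

assert delay_before_retry(0) == 1.0
assert delay_before_retry(3) == 8.0
assert delay_before_retry(10) == 60.0               # capped
assert delay_before_retry(2, retry_after=30) == 30.0
```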
-Best thing would be to try after some time ( there might be a Retry-After header in the response). -Also, increase the time interval between each request and set the specific count of number of requests made within a specified time (let's say 1 hr).",0.0,False,2,7115 -2020-11-11 03:48:10.147,Tweepy API Search Filter,"I'm currently learning how to use the Tweepy API, and is there a way to filter quoted Tweets and blocked users? I'm trying to stop search from including quoted Tweets and Tweets from blocked users. I have filtered Retweets and replies already. -Here's what I have: -for tweet in api.search(q = 'python -filter:retweets AND -filter:replies', lang = 'en', count = 100):","To filter quotes, use '-filter:quote'",1.2,True,1,7116 -2020-11-11 22:28:13.737,Read a csv file from s3 excluding some values,"How can I read a csv file from s3 without few values. -Eg: list [a,b] -Except the values a and b. I need to read all the other values in the csv. I know how to read the whole csv from s3. sqlContext.read.csv(s3_path, header=True) but how do I exclude these 2 values from the file and read the rest of the file.","You don't. A file is a sequential storage medium. A CSV file is a form of text file: it's character-indexed. Therefore, to exclude columns, you have to first read and process the characters to find the column boundaries. -Even if you could magically find those boundaries, you would have to seek past those locations; this would likely cost you more time than simply reading and ignoring the characters, since you would be interrupting the usual, smooth block-transfer instructions that drive most file buffering. -As the comments tell you, simply read the file as is and discard the unwanted data as part of your data cleansing. 
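In plain Python the read-then-discard step looks like this (a stdlib csv sketch with toy data; with Spark the same idea is expressed as a filter on the loaded DataFrame):

```python
import csv
import io

# Toy stand-in for the S3 file; in practice you would open the real file.
raw = io.StringIO('name,grade\nalice,a\nbob,b\ncarol,c\n')
excluded = {'a', 'b'}

reader = csv.DictReader(raw)
# Read every row, then drop the ones whose value is in the excluded set.
kept = [row for row in reader if row['grade'] not in excluded]
assert kept == [{'name': 'carol', 'grade': 'c'}]
```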
If you need the file repeatedly, then cleanse it once, and use that version for your program.",0.2012947653214861,False,1,7117 -2020-11-12 19:06:49.007,python on windows 10 cannot upgrade modules in virtual environment,"I have been forced to develop python scripts on Windows 10, which I have never done before. -I have installed python 3.9 using the windows installer package into the C:\Program Files\Python directory. -This directory is write protected against regular users and I don't want to elevate to admin, so when using pip globally I use the --user switch and python installs modules to C:\Users\AppData\Roaming\Python\Python39\site-packages and scripts to the C:\Users\AppData\Roaming\Python\Python39\Scripts directory. -I don't know how it sets this weird path, but at least it is working. I have added this path to the %Path% variable for my user. -Problems start when I'm trying to use a virtual environment and upgrade pip: - -I have created a new project on the local machine in C:\Users\Projects and entered the path in the terminal. -python -m venv venv -source venv\Scrips\activate -pip install --upgrade pip - -But then I get the error: -ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access denied: 'C:\Users\\AppData\Local\Temp\pip-uninstall-7jcd65xy\pip.exe' -Consider using the --user option or check the permissions. -So when I try to use the --user flag I get: -ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv. -So my questions are: - -why is it not trying to install everything inside the virtual environment (venv\Scripts\pip.exe)? -how do I get access denied, when this folder is supposed to be owned by my user? - -When using the deprecated easy_install --upgrade pip everything works fine.",I recently had the same issue for some other modules. My solution was simply to downgrade from python 3.9 to 3.7. 
Or make a virtual environment for 3.7, use that and see how it works.,0.3869120172231254,False,1,7118 -2020-11-13 07:25:48.307,How to show a variable value on the webcam video stream? (python OpenCV),"I coded to open webcam video in a new window using OpenCV cv2.VideoCapture(0). -You can display text on webcam video using the cv2.putText() command. But it displays string values only. -How to put varying values in the webcam video that is being displayed in a new window? -For example, if the value of variable p is changing all the time, you can easily display it in the command window by writing print(p). -But how can we display values of p over the webcam video?","You can also show changing variables using the cv2.putText() method. You just need to convert the variable into a string using the str() method. Suppose you want to show a variable x that is for example an integer and is always changing. You can use cv2.putText(frame, str(x), org, font, fontScale, color, thickness, cv2.LINE_AA) to do it (you should fill in org, font, etc.).",1.2,True,1,7119 -2020-11-13 09:59:06.667,Is there any solution regarding the PyQt library not working in Mac OS Big Sur?,"I've done some project using the PyQt library for my class assignment. -And I need to check my application working before I submit it. -Today, 3 hours ago I updated my Mac book OS to Big Sur. -And I found out that the PyQt library doesn't work. It doesn't show any GUI. -Does anyone know how to fix it?","Related to this, after upgrading to BigSur my app stopped launching its window...I am using the official Qt supported binding PySide2/shiboken2 -Upgrading from PySide2 5.12 to 5.15 fixed the issue. -Steps: - -Remove PySide2/shiboken2 -pip3 uninstall PySide2 -pip3 uninstall shiboken2 - -Reinstall -pip3 install PySide2",0.0,False,2,7120 -2020-11-13 09:59:06.667,Is there any solution regarding the PyQt library not working in Mac OS Big Sur?,"I've done some project using the PyQt library for my class assignment. 
-And I need to check my application working before I submit it. -Today, 3 hours ago I updated my Mac book OS to Big Sur. -And I found out that the PyQt library doesn't work. It doesn't show any GUI. -Does anyone know how to fix it?","Rolling back to PyQt5==5.13.0 fixed the issue for me! -You should uninstall PyQt5 and then install it using -pip install PyQt5==5.13.0",0.5457054096481145,False,2,7120 -2020-11-13 23:42:07.327,access methods on one socketio namespace from a different one,"I have a flask application that uses flask-socketio and python-socketio to facilitate communication between a socketio server in the cloud and a display device via a hardware device. -I have a display namespace which exposes the display-facing events, and also uses a separate client class which connects and talks to the server in the cloud. This works well as designed, but now I want to trigger the connection method in my client class from a different namespace. So far I have not been able to get this to work. -What I have tried is adding the display namespace class to the flask context, then passing that into the socketio.on_namespace() method. Then from the other namespace I am grabbing it from current_app and trying to trigger the connection to the cloud server. This returns a 'RuntimeError: working outside of application context' error. -So at this point I'm still researching how to do this correctly, but I was hoping someone has dealt with something like this before, and knows how to access methods on one namespace from a different one.","I found a solution. Instead of instantiating my client class from the display namespace, I instantiate it before I add the namespaces to socketio. 
Then I pass the client object into both namespaces when I call the socketio.on_namespace() method.",0.0,False,1,7121 -2020-11-15 07:45:39.937,pypi package imports python file instead of package,"After pip install package_name from my recently uploaded pypi package -It imports python filename directly after installing, -I wanted to use like below -import package_name or from package_name import python_file -but this doesnt work instead this works -import python_file even package is installed name is package_name -pypi package name package_name and -My directory structure is below - -package_name - -setup.py - -folder1 - -python_file - - - - - -In setup.py , i've used package_dir={'': 'folder_1'} -but even import folder_1 or from folder_1 import python_file didnt worked. -I tried if adding __init__.py inside folder_1, it didnt solved. -I've been following Mark Smith - Publish a (Perfect) Python Package on PyPI, -which told this way , but any idea what is happening, how can i solve it??","So what you actual did is to tell python that the root folder is folder_1. -This is not what you want. -You just need to tell that folder_1 (or actually replace it by package_name, see below) is a package and to declare it using: -packages = {'folder1'}. -Usually, people don't do it but let the function find_packages() to do the work for them by packages=find_packages() -In addition package folder should contain a __init__.py. -to conclude you need a folder structure like below and use find_packages(). -It is OK and even popular choice that the project name and it single main package have the same name. - -project_name - -setup.py -package_name - -__init__.py -python_file.py",1.2,True,1,7122 -2020-11-15 11:43:31.347,Checkers board in kivy,"What it is the best way to make a chessboard for checkers using Kivy framework? -I have board.png, white.png, black.png, white_q.png, black_q.png files already. I wonder how to assign to each black tile on my board.png its own coordinate. 
Should I create 32 transparent widgets placed on black tiles of board.png or it is impossible? And what widget to use for 24 checkers? Any ideas or it is too complicated using Kivy and I should use tkinter?","There are many ways you could do this. It isn't complicated, it's very easy. The best way depends more on how you want to structure your app than anything else. - -I wonder how to assign to each black tile on my board.png its own coordinate - -Set the pos attribute of a widget to control its position, or better in this case use a layout that does what you want. For instance, adding your squares to a GridLayout with the right number of columns will have the right effect without you needing to worry more about positioning them. - -Should I create 32 transparent widgets placed on black tiles of board.png or it is impossible? - -I don't understand what you're asking here. You can make transparent widgets if you want but I don't know why you'd want to. - -And what widget to use for 24 checkers? - -The real question is, what do you want the widget to do? e.g. if you want it to display an image then inherit from Image. -Overall this answer is very generic because your question is very generic. I suggest that if you're stuck, try to ask a more specific question about a task you're struggling with, and give a code example showing where you are now.",0.3869120172231254,False,1,7123 -2020-11-15 20:51:24.707,How to change the value of a variable at run time from another script at remote machine?,"I have a local computer A and remote computer B. Computer A has script client.py Computer B has server.py Script client.py has a variable port. Let's say port = 5535. -I am running client.py on Computer A, which is using the port number for socket communication. I need to change the port number to another port number while the client.py is running so it will switch to another server at runtime after notifying the client to change the port number. 
I am using pyzmq to send data from the client to the server sending a continuous stream of data. -Is this scenario possible and how can I do it?","Yes, it's possible. You may design / modify the (so far unseen) code so as to PUSH any such need to change a port# on-the-fly to the PULL-side, to release the 5535 and use another one. -The PULL-side shall then call .disconnect() and .close() methods, so as to release the said port 5535 ( plus notify that it has done so, perhaps by another PUSH/PULL to the .bind()-locked party, that it can now unbind and close the .bind()-locked port# 5535 too) and next setup a new connection to an announced ""new_port#"", received from the initial notification ( which ought have been already .bind()-locked on the PUSH-side, ought it not? :o) ). -That easy.",1.2,True,1,7124 -2020-11-16 09:47:54.700,without Loops to Sum Range of odd numbers,is there any way to sum odd numbers from 1 to n but without any loops and if there isn't a way how can i create this by fast algorithm to do this task in less than n loops.,"You can try the one below, which loop through from 1 to n, stepping 2 -sum(range(1,n,2))",0.0,False,1,7125 -2020-11-17 04:00:00.753,How do I activate python virtual environment from a different repo?,"So am working in a group project, we are using python and of the code is on GitHub. My question is how do I activate the virtual environment? Do I make one on my own using the ""python virtual -m venv env"" or the one that's on the repo, if there is such a thing. Thanks","virtual env is used to make your original env clean. you can pip install virtualenv and then create a virtual env like virtualenv /path/to/folder then use source /path/to/folder/bin/activate to activate the env. then you can do pip install -r requirements.txt to install dependencies into the env. 
then everything will be installed into /path/to/folder/lib -alternatively, you can use /path/to/folder/bin/pip install or /path/to/folder/bin/python without activating the env.",0.2012947653214861,False,2,7126 -2020-11-17 04:00:00.753,How do I activate python virtual environment from a different repo?,"So I am working on a group project, we are using python and of the code is on GitHub. My question is how do I activate the virtual environment? Do I make one on my own using the ""python virtual -m venv env"" or the one that's on the repo, if there is such a thing. Thanks","Yes, you'll want to create your own with something like: python -m venv venv. The final argument specifies where your environment will live; you could put it anywhere you like. I often have a venv folder in Python projects, and just .gitignore it. -After you have the environment, you can activate it. On Linux: source venv/bin/activate. Once activated, any packages you install will go into it; you can run pip install -r requirements.txt for instance.",0.0,False,2,7126 -2020-11-17 12:28:03.713,Maintaining label encoding across different files in pandas,"I know how to use scikit-learn and pandas to encode my categorical data. I've been using the category codes in pandas for now, which I will later transform into a OneHot encoded format for ML. -My issue is that I need to create a pre-processing pipeline for multiple files with the same data format. I've discovered that using the pandas category codes encoding is not consistent, even if the categories (strings) in the data are identical across multiple files. 
-Is there a way to do this encoding lexicographically so that it's done the same way across all files or is there any specific method that can be used which would result in the same encoding when applied on multiple files?","The LabelEncoder like all other Sklearn-Transformers has three certain methods: - -fit(): Creates the labels given some input data -transform(): Transforms data to the labels of the encoder instance. It must have called fit() before or will throw an error -fit_transform(): That's a convenience-method that will create the labels and transform the data directly. - -I'm guessing you are calling fit_transform everywhere. To fix this, just call the fit-method once (on a superset of all your data because it will throw an error if it encounters a label that was not present in the data you called fit on) and than use the transform method.",0.0,False,1,7127 -2020-11-18 17:55:34.847,Using Python to access DirectShow to create and use Virtual Camera(Software Only Camera),"Generally to create a Virtual Camera we need to create a C++ application and include DirectShow API to achieve this. But with the modules such as -win32 modules and other modules we can use win32 api which lets us use these apis in python. -Can anyone Help sharing a good documentation or some Sample codes for doing this?","There is no reliable way to emulate a webcam on Windows otherwise than supplying a driver. Many applications take simpler path with DirectShow, and emulate a webcam for a subset of DirectShow based applications (in particular, modern apps will be excluded since they don't use DirectShow), but even in this case you have to develop C++ camera enumation code and connect your python code with it.",0.3869120172231254,False,1,7128 -2020-11-19 19:45:23.240,No module names xlrd,"I am working out of R Studio and am trying to replicate what I am doing in R in Python. 
On my terminal, it is saying that I have xlrd already installed but when I try to import the package (import xlrd) in R Studio, it tells me: ""No module named 'xlrd'"". Does anyone know how to fix this?","I have solved this on my own. In your terminal, go to ls -a and this will list out applications on your laptop. If Renviron is there, type nano .Renviron to write to the Renviron file. Find where Python is stored on your laptop and type RETICULATE_PYTHON=(file path where Python is stored). ctrl + x to exit, y to save and then hit enter. Restart R studio and this should work for you.",0.3869120172231254,False,1,7129 -2020-11-20 13:54:11.863,How to Order a fraction of a Crypto (like Bitcoin) in zipline?,"Basically as you all know we can backtest our strategies in Zipline, the problem is that Zipline is developed for stock markets and the minimum order of an asset that can be ordered is 1 in those markets but in crypto markets we are able to order a fraction of a Crypto currency. -So how can I make zipline to order a fraction of Bitcoin base on the available capital?","You can simulate your test on a smaller scale, e.g. on Satoshi level (1e8). -I can think of two methods: - -Increase your capital to the base of 1e8, and leave the input as is. This way you can analyse the result in Satoshi, but you need to correct for the final portfolio value and any other factors that are dependent on the capital base. -Scale the input to Satoshi or any other level and change the handle_data method to either order on Satoshi level or based on your portfolio percentage using order_target_percent method. - -NOTE: Zipline rounds the inputs to 3 decimal points. So re-scaling to Satoshi turns prices that are lower than 5000 to NaN (not considering rounding errors for higher prices). 
My suggestion is to either use 1e5 for Bitcoin or log-scale.",0.0,False,1,7130 -2020-11-21 23:14:34.487,"Pandas, find and delete rows","Been searching for a while in order to understand how to do this basic task without any success which is very strange. -I have a dataset where some of the rows contain '-', I have no clue under which columns these values lie. -How do I search in the whole dataset (including all columns) for '-' and drop the rows containing this value? -thank you!","This is a bit more robust than wwnde's answer, as it will work if some of the columns aren't originally strings: -df.loc[~df.apply(lambda x: any('-' in str(col) for col in x), axis = 1)] -If you have data that's stored as datetime, it will display as having -, but will return an error if you check for inclusion without converting to str first. Negative numbers will also return True once converted to str. If you want different behavior, you'll have to do something more complicated, such as -df.loc[~df.apply(lambda x: any('-' in col if isinstance(col, str) else False for col in x), axis = 1)]",1.2,True,1,7131 -2020-11-22 09:23:23.773,"How to resize a depth map from size [400,400] into size [60,60]?","I have a depth map image which was obtained using a kinect camera. -In that image I have selected a region of size [400,400] and stored it as another image. -Now, I would like to know how to resize this image into a size of [x,y] in python.","I don't recommend to reduce resolution of depth map the same way like it is done for images. Imagine a scene with a small object 5 m before the wall: - -Using bicubic/bilinear algorithms you will get depth of something between the object and the wall. In reality there is just a free space in between. -Using nearest-neighbor interpolation is better but you are ignoring a lot of information and in some cases it may happed that the object just disappears. - -The best approach is to use the Mode function. Divide the original depth map into windows. 
Each window will represent one pixel in the downsized map. For each of them calculate the most frequent depth value. You can use Python's statistics.mode() function.",0.0,False,1,7132 -2020-11-22 16:19:49.853,Raspberry pi python editor,"I was writing code to make a facial recognition, but my code did not work because I was writing on verison 3, do you know how to download python 3 on the raspberry pi?","Linux uses package managers to download packages or programing languages -,raspberry pi uses apt(advanced package tool) -This is how you use APT to install python3: -sudo apt-get install python3 -OR -sudo apt install python3 -and to test if python3 installed correctly type: -python3 -If a python shell opens python3 has been installed properly",1.2,True,1,7133 -2020-11-23 15:05:15.013,how to authorize only flutter app in djano server?,"While I'm using Django as my backend and flutter as my front end. I want only the flutter app to access the data from django server. Is there any way to do this thing? -Like we use allowed host can we do something with that?",You can use an authentication method for it. Only allow for the users authenticated from your flutter app to use your backend.,0.3869120172231254,False,1,7134 -2020-11-23 17:14:54.653,pymongo getTimestamp without ObjectId,"in my mongodb, i have a collection where the docs are created not using ObjectId, how can I get the timestamp (generation_time in pymongo) of those docs? Thank you","If you don't store timestamps in documents, they wouldn't have any timestamps to retrieve. -If you store timestamps in some other way than via ObjectId, you would retrieve them based on how they are stored.",1.2,True,1,7135 -2020-11-24 05:55:23.327,using a pandas dataframe without headers to write to mysql with to_sql,"I have a dataframe created from an excel sheet (the source). -The excel sheet will not have a header row. -I have a table in mysql that is already created (the target). 
It will always be the exact same layout as the excel sheet. -source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows, header=None) -db_engine = [function the returns my mysql engine] -source_data.to_sql(name=table_name, con=db_engine, schema=schema_name, if_exists='append', index=False) -This fails with an error due to pandas using numbers as column names in the insert statement.. -[SQL: INSERT INTO [tablename] (0, 1) VALUES (%(0)s, %(1)s)] -error=(pymysql.err.OperationalError) (1054, ""Unknown column '0' in 'field list' -how can i get around this? Is there a different insert method i can use? do i really have to load up the dataframe with the proper column names from the table?","Maybe after importing the data into Pandas, you can rename the columns to something that is not a number, e.g. ""First"", ""Second"", etc. or [str(i) for i in range(len(source_data))] -This would resolve the issue of SQL being confused by the numerical labels.",0.0,False,2,7136 -2020-11-24 05:55:23.327,using a pandas dataframe without headers to write to mysql with to_sql,"I have a dataframe created from an excel sheet (the source). -The excel sheet will not have a header row. -I have a table in mysql that is already created (the target). It will always be the exact same layout as the excel sheet. -source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows, header=None) -db_engine = [function the returns my mysql engine] -source_data.to_sql(name=table_name, con=db_engine, schema=schema_name, if_exists='append', index=False) -This fails with an error due to pandas using numbers as column names in the insert statement.. -[SQL: INSERT INTO [tablename] (0, 1) VALUES (%(0)s, %(1)s)] -error=(pymysql.err.OperationalError) (1054, ""Unknown column '0' in 'field list' -how can i get around this? Is there a different insert method i can use? 
do i really have to load up the dataframe with the proper column names from the table?","Found no alternatives.. went with adding the column names to the data frame during the read.. -So first i constructed the list of column names -sql = (""select [column_name] from [table i get my metadata from];"") -db_connection = [my connection for sqlalchemy] -result = db_connection.execute(sql) -column_names = [] -for column in result: - column_names.append(column[0]) -And then i use that column listing in the read command: -source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows,header=None, names=column_names) -the to_sql statement then runs without error.",0.0,False,2,7136 -2020-11-24 18:45:59.360,Getting skeletal data in pykinect (xbox 360 version),"I'm having trouble finding any sort of documentation or instruction for pykinect, specifically for the xbox 360 version of the kinect. how do I get skeletal data or where do I find the docs?? if I wasn't clear here please let me know!","To use python with the kinect 360 you need the follwing: -python 2.7 -windows kinect sdk 1.8 -pykinect - NOT pykinect2",-0.3869120172231254,False,1,7137 -2020-11-25 09:51:23.410,How to implement a MIDI keyboard into python,"Looking to create a GUI based 25-key keyboard using PYQT5, which can support MIDI controller keyboards. However, I don’t know where to start (What libraries should I use and how do I go about finding a universal method to supporting all MIDI controller keyboards). I plan to potentially use the Mido Library, or PyUSB but I am still confused as to how to make this all function. Any starting guides would be much appreciated.","MIDI is a universal standard shared by all manufacturers, so you don't have to worry about ""supporting all MIDI controller keyboards"", you just have to worry about supporting the MIDI studio of your system. -You'll have to scan your environment to get the existing MIDI ports. 
With the list of existing ports you can let the user choose to which port he wants to send the events generated by your keyboard and/or from which port he wants to receive events that will animate the keyboard (for instance from a physical MIDI keyboard connected to your computer), possibly all available input ports. -To support input events, you'll need a kind of callback prepared to receive the incoming notes on and off (which are the main relevant messages for a keyboard) at any time. That also means that you have to filter the received events that are not of those types because, in MIDI, a stream of events is subject to contain many kinds of other events mixed with the notes (pitch bend, controllers, program change, and so on). -Finally notice that MIDI doesn't produce any sound by itself. So if you plane to hear something when you play on your keyboard, the produced MIDI events should be send to a device subject to produce the sound (for instance a synthesizer or virtual instrument) via a port that this device receives. -For the library, Mido seems to be a pretty good choice : it has all the features needed for such a project.",0.6730655149877884,False,1,7138 -2020-11-25 11:44:47.813,flask / flask_restful : calling routes in one blueprint from another route in a different blueprint,"I'm working on a very basic Web Application (built using flask and flask_restful) with unrelated views split into different blueprints. -Different blueprints deal with a different instance of a class. -Now I want to design a page with status(properties and value) of all the classes these blueprints are dealing with. The page is a kind of a control panel of sorts. -For this I want to call all the status routes (defined by me) in different blueprints from a single route(status page route) in a different blueprint. I have been searching for a while on how to make internal calls in Flask / Flask_restful, but haven't found anything specifically for this. So.... 
- -I would love to find out how to make these internal calls. -Also, is there any problem or convention against making internal calls. -I also thought of making use of the requests calls using Requests module, but that feels more like a hack. Is this the only option I got??? If yes, is there a way I dont have to hard code the url in them like using something close to url_for() in flask?? - -Thanks.. :)","I would love to find out how to make these internal calls. - - -Ans: use url_for() or Requests module, as u do for any other post or get method. - - -Also, is there any problem or convention against making internal calls ? - - -Ans: I didn't find any even after intensive searching. - - -I also thought of making use of the requests calls using Requests module, but that feels more like a hack. Is this the only option I -got??? If yes, is there a way I don't have to hard code the url in -them like using something close to url_for() in flask?? - - -Ans: If you don't wanna use Requests module, url_for() is the simplest and cleanest option there is. Hard coded path is the only option.",1.2,True,1,7139 -2020-11-25 19:10:03.817,"When doing runserver, keep getting new data loaded in my database","Every time I do a: python manage.py runserver -And I load the site, python gets data and puts this in my database. -Even when I already filled some info in the database. Enough to get a view of what I am working on. -Now it is not loading the information I want and instead putting in new information to add to the database so it can work with some data. -What is the reason my data in the database is not being processed? 
-And how do I stop new data being loaded into the database.","Maybe it is happening due to the migration file. Sometimes it happens when you migrate models into database query language with the same number: -python manage.py makemigrations 0001 -This ""0001"" has to be changed every time. -To solve your problem, delete the migrations file once, then migrate all models again and try. -Tell me if this works",0.0,False,1,7140 -2020-11-26 13:38:11.537,How to find the stitch (seam) position between two images with OpenCV?,"I find many examples of passing a list of images, and returning a stitched image, but not much information about how these images have been stitched together. -In a project, we have a camera fixed still, pointing down, and conveyors pass underneath. The program detects objects and starts recording images. However some objects do not enter completely in the image, so we need to capture multiple images and stitch them together, but we need to know the position of the stitched image because there are other sensors synchronized with the captured image, and we need to also synchronize their readings within the stitched image (i.e. we know where the reading is within each single capture, but not if captures are stitched together). -In short, given a list of images, how can we find the coordinates of each image relative to the others?","Basically, while stitching, correspondences between two (or more) images are set up. This is done with some constant key points. After finding those key points the images are warped or transformed & put together, i.e. stitched. -Now those key points could be set/noted as per a global coordinate system (containing all images). Then one can get the position after stitching too.",0.0,False,1,7141 -2020-11-27 03:21:57.860,Unable to change data types of certain columns read from xlsx by Pandas,"I import an Excel file with pandas and try to convert all columns to float64 for further manipulation. 
I have several columns that have a type like: -0 -column_name_1 float64 -column_name_1 float64 -dtype: object -and I am unable to do any calculations. May I ask how I could change this column type to float64?",I just solved it yesterday; it is because I have two identical columns in the DataFrame which causes pd['something'] to automatically combine the two columns and then it becomes an object instead of float64,0.0,False,1,7142 -2020-11-28 07:18:31.807,How to update a py-made exe file from my pc to people I have sent it to?,What I mean is that I have a py file which I have converted to an exe file. So I wanted to know how to make it so that in case I decide to update the py file the same changes occur in the file of anyone I have sent it to (whether the exe or the py file).,"Put your version of the program on a file share, or make it otherwise available on the internet, and build an update check into the program, so that it checks the URL for a new version every time it is started. -I guess this is the most common way to do something like that.",0.0,False,1,7143 -2020-11-29 05:47:22.927,Is there any way to return the turtle object that is clicked?,"I'm making a matching game where there are several cards faced upside down and the user has to match the right pairs. The cards faced upside down are all turtle objects. -For eg. if there are 8 faced down cards, there are 8 turtle objects. -I'm having some trouble figuring out how to select the cards since I don't know which turtle is associated with the particular card selected by the user. I do have a nested list containing all turtles and those with similar images are grouped together. Is there any way to return the turtle object selected by the user?","If I got your question, one way to do so is that you should provide some id attribute to each turtle which will identify it. 
Then you can check easily which turtle was selected by the user.",0.0,False,1,7144 -2020-11-29 10:11:25.313,Nativescript can't find six,"I installed Nativescript successfully and it works when running ns run android. -However, when I try to use ns run ios I get the ominous WARNING: The Python 'six' package not found.-error -Same happens, when I try to use ns doctor. -I tried EVERYTHING that I found on the web. Setting PATH, PYTHONPATH, re-install python, six and everything - nothing helped. -Re-install of six tells me Requirement already satisfied. -Any ideas how to make this work??? -I'm on MacOS Catalina.","It seems I have a total mess with paths and python installations on my Mac. -I found like 6 different pip-paths and like 4 different python paths. -Since I have no idea which ones I can delete, I tried installing six with all pip-versions I found and that helped. -How to clean up this mess is likely a subject for another thread :)",1.2,True,1,7145 -2020-12-01 03:18:45.860,I have different excel files in the same folder,"I have different excel files in the same folder, in each of them there are the same sheets. I need to select the last sheet of each file and join them all by the columns (that is, form a single table). The columns of all files are named the same. I think it is to identify the dataframe of each file and then paste them. 
But I do not know how","Just do what Recessive said and use a for loop to read the excel files one by one and do the following: -excel_files = os.listdir(filepath) -for file in excel_files: -read excel file sheet -save specific column to variable -end of loop -concatenate each column from different variables to one dataframe",0.0,False,1,7146 -2020-12-01 16:05:05.240,Added more parameters to smtplib.SMTP in python,"I'm trying to make a script that sends an email with python using smtp.smtplib; almost all examples I found while googling show how to call this function with only the smtpserver and port parameters. -I want to add other parameters: domain and binding IP -I tried this: server = smtplib.SMTP(smtpserver, 25,'mydomain.com',5,'myServerIP') -I got this as error: TypeError: init() takes at most 5 arguments (6 given) -Can you suggest a way to do this?",This error is likely because the parameters are invalid (there is one too many). Try looking at the smtplib docs to see what parameters are valid,0.0,False,1,7147 -2020-12-02 00:32:33.690,How could I delete several lines of code at the same time in Jupyter notebook?,I want to delete/tab several lines of code at the same time in Jupyter notebook. How could I do that? Are there hotkeys for that?,"While in the notebook, click to the left of the grey input box where it says In []: (You'll see the highlight color go from green to blue) -While it's blue, hold down shift and use your up arrow key to select the rows above or below -Press D twice -Click back into the cell and the highlight will turn back to green.",0.3869120172231254,False,1,7148 -2020-12-02 03:27:38.637,python : Compute columns of data frames and add them to new columns,"I want to make a new column by calculating existing columns. -For example df -df -no data1 data2 -1 10 15 -2 51 46 -3 36 20 -......
-i want to make this -new_df -no data1 data2 data1/-2 data1/2 data2/-2 data2/2 -1 10 15 -5 5 -7.5 7.5 -2 51 46 -25.5 25.5 -23 23 -3 36 20 -18 18 -9 9 -but i don't know how to make this as efficient as possible","To create a new df column based on the calculations of two or more other columns, you would have to define a new column and set it equal to your equation. For example: -df['new_col'] = df['col_1'] * df['col_2']",0.0,False,1,7149 -2020-12-02 08:33:53.520,How to decrypt django pbkdf2_sha256 algorthim password?,"I need user_password plaintext using Django. I tried many ways to get plaintext in user_password. but It's not working. So, I analyzed how the Django user password is generated. it's using the make_password method in the Django core model. In this method generating the hashed code using( pbkdf2_sha256) algorthm. If any possible to decrypt the password. -Example: -pbkdf2_sha256$150000$O9hNDLwzBc7r$RzJPG76Vki36xEflUPKn37jYI3xRbbf6MTPrWbjFrgQ=","As you have already seen, Django uses hashing method like SHA256 in this case. Hashing mechanisms basically use lossy compression method, so there is no way to decrypt hashed messages as they are irreversible. Because it is not encryption and there is no backward method like decryption. It is safe to store password in the hashed form, as only creator of the password should know the original password and the backend system just compares the hashes. -This is normal situation for most backend frameworks. Because this is made for security reasons so far. Passwords are hashed and saved in the database so that even if the malicious user gets access to the database, he can't find usefull information there or it will be really hard to crack the hashes with some huge words dictionary.",1.2,True,1,7150 -2020-12-02 10:02:42.763,Find answer to tcp packet in PCAP with scapy,"I parse pcap file with scapy python , and there is TCP packet in that pcap that I want to know what is the answer of this pcaket, How can I do that? 
-For example : client and server TCP stream -client-> server : ""hi"" -server-> client : ""how are you"" -When I get ""hi"" packet (with scapy) how can I get ""how are you"" ?","Look at the TCP sequence number of the message from the client. Call this SeqC. -Then look for the first message from the client whose TCP acknowledgement sequence is higher than SeqC (usually it will be equal to SeqC plus the size of the client's TCP payload). Call this PacketS1. -Starting with PacketS1, collect the TCP payloads from all packets until you see a packet sent by the server with the TCP PSH (push) flag set. This suggests the end of the application-layer message. Call these payloads PayloadS1 to PayloadSN. -Concatenate PayloadS1 to PayloadSN. This is the likely application-layer response to the client message.",0.6730655149877884,False,1,7151 -2020-12-02 14:42:06.810,How do I keep changes made within a python GUI?,"For, example If a button click turns the background blue, or changes the button's text, how do I make sure that change stays even after i go to other frames?",One way to go is to create a configuration file (e.g. conf.ini) where you store your changes or apply them to other dialogs. It will allow you to keep changes after an app restarted.,0.0,False,1,7152 -2020-12-04 09:56:10.630,raspberry pi using a webcam to output to a website to view,"I am currently working on a project in which I am using a webcam attached to a raspberry pi to then show what the camera is seeing through a website using a client and web server based method through python, However, I need to know how to link the raspberry pi to a website to then output what it sees through the camera while then also outputting it through the python script, but then i don't know where to start -If anyone could help me I would really appreciate it. 
-Many thanks.","So one way to do this with python would be to capture the camera image using opencv in a loop and display it to a website hosted on the Pi using a python frontend like flask (or some other frontend). However as others have pointed out, the latency on this would be so bad any processing you wish to do would be nearly impossible. -If you wish to do this without python, take a look at mjpg-streamer, that can pull a video feed from an attached camera and display it on a localhost website. The quality is fairly good on localhost. You can then forward this to the web (if needed) using port forwarding or an application like nginx. -If you want to split the recorded stream into 2 (to forward one to python and to broadcast another to a website), ffmpeg is your best bet, but the FPS and quality would likely be terrible.",0.0,False,1,7153 -2020-12-04 10:21:30.123,"Does python mne raw object represent a single trail? if so, how to average across many trials?","I'm new to python MNE and EEG data in general. -From what I understand, MNE raw object represent a single trial (with many channels). Am I correct? What is the best way to average data across many trials? -Also, I'm not quite sure what the mne.Epochs().average() represents. Can anyone pls explain? -Thanks a lot.","From what I understand, MNE raw object represent a single trial (with many channels). Am I correct? - -An MNE raw object represents a whole EEG recording. If you want to separate the recording into several trials, then you have to transform the raw object into an ""epoch"" object (with mne.Epochs()). You will receive an object with the shape (n_epochs, n_channels and n_times). - -What is the best way to average data across many trials? Also, I'm not quite sure what the mne.Epochs().average() represents. Can anyone pls explain? 
- -About ""mne.Epochs().average()"": if you have an ""epoch"" object and want to combine the data of all trials into one whole recording again (for example, after you performed certain pre-processing steps on the single trials or removed some of them), then you can use the average function of the class. Depending on the method you're choosing, you can calculate the mean or median of all trials for each channel and obtain an object with the shape (n_channels, n_time). -Not quite sure about the best way to average the data across the trials, but with mne.epochs.average you should be able to do it with ease. (Personally, I always calculated the mean for all my trials for each channel. But I guess that depends on the problem you try to solve)",1.2,True,1,7154 -2020-12-05 19:15:10.533,How can i have 2D bounding box on a sequence of RGBD frames from a 3D bounding box in point clouds?,"i have a 3d point clouds of my object by using Open3d reconstruction system ( makes point clouds by a sequence of RGBD frames) also I created a 3d bounding box on the object in point clouds -my question is how can I have 2d bounding box on all of the RGB frames at the same coordinates of 3d bounding box? -my idea Is to project 3d bb to 2d bb but as it is clear, the position of the object is different in each frame, so I do not know how can i use this approach? -i appreciate any help or solution, thanks","calculate points for the eight corners of your box -transform those points from the world frame into your chosen camera frame -project the points, apply lens distortion if needed. - -OpenCV has functions for some of these operations and supports you with matrix math for the rest. -I would guess that Open3d gives you pose matrices for all the cameras. 
you use those to transform from the world coordinate frame to any camera's frame.",1.2,True,1,7155 -2020-12-05 23:26:35.533,Create a schedule where a group of people all talk to each other - with restrictions,"Problem statement -I would like to achieve the following: -(could be used for example to organize some sort of a speeddating event for students) -Create a schedule so people talk to each other one-on-one and this to each member of the group. -but with restrictions. - -Input: list of people. (eg. 30 people) -Restrictions: some of the people should not talk to each other (eg. they know each other) -Output: List of pairs (separated into sessions) just one solution is ok, no need to know all of the possible outcomes - -Example -eg. Group of 4 people - -John -Steve -Mark -Melissa - -Restrictions: John - Mellisa -> NO -Outcome -Session one - -John - Steve -Mark - Melissa - -Session two - -John - Mark -Steve - Melissa - -Session three - -Steve - Mark - -John and Mellisa will not join session three as it is restriction. -Question -Is there a way to approach this using Python or even excel? -I am especially looking for some pointers how this problem is called as I assume this is some Should I look towards some solver? Dynamic programming etc?","Your given information is pretty generous, you have a set of all the students, and a set of no-go pairs (because you said it yourself, and it makes it easy to explain, just say this is a set of pairs of students who know each other). 
So we can iterate through our students list creating random pairings so long as they do not exist in our no-go set, then expand our no-go set with them, and recurse on the remaining students until we can not create any pairs that do not exist already in the no-go set (we have pairings so that every student has met all students).",0.0,False,1,7156 -2020-12-06 10:22:21.857,Is there any way to know the command-line options available for a separate program from Python?,"I am relatively new to the python's subprocess and os modules. So, I was able to do the process execution like running bc, cat commands with python and putting the data in stdin and taking the result from stdout. -Now I want to first know that a process like cat accepts what flags through python code (If it is possible). -Then I want to execute a particular command with some flags set. -I googled it for both things and it seems that I got the solution for second one but with multiple ways. So, if anyone know how to do these things and do it in some standard kind of way, it would be much appreciated.","In the context of processes, those flags are called arguments, hence also the argument vector called argv. Their interpretation is 100% up to the program called. In other words, you have to read the manpages or other documentation for the programs you want to call. -There is one caveat though: If you don't invoke a program directly but via a shell, that shell is the actual process being started. It then also interprets wildcards. For example, if you run cat with the argument vector ['*'], it will output the content of the file named * if it exists or an error if it doesn't. 
If you run /bin/sh with ['-c', 'cat *'], the shell will first resolve * into all entries in the current directory and then pass these as separate arguments to cat.",1.2,True,1,7157 -2020-12-06 10:45:49.563,Pandas: How to calculate the percentage of one column against another?,"I am just trying to calculate the percentage of one column against another's total, but I am unsure how to do this in Pandas so the calculation gets added into a new column. -Let's say, for argument's sake, my data frame has two attributes: - -Number of Green Marbles -Total Number of Marbles - -Now, how would I calculate the percentage of the Number of Green Marbles out of the Total Number of Marbles in Pandas? -Obviously, I know that the calculation will be something like this: - -(Number of Green Marbles / Total Number of Marbles) * 100 - -Thanks - any help is much appreciated!",df['percentage columns'] = (df['Number of Green Marbles']) / (df['Total Number of Marbles'] ) * 100,0.0,False,1,7158 -2020-12-06 15:58:58.593,int to str in python removes leading 0s,"So right now, I'm making a sudoku solver. You don't really need to know how it works, but one of the checks I take so the solver doesn't break is to check if the string passed (The sudoku board) is 81 characters (9x9 sudoku board). An example of the board would be: ""000000000000000000000000000384000000000000000000000000000000000000000000000000002"" -this is a sudoku that I've wanted to try since it only has 4 numbers. but basically, when converting the number to a string, it removes all the '0's up until the '384'. Does anyone know how I can stop this from happening?","There is no way to prevent it from happening, because that is not what is happening. Integers cannot remember leading zeroes, and something that does not exist cannot be removed. The loss of zeroes does not happen at conversion of int to string, but at the point where you parse the character sequence into a number in the first place. 
-The solution: keep the input as string until you don't need the original formatting any more.",1.2,True,1,7159 -2020-12-06 18:29:12.933,How does urllib3 determine which TLS extensions to use?,"I'd like to modify the Extensions that I send in the client Hello packet with python. -I've had a read of most of the source code found on GitHub for urllib3 but I still don't know how it determines which TLS extensions to use. -I am aware that it will be quite low level and the creators of urllib3 may just import another package to do this for them. If this is the case, which package do they use? -If not, how is this determined? -Thanks in advance for any assistance.",The HTTPS support in urllib3 uses the ssl package which uses the openssl C-library. ssl does not provide any way to directly fiddle with the TLS extension except for setting the hostname in the TLS handshake (i.e. server_name extension aka SNI).,1.2,True,1,7160 -2020-12-07 22:29:46.250,tkinter in Pycharm (python version 3.8.6),"I'm using Pycharm on Windows 10. -Python version: 3.8.6 -I've checked using the CMD if I have tkinter install python -m tkinter. It says I have version 8.6 -Tried: - -import tkinter. -I get ""No module named 'tkinter' "" - -from tkinter import *. -I get ""Unresolved reference 'tkinter'"" - -Installed future package but that didn't seem to change the errors. - - -Any suggestions on how to fix this issue? -Thank you!","You can try ""pip install tkinter"" in cmd",-0.2012947653214861,False,2,7161 -2020-12-07 22:29:46.250,tkinter in Pycharm (python version 3.8.6),"I'm using Pycharm on Windows 10. -Python version: 3.8.6 -I've checked using the CMD if I have tkinter install python -m tkinter. It says I have version 8.6 -Tried: - -import tkinter. -I get ""No module named 'tkinter' "" - -from tkinter import *. -I get ""Unresolved reference 'tkinter'"" - -Installed future package but that didn't seem to change the errors. - - -Any suggestions on how to fix this issue? 
Thank you!","Just check the project settings; sometimes PyCharm doesn't use the same interpreter.",-0.2012947653214861,False,2,7161 -2020-12-07 23:17:05.743,how to convert a string to list I have a string how to convert it to a list?,"I have a string like: string = ""[1, 2, 3]"" -I need to convert it to a list like: [1, 2, 3] -I've tried using regular expression for this purpose, but to no avail","Try -[int(x) for x in arr.strip(""[]"").split("", "")], or if your numbers are floats you can do [float(x) for x in arr.strip(""[]"").split("", "")]",0.2655860252697744,False,1,7162 -2020-12-08 14:02:34.340,2D numpy array showing as 1D,"I have a numpy ndarray train_data of length 200, where every row is another ndarray of length 10304. -However when I print np.shape(train_data), I get (200, 1), and when I print np.shape(train_data[0]) I get (1, ), and when I print np.shape(train_data[0][0]) I get (10304, ). -I am quite confused with this behavior as I supposed the first np.shape(train_data) should return (200, 10304). -Can someone explain to me why this is happening, and how I could get the array to be in the shape of (200, 10304)?","I'm not sure why that's happening, try reshaping the array: -B = np.reshape(A, (-1, 2))",0.0,False,1,7163 -2020-12-08 16:51:13.820,Multiple threads sending over one socket simultaneously?,"I have two python programs. Program 1 displays videos in a grid with multiple controls on it, and Program 2 performs manipulations to the images and sends it back depending on the control pressed in Program 1. -Each video in the grid is running in its own thread, and each video has a thread in Program 2 for sending results back. -I'm running this on the same machine though and I was unable to get multiple socket connections working to and from the same address (localhost). If there's a way of doing that - please stop reading and tell me how!
-I currently have one socket sitting independent of all of my video threads in Program 1, and in Program 2 I have multiple threads sending data to the one socket in an array with a flag for which video the data is for. The problem is when I have multiple threads sending data at the same time it seems to scramble things and stop working. Any tips on how I can achieve this?","Regarding If there's a way of doing that - please stop reading and tell me how!. -There's a way of doing it, assuming you are on Linux or using WSL on Windows, you could use the hostname -I commend which will output an IP that looks like 192.168.X.X. -You can use that IP in your python program by binding your server to that IP instead of localhost or 127.0.0.1.",0.0,False,1,7164 -2020-12-08 20:00:28.467,"Grabbing values (Name, Address, Phone, etc.) from directory websites like TruePeopleSearch.com with Chrome Developer Tool","Good day everybody. I'm still learning parsing data with Python. I'm now trying to familiarize myself with Chrome Developer Tools. My question is when inspecting a directory website like TruePeopleSearch.com, how do I copy or view the variables that holds the data such as Name, Phone, and Address? I tried browsing the tool, but since I'm new with the Developer tool, I'm so lost with all the data. I would appreciate if the experts here points me to the right direction. -Thank you all!","Upon further navigating the Developer Console, I learned that these strings are located in these variables, by copying the JS paths. 
-NAME & AGE -document.querySelector(""#personDetails > div:nth-child(1)"").innerText -ADDRESS -document.querySelector(""#personDetails > div:nth-child(4)"").innerText -PHONE NUMBERS -document.querySelector(""#personDetails > div:nth-child(6)"").innerText -STEP 1 -From the website, highlight are that you need to inspect and click ""Inspect Element"" -STEP 2 -Under elements, right-click the highlighted part and copy the JS path -STEP 3 -Navigate to console and paste the JS path and add .innerText and press Enter",0.0,False,1,7165 -2020-12-09 07:30:40.480,Can you plot the accuracy graph of a pre-trained model? Deep Learning,"I am new to Deep Learning. I finished training a model that took 8 hours to run, but I forgot to plot the accuracy graph before closing the jupyter notebook. -I need to plot the graph, and I did save the model to my hard-disk. But how do I plot the accuracy graph of a pre-trained model? I searched online for solutions and came up empty. -Any help would be appreciated! Thanks!","What kind of framework did you use and which version? In the future problem, you may face, this information can play a key role in the way we can help you. -Unfortunately, for Pytorch/Tensorflow the model you saved is likely to be saved with only the weights of the neurons, not with its history. Once Jupyter Notebook is closed, the memory is cleaned (and with it, the data of your training history). -The only thing you can extract is the final loss/accuracy you had. -However, if you regularly saved a version of the model, you can load them and compute manually the accuracy/loss that you need. Next, you can use matplotlib to reconstruct the graph. -I understand this is probably not the answer you were looking for. However, if the hardware is yours, I would recommend you to restart training. 
8h is not that much to train a model in deep learning.",0.0,False,1,7166 -2020-12-09 13:03:41.490,"How do I handle communication between object instances, or between modules?","I appear to be missing some fundamental Python concept that is so simple that no one ever talks about it. I apologize in advance for likely using improper description - I probably don't know enough to ask the question correctly. -Here is a conceptual dead end I have arrived at: -I have an instance of Class Net, which handles communicating with some things over the internet. -I have an instance of Class Process, which does a bunch of processing and data management -I have an instance of Class Gui, which handles the GUI. -The Gui instance needs access to Net and Process instances, as the callbacks from its widgets call those methods, among other things. -The Net and Process instances need access to some of the Gui instances' methods, as they need to occasionally display stuff (what it's doing, results of queries, etc) -How do I manage it so these things talk to each other? Inheritance doesn't work - I need the instance, not the class. Besides, inheritance is one way, not two way. -I can obviously instantiate the Gui, and then pass it (as an object) to the others when they are instantiated. But the Gui then won't know about the Process and Net instances. I can of course then manually pass the Net and Process instances to the Gui instance after creation, but that seems like a hack, not like proper practice. Also the number of interdependencies I have to manually pass along grows rather quickly (almost factorially?) with the number of objects involved - so I expect this is not the correct strategy. -I arrived at this dead end after trying the same thing with normal functions, where I am more comfortable. Due to their size, the similarly grouped functions lived in separate modules, again Net, Gui, and Process. The problem was exactly the same. 
A 'parent' module imports 'child' modules, and can then can call their methods. But how do the child modules call the parent module's methods, and how do they call each other's methods? Having everything import everything seems fraught with peril, and again seems to explode as more objects are involved. -So what am I missing in organizing my code that I run into this problem where apparently all other python users do not?","The answer to this is insanely simple. -Anything that needs to be globally available to other modules can be stored its own module, global_param for instance. Every other module can import global_param, and then use and modify its contents as needed. This avoids any issues with circular importing as well. -Not sure why it took me so long to figure this out...",0.3869120172231254,False,1,7167 -2020-12-09 18:38:18.553,"On single gpu, can TensorFlow train a model which larger than GPU memory?","If I have a single GPU with 8GB RAM and I have a TensorFlow model (excluding training/validation data) that is 10GB, can TensorFlow train the model? -If yes, how does TensorFlow do this? -Notes: - -I'm not looking for distributed GPU training. I want to know about single GPU case. -I'm not concerned about the training/validation data sizes.","No you can not train a model larger than your GPU's memory. (there may be some ways with dropout that I am not aware of but in general it is not advised). Further you would need more memory than even all the parameters you are keeping because your GPU needs to retain the parameters along with the derivatives for each step to do back-prop. -Not to mention the smaller batch size this would require as there is less space left for the dataset.",0.0,False,1,7168 -2020-12-09 19:13:03.913,How would I use a bot to send multiple reactions on one message? Discord.py,this is kind of a dumb question but how would I make a discord.py event to automatically react to a message with a bunch of different default discord emojis at once. 
I am new to discord.py,You have to use the on_message event. It's a default d.py function and runs automatically.",0.0,False,1,7169 -2020-12-10 05:08:39.017,How can I get my server to UDP multicast to clients across the internet? Do I need a special multicast IP address?,"I am creating a multiplayer game and I would like the communication between my server program (written in python) and the clients (written in c# - Unity) to happen via UDP sockets. -I recently came across the concept of UDP Multicast, and it sounds like it could be much better for my use case as opposed to using UDP Unicast , because my server needs to update all of the clients (players) with the same content every interval. So, rather than sending multiple identical packets to all the clients with UDP unicast, I would like to be able to only send one packet to all the clients using multicast, which sounds much more efficient. -I am new to multicasting and my questions are: -How can I get my server to multicast to clients across the internet? -Do I need my server to have a special public multicast IP address? If so how do I get one? -Is it even possible to multicast across the internet? or is multicasting available only within my LAN? -And what are the pros and cons with taking the multicast approach? -Thank you all for your help in advance!!","You can't multicast on the Internet. Full stop. -Basically, multicast is only designed to work when there's someone in charge of the whole network to set it up. As you noted, that person needs to assign the multicast IP addresses, for example.",1.2,True,1,7170 -2020-12-10 07:37:54.630,Create symlink on a network drive to a file on same network drive (Win10),"Problem statement: -I have a python 3.8.5 script running on Windows 10 that processes large files from multiple locations on a network drive and creates .png files containing graphs of the analyzed results. The graphs are all stored in a single destination folder on the same network drive.
It looks something like this -Source files: -\\drive\src1\src1.txt -\\drive\src2\src2.txt -\\drive\src3\src3.txt -Output folder: -\\drive\dest\out1.png -\\drive\dest\out2.png -\\drive\dest\out3.png -Occasionally we need to replot the original source file and examine a portion of the data trace in detail. This involves hunting for the source file in the right folder. The source file names are longish alphanumerical strings so this process is tedious. In order to make it less tedious I would like to creaty symlinks to the orignal source files and save them side by side with the .png files. The output folder would then look like this -Output files: -\\drive\dest\out1.png -\\drive\dest\out1_src.txt -\\drive\dest\out2.png -\\drive\dest\out2_src.txt -\\drive\dest\out3.png -\\drive\dest\out3_src.txt -where \\drive\dest\out1_src.txt is a symlink to \\drive\src1\src1.txt, etc. -I am attempting to accomplish this via -os.symlink('//drive/dest/out1_src.txt', '//drive/src1/src1.txt') -However no matter what I try I get - -PermissionError: [WinError 5] Access is denied - -I have tried running the script from an elevated shell, enabling Developer Mode, and running -fsutil behavior set SymlinkEvaluation R2R:1 -fsutil behavior set SymlinkEvaluation R2L:1 -but nothing seems to work. There is absolutely no problem creating the symlinks on a local drive, e.g., -os.symlink('C:/dest/out1_src.txt', '//drive/src1/src1.txt') -but that does not accomplish my goals. I have also tried creading links on the local drive per above then then copying them to the network location with -shutil.copy(src, dest, follow_symlinks = False) -and it fails with the same error message. Attempts to accomplish the same thing directly in the shell from an elevated shell also fail with the same ""Access is denied"" error message -mklink \\drive\dest\out1_src.txt \\drive\src1\src1.txt -It seems to be some type of a windows permission error. 
However when I run fsutil behavior query SymlinkEvaluation in the shell I get - -Local to local symbolic links are enabled. -Local to remote symbolic links are enabled. -Remote to local symbolic links are enabled. -Remote to remote symbolic links are enabled. - -Any idea how to resolve this? I have been googling for hours and according to everything I am reading it should work, except that it does not.","Open secpol.msc on PC where the newtork share is hosted, navigate to Local Policies - User Rights Assignment - Create symbolic links and add account you use to connect to the network share. You need to logoff from shared folder (Control Panel - All Control Panel Items - Credential Manager or maybe you have to reboot both computers) and try again.",0.0,False,1,7171 -2020-12-11 11:57:46.063,How to downgrade python from 3.9.0 to 3.6,"I'm trying to install PyAudio but it needs a Python 3.6 installation and I only have Python 3.9 installed. I tried to switch using brew and pyenv but it doesn't work. -Does anyone know how to solve this problem?","You may install multiple versions of the same major python 3.x version, as long as the minor version is different in this case x here refers to the minor version, and you could delete the no longer needed version at anytime since they are kept separate from each other. -so go ahead and install python 3.6 since it's a different minor from 3.9, and you could then delete 3.9 if you would like to since it would be used over 3.6 by the system, unless you are going to specify the version you wanna run.",1.2,True,1,7172 -2020-12-11 16:40:32.080,Running functions siultaneoulsy in python,"I am making a small program in which I need a few functions to check for something in the background. -I used module threading and all those functions indeed run simultaneously and everything works perfectly until I start adding more functions. 
As the threading module makes new threads, they all stay within the same process, so when I add more, they start slowing each other down. -The problem is not with the CPU as it's utilization never reaches 100% (i5-4460). I also tried the multiprocessing module which creates a new process for function, but then it seems that variables can't be shared between different processes or I don't know how. (newly started process for each function seems to take existing variables with itself, but my main program cannot access any of the changes that function in the separate process makes or even new variables it creates) -I tried using the global keyword but it seems to have no effect in multiprocessing as it does in threading. -How could I solve this problem? -I am pretty sure that I have to create new processes for those background functions but I need to get some feedback from them and that part I don't know to solve.",I ended up using multiprocessing Value,1.2,True,1,7173 -2020-12-11 21:06:25.180,Python not using proper pip,"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. Now, python command is unknown to the system, while commands python3 and python3.7 both use Python 3.7. -All good until now, but I can't seem to get pip working. -Command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. Only pip command that works is pip3. -Now whatever package I install via pip3 does not seem to install properly. Example given, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. Not the same thing can be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7. 
-Please tell me how I can use pip with Python 3.7, thank you.","I think the packages you install will be installed for the previous version of Python. I think you should update the native OS Python like this: - -Install the python3.7 package using apt-get -sudo apt-get install python3.7 -Add python3.6 & python3.7 to update-alternatives: -sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1 -sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 2 -Update python3 to point to Python 3.7: -sudo update-alternatives --config python3 -Test the version: -python3 -V",0.0,False,2,7174 -2020-12-11 21:06:25.180,Python not using proper pip,"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. Now, the python command is unknown to the system, while the commands python3 and python3.7 both use Python 3.7. -All good until now, but I can't seem to get pip working. -The command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. The only pip command that works is pip3. -Now whatever package I install via pip3 does not seem to install properly. For example, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. The same cannot be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7. -Please tell me how I can use pip with Python 3.7, thank you.","It looks like your python3.7 does not have pip. -Install pip for your specific python by running python3.7 -m easy_install pip. -Then, install packages with python3.7 -m pip install -Another option is to create a virtual environment from your python3.7. The venv brings pip into it by default.
-You create the venv with python3.7 -m venv ",1.2,True,2,7174 -2020-12-13 14:28:27.847,How to communicate with Cylon BMS controller,"I am trying to communicate with a Cylon device (UC32) over the BACnet protocol (using BAC0) but I cannot discover any device. I also tried Yabe and it did not return any result. -Is there any document describing how to create my communication driver? -Or any technique which can be used to connect with this device?","(Assuming you've set the default gateway address - for it to know where to return its responses, but only if necessary.) -If we start with the assumption that maybe the device is not (by default) listening for broadcasts or having some issue sending it - a bug maybe (although probably unlikely), then you could send a unicast/directed message, e.g. use the Read-Property service to read back the (already known) BOIN (BACnet Object Instance Number), but you would need a (BACnet) client (application/software) that provides that option, like possibly one of the 'BACnet stack' cmd-line tools or maybe via the (for the most part) awesome (but advanced-level) 'VTS (Visual Test Shell)' tool. -As much as it might be possible to discover what the device's BOIN (BACnet Object Instance Number) is, it's better if you know it already (- as a small few devices might not make it easy to discover - i.e. you might have to resort to using a round-robin bruteforce approach, firing lots of requests - one after the other with only the BOIN changed/incremented by 1, until you receive/see a successful response).",0.3869120172231254,False,1,7175 -2020-12-13 15:07:08.070,Create PM2 Ecosystem File from current processes,"I'm running a few programs (NodeJS and python) on my server (Ubuntu 20.04). I use the PM2 CLI to create and manage processes. Now I want to manage all processes through an ecosystem file. But when I run pm2 ecosystem, it just creates a sample file. I want to save my CURRENT PROCESSES to the ecosystem file and modify it.
Anyone know how to save the current pm2 processes as an ecosystem file?","If you use pm2 startup, pm2 creates a file named ~/.pm2/dump.pm2 with all running processes (with too many parameters, as it saves the whole environment in the file) -Edit: -This file is similar to the output of the command pm2 prettylist",1.2,True,1,7176 -2020-12-13 20:33:50.563,"Git, heroku, pre-receive hook declined","So I was trying to host a simple python script on Heroku.com, but encountered this error. After a little googling, I found this on Heroku's website: git, Heroku: pre-receive hook declined, Make sure you are pushing a repo that contains a proper supported app ( Rails, Django etc.) and you are not just pushing some random repo to test it out. -The problem is I have no idea how these work, and the few tutorials I looked up were for more detailed use of those frameworks. What I need to know is how I can use them with a simple 1-file python script. Thanks in advance.","Okay, I got it. It was about some unused modules in requirements.txt; I'm an idiot for not reading the output properly.",0.0,False,1,7177 -2020-12-13 23:30:31.457,How to get author's Discord Tag shown,"How do I display the user's Name + Discord Tag? As in: -I know that; -f""Hello, <@{ctx.author.id}>"" -will return the user, as a ping. -(@user) -And that; -f""Hello, {ctx.author.name}"" -will return the user's nickname, but WITHOUT the #XXXX after it. -(user) -But how do I get it to display the user's full name and tag? -(user#XXXX)",To get user#XXXX you can just do str(ctx.author) (or just put it in your f-string and it will automatically be converted to a string). You can also do ctx.author.discriminator to get their tag (XXXX).,0.2012947653214861,False,1,7178 -2020-12-14 15:50:01.883,How to scrape data from multiple unrelated sections of a website (using Scrapy),"I have made a Scrapy web crawler which can scrape Amazon.
It can scrape by searching for items using a list of keywords and scrape the data from the resulting pages. -However, I would like to scrape Amazon for a large portion of its product data. I don't have a preferred list of keywords with which to query for items. Rather, I'd like to scrape the website evenly and collect X number of items which is representative of all products listed on Amazon. -Does anyone know how to scrape a website in this fashion? Thanks.","I'm putting my comment as an answer so that others looking for a similar solution can find it more easily. -One way to achieve this is to go through each category (furniture, clothes, technology, automotive, etc.) and collect a set number of items there. Amazon has side/top bars with navigation links to different categories, so you can let it run through there. -The process would be as follows: - -Follow category urls from the initial Amazon.com parse -Use a different parse function for the callback, one that will scrape however many items from that category -Ensure that the data is written to a file (it will probably be a lot of data) - -However, such an approach would not be representative of the proportions of each category in the total Amazon products. Try looking for an ""X number of results"" label for each category to compensate for that. Good luck with your project!",1.2,True,1,7179 -2020-12-16 08:12:51.783,How to change colors of pip error messages in windows powershell,"The error messages printed by pip in my Windows PowerShell are dark red on dark blue (default PowerShell background). This is quite hard to read and I'd like to change this, but I couldn't find any hint on how to do this, nor whether this is a default in Python applied to all stderr-like output or something specific to pip. -My configuration: Windows 10, Python 3.9.0, pip 20.2.3. -Thanks for your help!","Coloring in pip is done via ANSI escape sequences.
So the solution to this problem would be to either change the way PowerShell displays ANSI colors or the color scheme pip uses. Pip does, however, provide a command-line switch '--no-color' which can be used to deactivate colored output.",0.0,False,1,7180 -2020-12-16 12:06:31.327,python api verify phone number using firebase,"I am creating a Python API using Django. -Now I am trying to verify a phone number using Firebase authentication and send an SMS to the user, but I don't know how to do it.","The phone number authentication in Firebase is only available from its client-side SDKs, i.e. the code that runs directly in your iOS, Android or Web app. It is not possible to trigger sending of the SMS message from the server. -So you can either find another service to send SMS messages, or put the call to send the SMS message into the client-side code and then trigger that after it calls your Django API.",1.2,True,1,7181 -2020-12-16 16:21:38.647,ImportError: No module named 'sklearn.compose' with scikit-learn==0.23.2,"I'm fully aware of the previous post regarding this error. That issue was with scikit-learn < 0.20, but I have scikit-learn 0.23.2, and I've tried uninstalling and reinstalling 0.22 and 0.23 and I still have this error. -Followup: Although pip list told me the scikit-learn version is 0.23.2, when I ran sklearn.__version__, the real version is 0.18.1. Why, and how do I resolve this inconsistency? (Uninstalling 0.23.2 didn't work)","[RESOLVED] -It turned out that my Conda environment has a different sys.path than my jupyter environment. The jupyter environment used the system env, which is due to the fact that I installed the ipykernel like this: python -m ipykernel install without using the --user flag.
The correct way should be to do so within the Conda env and run pip install jupyter",0.0,False,1,7182 -2020-12-17 08:39:49.780,How can I transform a list to array quickly in the framework of Mxnet?,"I have a list which has 8 elements, and all of those elements are arrays whose shape is (3,480,364). Now I want to transform this list to an array of shape (8,3,480,364). When I use the command array=nd.array(list), it takes a lot of time and sometimes raises an 'out of memory' error. When I try the command array=np.stack(list, axis=0) and debug the code, it hangs at this step and never produces a result. So I wonder how I can transform a list to an array quickly when I use the Mxnet framework?","Your method of transforming a list of lists into an array is correct, but an 'out of memory' error means you are running out of memory, which would also explain the slowdown. -How to check how much RAM you have left: -on Linux, you can run free -mh in the terminal. -How to check how much memory a variable takes: -The function sys.getsizeof tells you memory size in bytes. -You haven't said what data type your arrays have, but, say, if they're float64, that's 8 bytes per element, so your array of 8 * 3 * 480 * 364 = 4193280 elements should only take up 4193280 * 8 bytes = about 30 Mb. So, unless you have very little RAM left, you probably shouldn't be running out of memory. -So, I'd first check your assumptions: does your list really only have 8 elements, do all the elements have the same shape of (3, 480, 364), what is the data type, do you create this array once or a thousand times? You can also check the size of a list element: sys.getsizeof(list[0]). -Most likely this will clear it up, but what if your array is really just too big for your RAM? -What to do if an array doesn't fit in memory -One solution is to use a smaller data type (dtype=np.float32 for floating point, np.int32 or even np.uint8 for small integer numbers).
This will sacrifice some precision for floating point calculations. -If almost all elements in the array are zero, consider using a sparse matrix. -For training a neural net, you can use a batch training algorithm and only load data into memory in small batches.",0.0,False,1,7183 -2020-12-18 05:07:49.150,How do you set up a python project to be able to send to others without them having to manually copy and paste the code into an editor,"I made a cool little project for my friend, basically a timer using tkinter, but I am confused about how to let them access this project without having vscode or pycharm. Is it possible for them to just see the Tkinter window or something like that? Is there an application for this? Sorry if this is a stupid question.","You can just build an .exe (application) of your project. Then just share the application file and anyone can use it as an .exe. You can use pyinstaller to convert your python code to an exe. -pip install pyinstaller -Then cd to the project folder and run the following command: -pyinstaller --onefile YourFileName.py -If you want to make the exe without the console showing up, use this command: -pyinstaller --onefile YourFileName.py --noconsole",0.6730655149877884,False,1,7184 -2020-12-18 06:28:45.840,Deploy Python Web Scraping files on Azure cloud(function apps),"I have 2 python files that do Web scraping using Selenium and Beautifulsoup and store the results in separate CSV files, say file1.csv and file2.csv. Now, I want to deploy these files on the Azure cloud, and I know Azure Function Apps would be ideal for this. But I don't know whether a Functions app will support the Selenium driver. -Basically, I want to time-trigger my 2 web scraping files and store the results in two separate files file1.csv and file2.csv that will be stored in blob storage on the Azure cloud. Can someone help me with this task?
-How can I use the selenium driver in an Azure Functions app?","Deploying on virtual machines or EC2 is the only option that one can use to achieve this task. -Also, with Heroku, we would be able to run selenium in the cloud by adding buildpacks. But when it comes to storing the files, we will not be able to store files on Heroku, as Heroku does not persist files. So, VMs or EC2 instances are the only options for this task.",1.2,True,1,7185 -2020-12-18 19:17:18.420,Do I have to sort dates chronologically to use pandas.DataFrame.ewm?,"I need to calculate the EMA for a set of data from a csv file where dates are in descending order. -When I apply pandas.DataFrame.ewm, the EMA for the latest (by date) row is equal to its own value. This is because ewm starts observation from top to bottom in the DataFrame. -So far, I could not find an option to make ewm run in reverse. So I guess I will have to reverse my whole dataset. -Maybe somebody knows how to make ewm start from the bottom values? -Or is it recommended to always use a datetimeindex sorted chronologically, from oldest values on top to newest on the bottom?","From pandas' documentation: - -Times corresponding to the observations. Must be monotonically increasing and datetime64[ns] dtype. - -I guess the datetimeindex must be chronological.",1.2,True,1,7186 -2020-12-19 15:35:48.737,How should I handle a data set with around 300000 small groups of data tables?,"I have a data science project in Python and I wonder how to manage my data. Some details about my situation: - -My data consists of a somewhat larger number of football matches, currently around 300000, and it is supposed to grow further as time goes on. Attached to each match are a few tables with different numbers of rows/columns (but similar column formats across different matches). -Now obviously I need to iterate through this set of matches to do some computations.
So while I don’t think that I can hold the whole database in memory, I guess it would make sense to have at least chunks in memory, do computations on that chunk, and release it. -At the moment I have split everything up into single matches, gave each match an ID and created a folder for each match with the ID as folder name. Then I put the corresponding tables as small individual csv files into the folder that belongs to a given match. Additionally, I have an „overview“ DataFrame with some „metadata“ columns, one row per match. I put this row as a small json into each match folder for convenience as well. -I guess there would also be other ways to split the whole data set into chunks than match-wise, but for prototyping/testing my code with a small number of matches, it actually turned out to be quite handy to be able to go to a specific match folder in a file manager and look at one of these tables in a spreadsheet program (although similar inspections could obviously also be made from code in appropriate settings). But now I am at the point where this huge number of quite small files/folders slows down the OS so much that I need to do something else. -Just to be able to deal with the data at all right now, I simply created an additional layer of folder hierarchy like „range-0“ contains folders 0-9999, „range-1“ contains 10000-19999 etc. But I‘m not sure if this is the way to go forward. -Maybe I could simply save one chunk - whatever one chunk is - as a json file, but would lose some of the ease of the manual inspection. -At least everything is small enough, so that I can do my statistical analyses on a single machine, such that I think I can avoid map/reduce-type algorithms. -On another note, I have close to zero knowledge about database frameworks (I have written a few lines of SQL in my life), and I guess I would be the only person making requests to my database, so I am in doubt that this makes sense. 
But in case it does, what are the advantages of such an approach? - -So, to the people out there having some experience with handling data in such projects - what way of managing my data, on a conceptual level, would you suggest or recommend in such a setting (independent of specific tools/libraries etc.)?","Your arrangement is not bad at all. We are not used to thinking of it this way, but modern filesystems are themselves very efficient (noSQL) databases. -All you have to do is have auxiliary files that work as indexes and metadata so your application can find its way. From your post, it looks like you already have that done to some degree. -Since you don't give more specific details of the specific files and data you are dealing with, we can't suggest specific arrangements. If the data lends itself to an SQL tabular representation, you could benefit from putting all of it in a database and using some ORM - you'd also have to write adapters to get the Python object data into Pandas for your numeric analysis if you do that, and it might end up being a superfluous layer if you are already getting it to work. -So, just be sure that whatever adaptations you make to get the files easier to deal with by hand (like the extra layer of folders you mention) don't get in the way of your code - i.e., make your code so that it automatically finds its way across this, or any extra layers you happen to create (this can be as simple as having the full path of the final game match folder as a column in your ""overview"" dataframe)",1.2,True,1,7187 -2020-12-19 18:54:45.050,pip install a specific version of PyQt5,"I am using spyder & want to install finplot. However when I did this I could not open spyder and had to uninstall & reinstall Anaconda. -The problem is to do with PyQt5 as I understand. The developer of finplot said that one solution would be to install PyQt5 version 5.9.
- -Error: spyder 4.1.3 has requirement pyqt5<5.13; python_version >= ""3"", but you'll have pyqt5 5.13.0 which is incompatible - -My question is how would I do this? To install finplot I used the line below, - -pip install finplot - -Is there a way to specify that it should only install PyQt5?","As far as I understand you just want to install PyQt5 version 5.9. You can try the command below if you have pip installed on your machine - -pip install PyQt5==5.9 - -Edit: First you need to uninstall your PyQt5 5.13 - -pip uninstall PyQt5",0.6730655149877884,False,1,7188 -2020-12-19 22:58:12.080,Running another script while sharing functions and variable as in jupyter notebook,"I have a notebook that uses %run on another notebook under JupyterLab. They can call each other's functions back and forth and share some global variables. -I now want to convert the notebooks to py files so they can be executed from the command line. -I followed the advice found on SO and imported the 2nd file into the main one. -However, I found out that they cannot call each other's functions. This is a major problem because the 2nd file is a service to the main one, but it continuously uses functions that are part of the main one. -Essentially, the second program is non-GUI and it is driven by the main one, which is a GUI program. Thus whenever the service program needs to print, it checks to see if a flag is set that tells it that it runs in GUI mode, and then instead of simply printing it calls a function in the main one which knows how to display it on the GUI screen. I want to keep this separation. -How can I achieve it without too much change to the service program?","I ended up collecting all the GUI functions from the main GUI program, and putting them into a 3rd file in a class, including the relevant variables. -In the GUI program, just before calling the non-GUI program (the service one), I created the class and set all the variables, and in the call I passed the class.
-Then in the service program I call the functions that are in the class and get the needed variables from the class as well. -The changes to the service program were minor - just reading the variables from the class and changing the calls to the GUI functions to call the class functions instead.",0.0,False,1,7189 -2020-12-19 23:06:07.333,How to evaluate trained model Average Precision and Mean AP with IOU=0.3,"I trained a model using the Tensorflow object detection API, using Faster-RCNN with a Resnet architecture. I am using tensorflow 1.13.1, cudnn 7.6.5, protobuf 3.11.4, python 3.7.7, numpy 1.18.1 and I cannot upgrade the versions at the moment. I need to evaluate the accuracy (AP/mAP) of the trained model on the validation set for IOU=0.3. I am using the legacy/eval.py script on purpose since it calculates AP/mAP for IOU=0.5 only (instead of mAP:0.5:0.95) -python legacy/eval.py --logtostderr --pipeline_config_path=training/faster_rcnn_resnet152_coco.config --checkpoint_dir=training/ --eval_dir=eval/ -I tried several things, including updating the pipeline config file to have min_score_threshold=0.3: -eval_config: { -num_examples: 60 -min_score_threshold: 0.3 -.. -I also updated the default value in the protos/eval.proto file and recompiled the proto file to generate a new version of eval_pb2.py: -// Minimum score threshold for a detected object box to be visualized -optional float min_score_threshold = 13 [default = 0.3]; -However, eval.py still calculates/shows AP/mAP with IOU=0.5. -The above configuration helped only to detect objects in the eval.py output images with confidence level < 0.5, but this is not what I need.
-Does anybody know how to evaluate the model with IOU=0.3?",I finally solved it by modifying the hardcoded matching_iou_threshold=0.5 argument value in multiple method arguments (especially def __init) in the ../object_detection/utils/object_detection_evaluation.py,1.2,True,1,7190 -2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300? -Can someone please explain?","In addition, most people use the number 42 for random_state. -For example, random_state = 42, and there's a reason for that. -Below is the answer. -The number 42 is, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, the ""Answer to the Ultimate Question of Life, the Universe, and Everything"", calculated by an enormous supercomputer named Deep Thought over a period of 7.5 million years. Unfortunately, no one knows what the question is",0.0,False,3,7191 -2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300? -Can someone please explain?",Random forests introduce stochasticity by randomly sampling data and features. Running RF on the exact same data may produce different outcomes for each run due to these random samplings. Fixing the seed to a constant (e.g. 1) will eliminate that stochasticity and will produce the same results for each run.,0.0,False,3,7191 -2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does.
For example, what is the difference between random_state = 0 and random_state = 300? -Can someone please explain?","train_test_split splits arrays or matrices into random train and test subsets. That means that every time you run it without specifying random_state, you will get a different result; this is expected behavior. -When you use random_state=any_value, your code will show exactly the same behaviour every time you run it.",0.0,False,3,7191 -2020-12-20 23:15:46.737,Get the number of boosts in a server discord.py,"I am trying to make a server info command and I want it to display the server name, boost count, boost members and some other stuff as well. -The only problem is I have looked at the docs and searched online and I can't find out how to find the boost information. -I don't have any code, as I've not found any code to try and use for myself. -Is there any way to get this information?","Guild Name - guild_object.name -Boost count - guild_object.premium_subscription_count -Boosters, the people who boosted the server - guild_object.premium_subscribers -If you're doing this in a command, as I assume, use ctx.guild instead of guild_object. For anything further, you can re-read the docs, as all of the above information is in them under discord.Guild",1.2,True,1,7192 -2020-12-21 17:02:29.590,find frequency of an int appearing in a list of intervals,"I was given a list of intervals, for example [[10,40],[20,60]], and a list of positions [5,15,30] -we should return the frequency with which each position is covered by the intervals; the answer would be [[5,0],[15,1],[30,2]] because 5 wasn't covered by any interval, 15 was covered once, and 30 was covered twice. -If I just do a for loop, the time complexity would be O(m*n), where m is the number of intervals and n is the number of positions. -Can I preprocess the intervals and make it faster? I was thinking of sorting the intervals first and using binary search, but I am not sure how to implement it in python. Can someone give me a hint?
Or can I use a hashtable to store the intervals? What would be the time complexity for that?","You can use a frequency array to preprocess all the interval data and then query any value to get the answer. Specifically, create an array able to hold the min and max possible end-points of all the intervals. Then, for each interval, increment the frequency at the starting interval point and decrease the frequency at the value just after the end of the interval. At the end, accumulate this data over the array and we will have the frequency of occurrence of each value between the min and max of the intervals. Each query then just returns the frequency value from this array. - -freq[] --> larger than max-min+1 (min: minimum start value, max: maximum end value) -For each [L,R] --> freq[L]++, freq[R+1] = freq[R+1]-1 -freq[i] = freq[i]+freq[i-1] -For any query V, answer is freq[V] - -Do consider the tradeoffs when the range is very large compared to the number of queries, in which case a simple check over all intervals may suffice.",0.0,False,1,7193 -2020-12-22 08:56:10.800,"Convert Json format String to Link{""link"":""https://i.imgur.com/zfxsqlk.png""}","I am trying to convert this String to only the link: {""link"":""https://i.imgur.com/zfxsqlk.png""} -I'm trying to create a discord bot, which sends random pictures from the API https://some-random-api.ml/img/red_panda. -With imageURL = json.loads(requests.get(redpandaurl).content) I get the json String, but what do I have to do to get only the link, like this: https://i.imgur.com/zfxsqlk.png -Sorry if my question is confusingly written, I'm new to programming and don't really know how to describe this problem.","What you get from json.loads() is a Python dict. You can access values in the dict by specifying their keys. -In your case, there is only one key-value pair in the dict: ""link"" is the key and ""https://i.imgur.com/zfxsqlk.png"" is the value.
You can get the link and store it in the variable by appending [""link""] to your line of code: -imageURL = json.loads(requests.get(redpandaurl).content)[""link""]",0.0,False,1,7194 -2020-12-23 07:39:41.123,Finding or building a python security profiler,"I want a security profiler for python. Specifically, I want something that will take as input a python program and tell me if the program tries to make system calls, read files, or import libraries. If such a security profiler exists, where can I find it? If no such thing exists and I were to write one myself, where could I have my profiler 'checked' (that is, verified that it works)? -If you don't find this question appropriate for SO, let me know if there is another SE site I can post this on, or if possible, how I can change/rephrase my question. Thanks","Usually, python uses an interpreter called CPython. It is hard to tell from python code by itself whether it opens files or does something special, because a lot of python libraries and the interpreter itself are written in C, and system calls/libc calls can happen only from there. Also, python syntax by itself can be very obscure. -So, to address your suspicion: you might suspect this would need specific knowledge of the python programming language, but it does not look like that, because it is really about the C language. -You might think it is possible to patch CPython itself. I guess that is not quite right either: a lot of shared libraries use C/C++ code just as CPython itself does. Tensorflow, for example. -Going further, I guess it is possible to do the following things: - -patch the compiler which compiles C/C++ code for CPython/modules, which is hard I guess. -just use a usual profiler, and trace which files, directories and calls are used by python itself for operation, and whitelist them, because they are needed - which is the best option in my opinion (AppArmor for example).
-maybe you would be interested in patching CPython itself, where it is possible to hook the needed functions and calls to external C libraries, but it can be annoying because you will have to revise every library added to your project, and also C code is often used for performance (e.g. the json module), which doesn't make this any easier.",1.2,True,1,7195 -2020-12-23 23:02:35.663,How can I let the user of a Django Admin Page control which list_display fields are visible?,"I have a ModelAdmin with a set of fields in list_display. -I want the user to be able to click a checkbox in order to add or remove these fields. -Is there a straightforward way of doing this? I've looked into Widgets but I'm not sure how they would change the list_display of a ModelAdmin","To do this I had to - -Override an admin template (and change TEMPLATES in settings.py). I added a form with checkboxes so the user can set fields -Add a new model and an endpoint to update it (the model stores the fields to be displayed; the user submits a set of fields in the new admin template) -Update admin.py, overriding get_list_display so it sets the fields based on the state of the updated model object",1.2,True,1,7196 -2020-12-24 16:49:26.270,What is the difference between a+=1 and a=+1..?,"How can I understand the difference between a+=1 and a=+1 in Python? -It seems that they're different. When I run them in Python IDLE, both produce different output.","a+=1 is a += 1, where += is a single operator meaning the same as a = a + 1. -a=+1 is a = + 1, which assigns + 1 to the variable without using the original value of a",0.2012947653214861,False,2,7197 -2020-12-24 16:49:26.270,What is the difference between a+=1 and a=+1..?,"How can I understand the difference between a+=1 and a=+1 in Python? -It seems that they're different. When I run them in Python IDLE, both produce different output.","It really depends on the type of object that a references.
-For the case that a is another int: -The += is a single operator, an augmented assignment operator, that invokes a=a.__add__(1), for immutables. It is equivalent to a=a+1 and returns a new int object bound to the variable a. -The =+ is parsed as two operators using the normal order of operations: - -+ is a unary operator working on its right-hand-side argument invoking the special function a.__pos__(), similar to how -a would negate a via the unary a.__neg__() operator. -= is the normal assignment operator - -For mutables += invokes __iadd__() for an in-place addition that should return the mutated original object.",0.1016881243684853,False,2,7197 -2020-12-24 19:05:39.640,different python files sharing the same variables,"I would like to know please, how can I define variables in a python file and share these variables with their values with multiple python files?","To do this, you can create a new module specifically for storing all the global variables your application might need. For this you can create a function that will initialize any of these globals with a default value, you only need to call this function once from your main class, then you can import the globals file from any other class and use those globals as needed.",1.2,True,2,7198 -2020-12-24 19:05:39.640,different python files sharing the same variables,"I would like to know please, how can I define variables in a python file and share these variables with their values with multiple python files?","You can create a python module -Create a py file inside that module define variables and import that module in the required places.",0.0,False,2,7198 -2020-12-25 13:44:08.990,How to connect a Python Flask backend to a React front end ? How does it work together?,I am making a website. And I want to know how to connect React js to my Flask back end. I have tried searching online but unfortunately it was not what I am looking for. If you know how to do it please recomend me some resources. 
And I also want to know the logic of how Flask and React work together.,"Flask is a backend micro-service and react is a front-end framework. Flask communicates with the database and makes the desired API hit points. The backend listens for any API request and sends the corresponding response as a JSON format. So using React you can make HTTP requests to the backend. -For testing purposes have the backend and frontend separated and communicate only using the REST APIs. For production, use the compiled js of React as static files and render only the index.html of the compiled react from the backend. -P.S: I personally recommend Django rest framework over flask if you are planning to do huge project.",1.2,True,1,7199 -2020-12-26 19:08:35.663,AES 128 bit encryption of bitstream data in python,"I am trying to encrypt a bitstream data or basically a list of binary data like this [1,0,1,1,1,0,0,1,1,0,1,1,0,1] in python using AES encryption with block size of 128bit, the problem is that i want the output to be binary data as well and the same size as the original binary data list, is that possible?how do i do that?","Yes, there are basically two ways: - -You have a unique value tied to the data (for instance if they are provided in sequence then you can create a sequence number) then you can simply use the unique value as nonce and then use AES encryption in counter mode. Counter mode doesn't expand the data but it is insecure if no nonce is supplied. Note that you do need the nonce when decrypting. - -You use format preserving encryption or FPE such as FF1 and FF3 defined by NIST. There are a few problems with this approach: - -there are issues with these algorithms if the amount of input data is minimal (as it seems to be in your case); -the implementations of FF1 and FF3 are generally hard to find; -if you have two unique bit values then they will result in identical ciphertext. 
- - - -Neither of these schemes provide integrity or authenticity of the data obviously, and they by definition leak the size of the plaintext.",1.2,True,1,7200 -2020-12-26 21:26:15.483,Running encrypted python code using RSA or AES encryption,"As I was working on a project the topic of code obfuscation came up, as such, would it be possible to encrypt python code using either RSA or AES and then de-code it on the other side and run it?. And if it's possible how would you do it?. I know that you can obfuscate code using Base64, or XOR, but using AES or RSA would be an interesting application. This is simply a generic question for anyone that may have an idea on how to do it. I am just looking to encrypt a piece of code from point A, send it to point B, have it decrypted at point B and run there locally using either AES or RSA. It can be sent by any means, as long as the code itself is encrypted and unreadable.","Yes this is very possible but would require some setup to work. -First off Base64 is an encoder for encoding data from binary/bytes to a restricted ascii/utf subset for transmission usually over http. Its not really an obfuscator, more like a packager for binary data. -So here is what is needed for this to work. - -A pre-shared secret key that both point A and point B have. This key cannot be transmitted along with the code since anyone who gets the encrypted code would also get the key to decrypt it. - -There would need to be an unencrypted code/program that allows you to insert that pre-shared key to use to decrypt the encrypted code that was sent. Can't hardcode the key into the decryptor since again anyone with the decryptor can now decrypt the code and also if the secrey key is leaked you would have to resend out the decryptor to use a different key. 
- -Once its decrypted the ""decryptor"" could save that code to a file for you to run or run the code itself using console commands or if its a python program you can call eval or use importlib to import that code and call the function within. -WARNING: eval is known to be dangerous since it will execute whatever code it reads. If you use eval with code you dont trust it can download a virus or grab info from your computer or anything really. DO NOT RUN UNTRUSTED CODE. - - -Also there is a difference between AES and RSA. One is a symmetric cipher and the other is asymmetric. Both will work for what you want but they require different things for encryption and decryption. One uses a single key for both while the other uses one for encryption and one for decryption. So something to think about.",1.2,True,1,7201 -2020-12-29 07:50:36.320,How to send and receive data (and / or data structures) from a C ++ script to a Python script?,"I am working on a project that needs to do the following: - -[C++ Program] Checks a given directory, extracts all the names (full paths) of the found files and records them in a vector. -[C++ Program] ""Send"" the vector to a Python script. -[Python Script] ""Receive"" the vector and transform it into a List. -[Python Script] Compares the elements of the List (the paths) against the records of a database and removes the matches from the List (removes the paths already registered). -[Python Script] ""Sends"" the processed List back to the C++ Program. -[C++ Program] ""Receives"" the List, transforms it into a vector and continues its operations with this processed data. - -I would like to know how to send and receive data structures (or data) between a C ++ Script and a Python Script. -For this case I put the example of a vector transforming into a List, however I would like to know how to do it for any structure or data in general. 
-Obviously I am a beginner, that is why I would like your help on what documentation to read, what concepts I should start with, what technique I should use (maybe there is some implicit standard), and what links I could review to learn how to communicate data between scripts of the languages I just mentioned.
-Any help is useful to me.","If the idea is to execute the python script from the c++ process, then the easiest would be to design the python script to accept input_file and output_file as arguments; the c++ program should write the input_file, start the script and read the output_file.
-For simple structures like list-of-strings, you can simply write them as text files and share, but for more complex types, you can use google-protocolbuffers to do the marshalling/unmarshalling.
-if the idea is to send/receive data between two already started processes, then you can use the same protocol buffers to encode data and send/receive via sockets between each other. Check gRPC",0.0,False,1,7202
-2020-12-30 17:33:11.363,Unable to get LabJack U3 model loaded into PyCharm properly,I am trying to use a LabJack U3 product using Python and I am using PyCharm for development of my code. I am new to both Python and PyCharm FYI. In the LabJack documentation they say to run python setup.py install in the directory where I downloaded their Python links for using their device. I did this and when run under a straight Python console I can get the import u3 to run and am able to access the U3 device. Yet when I run this in PyCharm I can not get it to run. It always tells me module not found. I have asked LabJack for help but they do not know PyCharm. I have looked on the net but I can't seem to see how to get the module loaded properly under PyCharm.
Could I please get some help on how to do this properly?,First you'll download that module inside of PyCharm's settings; if it's still not working then import the module in the PyCharm terminal and try to run your python script,0.0,False,1,7203
-2020-12-31 05:11:06.240,Hyper-parameter tuning and classification algorithm comparison,"I have a doubt about classification algorithm comparison.
-I am doing a project regarding hyperparameter tuning and classification model comparison for a dataset.
-The goal is to find out the best fitted model with the best hyperparameters for my dataset.
-For example: I have 2 classification models (SVM and Random Forest), my dataset has 1000 rows and 10 columns (9 columns are features) and 1 last column is the label.
-First of all, I split the dataset into 2 portions (80-20) for training (800 rows) and testing (200 rows) correspondingly. After that, I use Grid Search with CV = 10 to tune hyperparameters on the training set with these 2 models (SVM and Random Forest). When hyperparameters are identified for each model, I use these hyperparameters of these 2 models to test Accuracy_score on the training and testing set again in order to find out which model is the best one for my data (conditions: Accuracy_score on training set < Accuracy_score on testing set (not overfitting), and whichever model's Accuracy_score on the testing set is higher, that model is the best model). However, SVM shows the accuracy_score of the training set is 100 and the accuracy_score of the testing set is 83.56, which means SVM with tuned hyperparameters is overfitting. On the other hand, Random Forest shows the accuracy_score of the training set is 72.36 and the accuracy_score of the testing set is 81.23. It is clear that the accuracy_score of the testing set of SVM is higher than the accuracy_score of the testing set of Random Forest, but SVM is overfitting.
-I have some questions as below:
-_ Is my method correct when I implement comparison of accuracy_score for the training and testing set as above instead of using Cross-Validation? (If I use Cross-Validation, how do I do it?)
-_ It is clear that SVM above is overfitting, but its accuracy_score of the testing set is higher than the accuracy_score of the testing set of Random Forest; could I conclude that SVM is the best model in this case?
-Thank you!","I would suggest splitting your data into three sets, rather than two:
-
-Training
-Validation
-Testing
-
-Training is used to train the model, as you have been doing. The validation set is used to evaluate the performance of a model trained with a given set of hyperparameters. The optimal set of hyperparameters is then used to generate predictions on the test set, which wasn't part of either training or hyperparameter selection. You can then compare performance on the test set between your classifiers.
-The large decrease in performance on your SVM model on your validation dataset does suggest overfitting, though it is common for a classifier to perform better on the training dataset than on an evaluation or test dataset.",0.0,False,1,7204
-2020-12-31 06:41:56.733,Equivalent gray value of a color given the LAB values,"I have an RGB image and I converted it to Lab colorspace. Now, I want to convert the image in LAB space to a grayscale one. I know L NOT = Luminance.
-So, any idea how to get the equivalent gray value of a specific color in Lab space?
-I'm looking for a formula or algorithm to determine the equivalent gray value of a color given the LAB values.","The conversion from Luminance Y to Lightness L* is defined by the CIE 1976 Lightness Function. Put another way, L* transforms linear values into non-linear values that are perceptually uniform for the Human Visual System (HVS).
With that in mind, your question now depends on what kind of gray you are looking for: if perceptually uniform and thus non-linear, the Lightness channel from CIE Lab is actually that of CIE 1976 and is appropriate. If you need something linear, you would have to convert back to CIE XYZ tristimulus values and use the Y channel.",0.3869120172231254,False,1,7205
-2020-12-31 13:28:50.363,"Creating a JSON file in python, where they are not separated by commas","I'm looking to create the below JSON file in python. I do not understand how I can have multiple dictionaries that are not separated by commas, so when I use the JSON library to save the dictionary to disk, I get the below JSON;
-{""text"": ""Terrible customer service."", ""labels"": [""negative""], ""meta"": {""wikiPageID"": 1}}
-{""text"": ""Really great transaction."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 2}}
-{""text"": ""Great price."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 3}}
-instead of a list of dictionaries like below;
-[{""text"": ""Terrible customer service."", ""labels"": [""negative""], ""meta"": {""wikiPageID"": 1}},
-{""text"": ""Really great transaction."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 2}},
-{""text"": ""Great price."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 3}}]
-The difference is, in the first example, each line is a dictionary and they are not in a list or separated by commas.
-Whereas the second example, which is what I'm able to come up with, is a list of dictionaries, each dictionary separated by a comma.
-I'm sorry if this is a stupid question; I have been breaking my head over this for weeks, and have not been able to come up with a solution.
-Any help is appreciated.
-And thank you in advance.","The way you want to store the Data in one file isn't possible with JSON.
-Each JSON file can only contain one Object.
This means that you can either have one Object defined within curly braces, or an Array of objects as you mentioned.
-If you want to store each Object as a JSON object, you should use separate files, each containing a single Object.",0.0,False,1,7206
-2020-12-31 21:45:40.700,save user input data in kivy and store for later use/analysis python,"I am a kivy n00b, using python, and am not sure if this is the right place to ask.
-Can someone please explain how a user can input data in an Android app, and how/where it is stored (SQL table, csv, xml?). I am also confused as to how it can be extended/used for further analysis.
-I think it should be held as a SQL table, but I do not understand how to save/set up a SQL table in an Android app, nor how to access it. Similarly, how to save/append/access a csv/xml document, nor, if these are made, how they are secure from accidental deletion, overwriting, etc.
-In essence, I want to save only the timestamp a user enters some data, and the corresponding values (max 4).
-User input would consist of 4 variables, x1, x2, x3, x4, and I would write a SQL statement along the lines of: insert into data.table timestamp, x1, x2, x3, x4, and then to access the data something along the lines of select * from data.table and then do/show stuff.
-Can someone offer suggestions on what resources to read? How do I set up a SQL Server table in an Android app?","This works basically the same way on Android as on the desktop: you have access to the local filesystem to create/edit files (at least within the app directory), so you can read and write whatever data storage format you like.
-If you want to use a database, sqlite is the simplest and most obvious option.",1.2,True,1,7207
-2021-01-01 02:54:19.350,"Django: Channels and Web Socket, how to make group chats exclusive","E.g. I have a chat application;
-however, I realised that for my application, as long as you have the link to the chat, you can enter.
How do I prevent that, and make it such that only members of the group chat can access the chat? Something like password-protecting the URL to the chat, or perhaps something like WhatsApp. Does anyone have any suggestions and reference material as to how I should build this and implement the function? Thank you!","I am in the exact same situation as you. What I am thinking of doing
-is
-Store group_url and the respective user_ids (which we get from django's authentication) in a table (with two columns group_url and allowed_user_ids) or in Redis.
-Then when a client connects to a channel, say chat/1234 (where 1234 is the group_url), we get the id of that user using self.scope['user'].id and check them in the table.
-If the user_id is in the respective group_url, we accept the connection. Else reject the connection. I am new to this too. Let me know if you find a better approach",1.2,True,1,7208
-2021-01-01 21:31:38.310,Discord.py get user with Name#0001,"How do I get the user/member object in discord.py with only the Name#Discriminator? I searched now for a few hours and didn't find anything. I know how to get the object using the id, but is there a way to convert Name#Discriminator to the id?
-The user may not be in the Server.","There's no way to do it if you aren't sure they're in the server. If you are, you can search through the server's members, but otherwise, it wouldn't make sense. Usernames/Discriminators change all the time, while IDs remain unique, so it would become a huge headache trying to implement that. Try doing what you want by ID, or searching the server.",0.0,False,1,7209
-2021-01-03 12:30:31.743,Get embed footer from reaction message,"I want the person who used the command to be able to delete the result. I have put the user's ID in the footer of the embed, and my question is: how do I get that data from the message the user reacted to?
-reaction.message.embed.footer doesn't work. I currently don't have code as I was trying to get that ID first.
-Thanks in advance!","The discord.Message object has no attribute embed, but it has embeds. It returns a list of the embeds that the message has. So you can simply do: reaction.message.embeds[0].footer.",1.2,True,1,7210
-2021-01-03 19:40:29.317,How to do auto login in python with sql database?,how can I make a login form that will remember the user so that he does not have to log in next time?,"Some more information would be nice, but if you want to use a database for this then you would have to create an entry for the user information last entered.
-Then on reopening the program you would check if there are any entries and if yes load it.
-But I think that writing the login information to a file on your PC would be a lot easier. So you run the steps from above, just writing to a file instead of a database.
-I am not sure how you would make this secure, because you can't really encrypt the password: you would need a password or key of some type, and that password or key would be easy to find in the source code, especially in python. It would be harder to find in other compiler-based programming languages, but it would still be there somewhere. And if you would use a database you would have a password for that, but that would also sit on the hard drive if not otherwise encrypted, and there we are where we started.
-So as mentioned above, a database would be quite useless for a task like this because it doesn't improve anything and is a hassle for beginners to set up.",0.0,False,1,7211
-2021-01-04 08:15:55.150,Cloudwatch Alarm for Aurora Data Dump Automation to S3 Bucket,"I need your advice on something that I'm working on as a part of my work.
-I'm working on automating the Aurora dump to an S3 bucket every midnight. As a part of it, I have created an EC2 instance that generates the dump, and I have written a python script using boto3 which moves the dump to the S3 bucket every night.
-I need to notify a list of developers if the data dump doesn't take place for some reason.
-As of now, I'm posting a message to SNS topic which notifies the developers if the backup doesn't happen. But I need to do this with Cloudwatch and I'm not sure how to do it. -Your help will be much appreciated. ! Thanks!",I have created a custom metric to which I have attached a Cloudwatch alarm and it gets triggered if there's an issue in data backup process.,0.0,False,1,7212 -2021-01-04 20:54:14.400,Installations on WSL?,"I use Python Anaconda and Visual Studio Code for Data Science and Machine Learning projects. -I want to learn how to use Windows Subsystem for Linux, and I have seen that tools such as Conda or Git can be installed directly there, but I don't quite understand the difference between a common Python Anaconda installation and a Conda installation in WSL. -Is one better than the other? Or should I have both? How should I integrate WSL into my work with Anaconda, Git, and VS Code? What advantages does it have or what disadvantages? -Help please, I hate not installing my tools properly and then having a mess of folders, environment variables, etc.","If you use conda it's better to install it directly on Windows rather than in WSL. Think of WSL as a virtual machine in your current PC, but much faster than you think. -It's most useful use would be as an alternate base for docker. You can run a whole lot of stuff with Windows integration from WSL, which includes VS Code. You can lauch VS code as if it is run from within that OS, with all native extension and app support. -You can also access the entire Windows filesystem from WSL and vice versa, so integrating Git with it won't be a bad idea",1.2,True,1,7213 -2021-01-04 23:27:42.213,discord.py get all permissions a bot has,So I am developing a Bot using discord.py and I want to get all permissions the Bot has in a specific Guild. I already have the Guild Object but I don't know how to get the Permissions the Bot has. 
I already looked through the documentation but couln't find anything in that direction...,"From a Member object, like guild.me (a Member object similar to Bot.user, essentially a Member object representing your bot), you can get the permissions that member has from the guild_permissions attribute.",1.2,True,1,7214 2021-01-05 16:52:24.587,How can I run my Python Script in Siteground Hosting Server,"I am building my website which contains a python(.py) file, html, css and JS file. I want to know that how can I run my python script in siteground from my hosting account so that it can scrape data from a site and output a JSON file to Javascript file which can display it on the webpage.",I would use cron jobs to run jobs in the foreground,0.0,False,1,7215 2021-01-07 07:21:30.950,"Iterate through a string forwards and backwards, extracting alternating characters","I'm studying python and there's a lab I can't seem to crack. We have a line e.g. shacnidw, that has to be transformed to sandwich. I somehow need to iterate with a for loop and pick the letters with odd indexes first, followed by backward even indexes. Like pick a letter with index 1,3,5,7,8,6,4,2. It looks pretty obvious to use a list or slices, but we aren't allowed to use these functions yet. I guess the question is just how do I do it?","Programming is all about decomposing complex problems into simpler ones. Try breaking it down into smaller steps. @@ -7394,11 +59,11 @@ It doesn't matter whether you use a 32 bit or 64 bit Python interpreter to execu So to conclude, don't be afraid to download and install the 64 bit version of Python because your Python programs will all run perfectly on it!",0.3869120172231254,False,1,7224 2021-01-11 16:13:22.740,Generate an executable from python project,"I need to generate an executable from a python project containing multiple folders and files. I tried to work with library cx_Freeze, but only worked for a single file project. 
-Can you tell me how to do please?","use pyinstaller. just run -pip install pyinstaller and then open the folder the file is located in with shell and run pyinstaller --onefile FILE.py where file is the name of the python file that should be run when the exe is run",1.2,True,2,7225 +Can you tell me how to do please?","Running pyinstaller on your ""main"" python file should work, as PyInstaller automatically imports any dependencies (such as other python files) that you use.",0.1352210990936997,False,2,7225 2021-01-11 16:13:22.740,Generate an executable from python project,"I need to generate an executable from a python project containing multiple folders and files. I tried to work with library cx_Freeze, but only worked for a single file project. -Can you tell me how to do please?","Running pyinstaller on your ""main"" python file should work, as PyInstaller automatically imports any dependencies (such as other python files) that you use.",0.1352210990936997,False,2,7225 +Can you tell me how to do please?","use pyinstaller. just run +pip install pyinstaller and then open the folder the file is located in with shell and run pyinstaller --onefile FILE.py where file is the name of the python file that should be run when the exe is run",1.2,True,2,7225 2021-01-13 12:26:48.657,How many rows/columns are there in a tkinter Frame?,"Using tkinter we can use either widget.grid(row,col) or widget.pack() to place a widget. Since row,col corresponds to the row/col-index in a given Frame/Window - how do we know how many columns the Frame/Window consists of? E.g if I want to place a widget in the midle or to the very right","Rows and columns are just concepts, not actual things. There is effectively an infinite number of rows and columns, all with a width or height of zero until they contain a widget or are configured to have a minimum width or height. From a practical standpoint, there are as many rows and columns as there are pixels in the window. 
In reality, the number of rows and columns is entirely up to you. A widget can have a single row and column or it can have several. It all depends on what you add to the window. @@ -7456,9 +121,9 @@ tf.saved_model.save(model, model_path)",1.2,True,1,7233 2021-01-19 16:33:47.430,django app how to get a string from a function before,I am building a django app in which the user has to write a sentence in a text box. This sentence gets then sent to the server and received by it. After that the user has to click on continue and gets on a another html page. Here the user has to record an audio of a word he sais. The word is then turned into a string and after that sent to the server. Now I would like the function in views.py to find out if the word the user said is in the sentence the user wrote before. But the sentence is only in the first function that receives the sentence after it is sent. I know I could first store the sentence but is there also another way?,"yes, at least there is two ways first using a model to store the value. or a file maybe. second using some html magic(? I'm not sure of magic). using an input type=""hidden"". your first function receives the text, redirects user to another page but with an argument the text!, then inside that template store that text in a hidden input and by clicking the button send both voice and hide value text to the new functon.",1.2,True,1,7235 -2021-01-20 05:18:51.260,Telegram Bot delete sent photos?,"I am looking to delete a photo sent by my bot (reply_photo()), I can't find any specific reference to doing so in the documentation, and have tried delete_message(), but don't know how to delete a photo. Is it currently possible?","It's currently possible in Telegram API, not the Bot API unfortunately. 
It's a shame :(",0.0,False,2,7236 2021-01-20 05:18:51.260,Telegram Bot delete sent photos?,"I am looking to delete a photo sent by my bot (reply_photo()), I can't find any specific reference to doing so in the documentation, and have tried delete_message(), but don't know how to delete a photo. Is it currently possible?","You need to have the chat_id and the message_id of that message sent by bot, then you can delete using context.bot.delete_message(chat_id, message_id). Note: Bot cannot delete a message if it was sent more than 48 hours ago.",0.0,False,2,7236 +2021-01-20 05:18:51.260,Telegram Bot delete sent photos?,"I am looking to delete a photo sent by my bot (reply_photo()), I can't find any specific reference to doing so in the documentation, and have tried delete_message(), but don't know how to delete a photo. Is it currently possible?","It's currently possible in Telegram API, not the Bot API unfortunately. It's a shame :(",0.0,False,2,7236 2021-01-20 12:38:52.443,How can I run a Python script inside a webpage?,"I hope you can help me. I have a static website hosted on Heroku and I would like to implement a Python Script to be executed when a button is clicked. So, just as a reference you would have: A text field @@ -7468,7 +133,7 @@ Another text field The idea is that you enter some text in the first text field, you click the button calling the Python Script, and then print the result coming from the Python Script in the second text field. How would you implement such technology? Which services should be used to achieve the result? I think that the script should be hosted somewhere and be called via an API but I do not really know how to do it. I hope you can help me. -Thanks!","You have to use backend for your purpose. When a user clicks your button some data would be collected by your backend, handled and showed to user with the help of API. 
You can start with learning a little bit of Flask and learning Django later for some bigger projects.",0.1352210990936997,False,2,7237 +Thanks!","I should use Flask or Django. In Flask you simply use the: name = ""your_variable"" command in your HTML code and then you can simply use the code request.form [""your_variable""] in your python script.",0.1352210990936997,False,2,7237 2021-01-20 12:38:52.443,How can I run a Python script inside a webpage?,"I hope you can help me. I have a static website hosted on Heroku and I would like to implement a Python Script to be executed when a button is clicked. So, just as a reference you would have: A text field @@ -7478,7 +143,7 @@ Another text field The idea is that you enter some text in the first text field, you click the button calling the Python Script, and then print the result coming from the Python Script in the second text field. How would you implement such technology? Which services should be used to achieve the result? I think that the script should be hosted somewhere and be called via an API but I do not really know how to do it. I hope you can help me. -Thanks!","I should use Flask or Django. In Flask you simply use the: name = ""your_variable"" command in your HTML code and then you can simply use the code request.form [""your_variable""] in your python script.",0.1352210990936997,False,2,7237 +Thanks!","You have to use backend for your purpose. When a user clicks your button some data would be collected by your backend, handled and showed to user with the help of API. You can start with learning a little bit of Flask and learning Django later for some bigger projects.",0.1352210990936997,False,2,7237 2021-01-20 13:57:26.497,Multiprocessing in python vs number of cores,"If a run a python script where i declare 6 processes using multiprocessing, but i only have 4 CPU cores, what happens to the additional 2 processes which can find a dedicated CPU core. How are they executed? 
@@ -7685,6 +350,12 @@ pyinstaller --onefile test.spec",0.0,False,1,7275 2021-02-11 14:08:05.040,Node JS detect connectivity to all Node JS programs,"Let me describe it as briefly and clearly as possible: I have 10 different copies of a node JS based program running on 10 different desktops. I want to create a Node JS based (or any other technology) web app deployed on a server which will check if these 10 programs are online or not. Any suggestions as to how I can implement this? +Note: The node JS based desktop apps are running on electron.","While you can use socket.io for this there may also be a simpler way and that is to just use a post request / cron to check every X minutes if the server is reachable from 'Checking' server (that would just be the server that is doing the check) +So why not use socket.io? Well, without knowing how you node servers are setup, its hard to say if socket.io would be a good fit, this is simply because socket.io uses WSS to connect, so unless you are running it from the browser it will need additional configurations / modules setup on the server to actually use WSS (if you do go this route, you will need socket.io-client module on each system, this is important because this will allow you to connect to the socket.io server, also make sure the version of socket.io matches the socket.io-client build) +All in all, if I was building this out, I would probably just do a simple ping of each server and log it to a DB or what not but your requirements will really dictate the direction you go",0.0,False,2,7276 +2021-02-11 14:08:05.040,Node JS detect connectivity to all Node JS programs,"Let me describe it as briefly and clearly as possible: +I have 10 different copies of a node JS based program running on 10 different desktops. I want to create a Node JS based (or any other technology) web app deployed on a server which will check if these 10 programs are online or not. +Any suggestions as to how I can implement this? 
Note: The node JS based desktop apps are running on electron.","There are two likely approaches.

If you want to know immediately when any of the 10 programs goes offline, you should use Socket.io:

in brief, a ping/pong technique in which the server sends a Ping event on socket connect and the client replies with a Pong event.
If a client does not send the Pong event back within a predefined time interval after receiving the Ping event, that client is offline or disconnected.

Alternatively, you can periodically (say every 1/5/10 minutes etc.) make a simple HTTP request and check whether the response status is 200. If any of the 10 desktop programs is offline, the response status will tell you.",0.0,False,2,7276
-2021-02-11 14:08:05.040,Node JS detect connectivity to all Node JS programs,"Let me describe it as briefly and clearly as possible:
-I have 10 different copies of a node JS based program running on 10 different desktops. I want to create a Node JS based (or any other technology) web app deployed on a server which will check if these 10 programs are online or not.
-Any suggestions as to how I can implement this?
-Note: The node JS based desktop apps are running on electron.","While you can use socket.io for this there may also be a simpler way and that is to just use a post request / cron to check every X minutes if the server is reachable from 'Checking' server (that would just be the server that is doing the check)
So why not use socket.io? 
Well, without knowing how you node servers are setup, its hard to say if socket.io would be a good fit, this is simply because socket.io uses WSS to connect, so unless you are running it from the browser it will need additional configurations / modules setup on the server to actually use WSS (if you do go this route, you will need socket.io-client module on each system, this is important because this will allow you to connect to the socket.io server, also make sure the version of socket.io matches the socket.io-client build) -All in all, if I was building this out, I would probably just do a simple ping of each server and log it to a DB or what not but your requirements will really dictate the direction you go",0.0,False,2,7276 2021-02-12 23:08:11.587,How to to manually recover the a PIP corrupted installation?,"I do not have administrative privileges' on my Windows 10 workstation. The IT department installed Python 2.7 as my request but I proceed a PIP upgrade without the ""--user"" setting, and now the already installed PIP got corrupted and I do not know how to recover it. The corrupted PIP always return syntax error on lib\site-packages\pip_internal\cli\main.py"", line 60 sys.stderr.write(f""ERROR: {exc}"") @@ -8016,12 +681,6 @@ Conda-build version: 3.20.5 Python: 3.8.5.final.0","I do not know why this was the case but here is: I could not find the file of interest. Did all methods from before in above link, once I looked in the icloud Desktop in my finder it suddenly appeared in the normal desktop directory. Idk why, but if this happens to you, check the icloud directory corresponding to the directory you are in and it may appear in the corresponding normal directory after. Lesson learned: do some version control.",0.0,False,1,7333 2021-03-17 12:08:15.460,Python exe - how can I restrict viewing source and byte code?,"I'm making a simple project where I will have a downloadable scraper on an HTML website. 
The scraper is made in Python and is converted to a .exe file for downloading purposes. Inside the python code, however, I included a Google app password to an email account, because the scraper sends an email and I need the server to login with an available Google account. Whilst .exe files are hard to get source code for, I've seen that there are ways to do so, and I'm wondering, how could I make it so that anyone who has downloaded the scraper.exe file cannot see the email login details that I will be using to send them an email when the scraper needs to? If possible, maybe even block them from accessing any of the .exe source code or bytecode altogether? I'm using the Python libraries bs4 and requests. -Additionally, this is off-topic, however, as it is my first time developing a downloadable file, even whilst converting the Python file to a .exe file, my antivirus picked it up as a suspicious file. This is like a 50 line web scraper and obviously doesn't have any malicious code within it. How can I make the code be less suspicious to antivirus programs?","Firstly, why is it even sending them an email? Since they'll be running the .exe, it can pop up a window and offer to save the file. If an email must be sent, it can be from the user's gmail rather than yours. - -Secondly, using your gmail account in this way may be against the terms of service. You could get your account suspended, and it may technically be a felony in the US. Consult a lawyer if this is a concern. - -To your question, there's basically no way to obfuscate the password that will be more than a mild annoyance to anyone with the least interest. At the end of the day, (a) the script runs under the control of the user, potentially in a VM or a container, potentially with network communications captured; and (b) at some point it has to decrypt and send the password. 
Decoding and following either the script, or the network communications that it makes will be relatively straightforward for anyone who wants to put in quite modest effort.",0.0,False,2,7334 -2021-03-17 12:08:15.460,Python exe - how can I restrict viewing source and byte code?,"I'm making a simple project where I will have a downloadable scraper on an HTML website. The scraper is made in Python and is converted to a .exe file for downloading purposes. Inside the python code, however, I included a Google app password to an email account, because the scraper sends an email and I need the server to login with an available Google account. Whilst .exe files are hard to get source code for, I've seen that there are ways to do so, and I'm wondering, how could I make it so that anyone who has downloaded the scraper.exe file cannot see the email login details that I will be using to send them an email when the scraper needs to? If possible, maybe even block them from accessing any of the .exe source code or bytecode altogether? I'm using the Python libraries bs4 and requests. Additionally, this is off-topic, however, as it is my first time developing a downloadable file, even whilst converting the Python file to a .exe file, my antivirus picked it up as a suspicious file. This is like a 50 line web scraper and obviously doesn't have any malicious code within it. How can I make the code be less suspicious to antivirus programs?","Sadly even today,there is no perfect solution to this problem. The ideal usecase is to provide this secret_password from web application,but in your case seems unlikelly since you are building a rather small desktop app. @@ -8031,6 +690,12 @@ Finally before compiling you can 'salt' your script or further obscufate it with As for your second question antiviruses and specifically windows don't like programms running without installers and unsigned. You can use inno setup to create a real life program installer. 
If you want to deal with UAC or other issues related to unsigned programs you can sign your program (this will cost money).",1.2,True,2,7334
+2021-03-17 12:08:15.460,Python exe - how can I restrict viewing source and byte code?,"I'm making a simple project where I will have a downloadable scraper on an HTML website. The scraper is made in Python and is converted to a .exe file for downloading purposes. Inside the python code, however, I included a Google app password to an email account, because the scraper sends an email and I need the server to login with an available Google account. Whilst .exe files are hard to get source code for, I've seen that there are ways to do so, and I'm wondering, how could I make it so that anyone who has downloaded the scraper.exe file cannot see the email login details that I will be using to send them an email when the scraper needs to? If possible, maybe even block them from accessing any of the .exe source code or bytecode altogether? I'm using the Python libraries bs4 and requests.
+Additionally, this is off-topic, however, as it is my first time developing a downloadable file, even whilst converting the Python file to a .exe file, my antivirus picked it up as a suspicious file. This is like a 50 line web scraper and obviously doesn't have any malicious code within it. How can I make the code be less suspicious to antivirus programs?","Firstly, why is it even sending them an email? Since they'll be running the .exe, it can pop up a window and offer to save the file. If an email must be sent, it can be from the user's gmail rather than yours.

Secondly, using your gmail account in this way may be against the terms of service. You could get your account suspended, and it may technically be a felony in the US. Consult a lawyer if this is a concern.

To your question, there's basically no way to obfuscate the password that will be more than a mild annoyance to anyone with the least interest. 
At the end of the day, (a) the script runs under the control of the user, potentially in a VM or a container, potentially with network communications captured; and (b) at some point it has to decrypt and send the password. Decoding and following either the script, or the network communications that it makes will be relatively straightforward for anyone who wants to put in quite modest effort.",0.0,False,2,7334 2021-03-19 10:45:05.610,Modify values in numpy array based on index and value,"How do I modify a np array based on the current value and the index? When I just want to modify certain values, I use e.g. arr[arr>target_value]=0.5 but how do I only modify the values of arr > target_value where also the index is greater than a certain value?","For that very specific example you would just use indexing I believe eg arr[100:][arr[100:] > target_value]=0.5 in general it could be conceptually easier to do these two things separately. First figure out which indices you want, then check whether they satisfy whatever condition you want.",0.0,False,1,7335 @@ -8520,9 +1185,11 @@ So reverted to python 3.8. Whenever I install some package it gets installed usi Help me, how can I use python 3.9 pip and install packages in python 3.9 without changing the default version. Any help is appreciated. --> Thing I want is that when I want to install python package using -pip3 install it must install in python3.9 and not in python3.8","You don't need to install pip separately -You should be able to refer to it as such -python3.9 -m pip install",0.5457054096481145,False,3,7383 +pip3 install it must install in python3.9 and not in python3.8","Hello everyone I fixed my issue. +The problem is we cannot override default python version in Ubuntu as so many things depend on it. +So I just made an alias as : alias pip3='python3.9 -m pip' and alias for python : alias python3='/usr/bin/python3.9' +If anyone face this issue please do what I specify and you will be good to go. 
+Now all my packages are being installed in python3.9.",0.296905446847765,False,3,7383 2021-04-12 13:35:47.970,How to use pip for python 3.9 instead of inbuilt python 3.8 in Ubuntu?,"Today I faced a problem regarding pip3 in Ubuntu. Ubuntu comes with python 3.8 but I wanted to use latest versions of python, like 3.9 or maybe 3.10. So I installed it using 'ppa:deadsnakes' repository and also installed pip. But the problem is I want to use pip in python 3.9 instead of version 3.8. So I changed the default python version to 3.9 and everything crashed. So reverted to python 3.8. Whenever I install some package it gets installed using python 3.8. Help me, how can I use python 3.9 pip and install packages in python 3.9 without changing the default version. @@ -8534,11 +1201,9 @@ So reverted to python 3.8. Whenever I install some package it gets installed usi Help me, how can I use python 3.9 pip and install packages in python 3.9 without changing the default version. Any help is appreciated. --> Thing I want is that when I want to install python package using -pip3 install it must install in python3.9 and not in python3.8","Hello everyone I fixed my issue. -The problem is we cannot override default python version in Ubuntu as so many things depend on it. -So I just made an alias as : alias pip3='python3.9 -m pip' and alias for python : alias python3='/usr/bin/python3.9' -If anyone face this issue please do what I specify and you will be good to go. -Now all my packages are being installed in python3.9.",0.296905446847765,False,3,7383 +pip3 install it must install in python3.9 and not in python3.8","You don't need to install pip separately +You should be able to refer to it as such +python3.9 -m pip install",0.5457054096481145,False,3,7383 2021-04-13 19:27:05.077,"kivy app in adroid with buildozer, terminal loops on ""#waiting for application to start""","I made a sample app that just says hello world with kivy and I am trying to put it on an android tablet I bought. 
I used a virtual machine (Virtual box) and use bulldozer to loaded to the tablet. However when I run it the terminal just prints in a loop #waiting for the application to start @@ -9003,10 +1668,10 @@ More advanced options involve PDO mapping, where you can decide which parts of t Other misc useful stuff is the SAVE/LOAD features of CANopen in case the device supports them. Then you can store your configuration permanently so that your application doesn't need to send SDOs at start-up for configuration every time the system is used. Heartbeat might be useful to enable to ensure that the device is up and running on regular basis. Your application will then act as Heartbeat consumer.",0.3869120172231254,False,1,7418 2021-04-29 14:46:02.793,How to represent free functions in a Class Diagram Python,"I have a number of free functions in a couple of Python modules and I need to create a UML Class Diagram to represent my entire program. +Can I represent a free functions in a Class Diagram somehow or do I need to create a Utility Class so I can represent them in my Class Diagram?","Even though UML was conceived in a time, when object orientation was hyped, it doesn't mean that it cannot be used for functions. What many don't realize is, that Behavior in the UML is a Class. Therefore, any Behavior can be shown in a class diagram. Just put the metaclass in guillemets above the name, e.g. «activity». If you plan to describe the function with an activity diagram, that makes perfect sense. However, if you plan to describe it in (pseudo) code or in natural language, you can use «function behavior» which is defined as a behavior without side effects. Or, if it can have side effects, just use «opaque behavior».",0.0,False,2,7419 +2021-04-29 14:46:02.793,How to represent free functions in a Class Diagram Python,"I have a number of free functions in a couple of Python modules and I need to create a UML Class Diagram to represent my entire program. 
Can I represent free functions in a Class Diagram somehow, or do I need to create a Utility Class so I can represent them in my Class Diagram?","You will need to have some class in order to represent a ""free function"". You are quite free in how to do that. What I usually do is create a stereotyped class, and it would be ok to use «utility» for that. Anything else would work, but of course you need to document that in your domain. Usually a stereotype is bound to a profile, but most tools allow the use of freely defined stereotypes. Though that is not 100% UML compliant, it is quite a common practice.",0.0,False,2,7419
-2021-04-29 14:46:02.793,How to represent free functions in a Class Diagram Python,"I have a number of free functions in a couple of Python modules and I need to create a UML Class Diagram to represent my entire program.
-Can I represent a free functions in a Class Diagram somehow or do I need to create a Utility Class so I can represent them in my Class Diagram?","Even though UML was conceived in a time, when object orientation was hyped, it doesn't mean that it cannot be used for functions. What many don't realize is, that Behavior in the UML is a Class. Therefore, any Behavior can be shown in a class diagram. Just put the metaclass in guillemets above the name, e.g. «activity». If you plan to describe the function with an activity diagram, that makes perfect sense. However, if you plan to describe it in (pseudo) code or in natural language, you can use «function behavior» which is defined as a behavior without side effects. Or, if it can have side effects, just use «opaque behavior».",0.0,False,2,7419
2021-04-30 07:37:09.913,Is there a way to run django constantly on the server to update a database based on live data,"I am developing a virtual stock market application on django and came upon the following problem. In any stock market application, there is an option of limit buy, stop loss and target sell. 
This essentially means to buy a share if it ever touches a price which is lower than the current price, to sell a share if it touches a very low price and to sell a share if it touches a high price respectively. For this, the server needs to constantly monitor the live data coming in from the API of a particular stock and perform the action if it happens. However, during this time no one may be making any requests on the site so how do I get django to monitor the prices of the stocks every 5 seconds or so to check if the order needs to be executed or not?","The common approaches here would be to create a Django management command that performs the checks, you can then use a cronjob on your server to schedule this every minute. Alternatively, you can use an asynchronous worker, which can also be used to schedule repetitive tasks frequently. The most commonly used solution is Celery but it does have a bit of a learning curve (which tends to manifest itself in reliability) that some other solutions such as Dramatiq seem to be trying to improve upon. Celery is probably the easiest to find instructional information for though.",0.0,False,1,7420 2021-04-30 14:43:52.133,Python package repositories on CentOS/Ubuntu,"I'm wondering how does it work with python package repositories for CentOS (and also other distributions) as I can't find any article about that. Where do python packages/version come from? @@ -9110,8 +1775,8 @@ They're both should be similar For example if I use .toPandas() I will put all the data in memory, does something similar happends with createOrReplaceTempView ? or is still distributed? 
.toPandas() collect all data and return to driver's memory, createOrReplaceTempView is lazy",0.0,False,1,7429 -2021-05-06 00:44:41.687,how to install and use mediapipe on Raspberry Pi 4?,I followed the official mediapipe page but without any result so can someone help to install mediapipe in raspberry pi 4 in windows it is easy to install it and use it but in arm device like raspberry pi i did not find any resources.,if you use python3 you can try sudo pip3 install mediapipe-rpi4,0.1016881243684853,False,2,7430 2021-05-06 00:44:41.687,how to install and use mediapipe on Raspberry Pi 4?,I followed the official mediapipe page but without any result so can someone help to install mediapipe in raspberry pi 4 in windows it is easy to install it and use it but in arm device like raspberry pi i did not find any resources.,I ran the command sudo pip3 install media pipe-rpi4. This worked. When I try to import the module in python I get ModuleNotFoundError: No module named ‘mediapipe.python._framework_bindings’,0.0,False,2,7430 +2021-05-06 00:44:41.687,how to install and use mediapipe on Raspberry Pi 4?,I followed the official mediapipe page but without any result so can someone help to install mediapipe in raspberry pi 4 in windows it is easy to install it and use it but in arm device like raspberry pi i did not find any resources.,if you use python3 you can try sudo pip3 install mediapipe-rpi4,0.1016881243684853,False,2,7430 2021-05-06 09:08:09.680,ImportError: relocation error: R_AMD64_32: /scipy-1.6.2-py3.7-solaris-2.11-i86pc.64bit.egg/scipy/spatial/qhull.cpython-37m.so: symbol (unknown),"I am trying to run my code in Solaris environment with Python 3.7.4 [GCC 7.3.0] on sunos5. When importing necessary libraries from scipy import stats I face this issue. Does anybody know how can I fix this? Thank you in advance.","This is a library linking issue. 
Try the following, as it may need re-installing, or updated: pip install pyhull @@ -9150,19 +1815,6 @@ Click Administrative Tools. Click Event Viewer.",0.0,False,1,7436 2021-05-10 13:52:16.680,"how do i make python find words that look similar to a bad word, but not necessarily a proper word in english?","I'm making a cyberbullying detection discord bot in python, but sadly there are some people who may find their way around conventional English and spell a bad word in a different manner, like the n-word with 3 g's or the f word without the c. There are just too many variants of bad words some people may use. How can I make python find them all? I've tried pyenchant but it doesn't do what I want it to do. If I put suggest(""racist slur""), ""sucker"" is in the array. I can't seem to find anything that works. -Will I have to consider every possibility separately and add all the possibilities into a single dictionary? (I hope not.)","You could try looping through the string that you are moderating and putting it into an array. -For example, if you wanted to blacklist ""foo"" -x=[[""f"",""o"",""o""],["" ""], [""f"",""o"",""o"",""o""]] -then count the letters in each word to count how many of each letter is in each word: -y = [[""f"":""1"", ""o"":""2""], ["" "":""1""], [""f"":""1"", ""o"":""3""]] -then see that y[2] is very similar to y[0] (the banned word). -While this method is not perfect, it is a start. -Another thing to look in to is using a neural language interpreter that detects if a word is being used in a derogatory way. A while back, Google designed one of these. -The other answer is just that no bot is perfect. -You might just have to put these common misspellings in the blacklist. 
-However, the automatic approach would be awesome if you got it working with 100% accuracy.",0.0,False,2,7437 -2021-05-10 13:52:16.680,"how do i make python find words that look similar to a bad word, but not necessarily a proper word in english?","I'm making a cyberbullying detection discord bot in python, but sadly there are some people who may find their way around conventional English and spell a bad word in a different manner, like the n-word with 3 g's or the f word without the c. There are just too many variants of bad words some people may use. How can I make python find them all? -I've tried pyenchant but it doesn't do what I want it to do. If I put suggest(""racist slur""), ""sucker"" is in the array. I can't seem to find anything that works. Will I have to consider every possibility separately and add all the possibilities into a single dictionary? (I hope not.)","Unfortunately, spell checking (for different languages) alone is still an open problem that people do research on, so there is no perfect solution for this, let alone for the case when the user intentionally tries to insert some ""errors"". Fortunately, there is a conceptually limited number of ways people can intentionally change the input word in order to obtain a new word that resembles the initial one enough to be understood by other people. For example, bad actors could try to: @@ -9188,6 +1840,19 @@ c. check if this new form of the word is present in the set, if so, censor it or This solution lacks the protection against words with characters separated by one or multiple white spaces / newlines (e.g. ""killer"" -> ""k i l l e r""). Depending on how long the messages are (I believe they are generally short in chat rooms), you can try to consider each substring of the initial message with removed whitespaces, instead of each word detected by the white space separator in step 3. 
This will take more time, as generating each substring will take alone O(message_length^2) time.",0.0,False,2,7437 +2021-05-10 13:52:16.680,"how do i make python find words that look similar to a bad word, but not necessarily a proper word in english?","I'm making a cyberbullying detection discord bot in python, but sadly there are some people who may find their way around conventional English and spell a bad word in a different manner, like the n-word with 3 g's or the f word without the c. There are just too many variants of bad words some people may use. How can I make python find them all? +I've tried pyenchant but it doesn't do what I want it to do. If I put suggest(""racist slur""), ""sucker"" is in the array. I can't seem to find anything that works. +Will I have to consider every possibility separately and add all the possibilities into a single dictionary? (I hope not.)","You could try looping through the string that you are moderating and putting it into an array. +For example, if you wanted to blacklist ""foo"" +x=[[""f"",""o"",""o""],["" ""], [""f"",""o"",""o"",""o""]] +then count the letters in each word to count how many of each letter is in each word: +y = [[""f"":""1"", ""o"":""2""], ["" "":""1""], [""f"":""1"", ""o"":""3""]] +then see that y[2] is very similar to y[0] (the banned word). +While this method is not perfect, it is a start. +Another thing to look in to is using a neural language interpreter that detects if a word is being used in a derogatory way. A while back, Google designed one of these. +The other answer is just that no bot is perfect. +You might just have to put these common misspellings in the blacklist. +However, the automatic approach would be awesome if you got it working with 100% accuracy.",0.0,False,2,7437 2021-05-10 16:39:31.573,How to check if a vector hirogramm correlates with uniform distribution?,"I have a vector of floats V with values from 0 to 1. I want to create a histogram with some window say A==0.01. 
And check how close is the resulting histogram to uniform distribution getting one value from zero to one where 0 is correlating perfectly and 1 meaning not correlating at all. For me correlation here first of all means histogram shape. How one would do such a thing in python with numpy?","You can create the histogram with np.histogram. Then, you can generate the uniform histogram from the average of the previously retrieved histogram with np.mean. Then you can use a statistical test like the Pearson coefficient to do that with scipy.stats.pearsonr.",0.0,False,1,7438 2021-05-11 09:33:08.827,PyAutoGui how to write special characters,"I am new in python and I want to make writing bot with PyAutoGui. It works good but it cant write character like this ""č"", ""š"", ""ř"", ""ž"".How to write this special symbols?","Ok, after some time I finally know how to do this. I just use another plugin ""keyboard""",1.2,True,1,7439 @@ -9233,9 +1898,9 @@ extracts files on premise from db and then uploads the extracted files to the above S3 folder. Sometimes, the python script which is scheduled to run through windows task scheduler job is just starting and finishing in seconds without doing any work. -In order to send an alert notification when this happens, I am thinking of writing a lambda that is scheduled to run after like 5 minutes to see if the folder contents is deleted or not in the last few minutes to an SNS topic. Is this doable? Here the lambda trigger is not an S3 event but a scheduled event that can able to read S3 delete action.","Sure, you could do that. -An easier method might be to add a step at the end of the Windows task that basically says ""The job completed successfully"". It could upload this file to S3. -Then, the scheduled AWS Lambda function could simply check the LastModified date of that file. 
If it is older than one hour (or whatever), then send an alert via Amazon SNS.",1.2,True,2,7447 +In order to send an alert notification when this happens, I am thinking of writing a lambda that is scheduled to run after like 5 minutes to see if the folder contents is deleted or not in the last few minutes to an SNS topic. Is this doable? Here the lambda trigger is not an S3 event but a scheduled event that can able to read S3 delete action.","On AWS: +You can set up an event notifier on the s3 bucket supporting event type s3:ObjectRemoved:, s3:ObjectCreated:, where the event notification can be on SNS Topic +[https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html]",0.1352210990936997,False,2,7447 2021-05-16 17:06:19.137,Alert if the contents of an S3 folder is not deleted by specific time,"Can anyone please suggest me how to implement this specific use case? Every morning a python job from on premise server @@ -9244,9 +1909,9 @@ extracts files on premise from db and then uploads the extracted files to the above S3 folder. Sometimes, the python script which is scheduled to run through windows task scheduler job is just starting and finishing in seconds without doing any work. -In order to send an alert notification when this happens, I am thinking of writing a lambda that is scheduled to run after like 5 minutes to see if the folder contents is deleted or not in the last few minutes to an SNS topic. Is this doable? 
Here the lambda trigger is not an S3 event but a scheduled event that can able to read S3 delete action.","On AWS: -You can set up an event notifier on the s3 bucket supporting event type s3:ObjectRemoved:, s3:ObjectCreated:, where the event notification can be on SNS Topic -[https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html]",0.1352210990936997,False,2,7447 +In order to send an alert notification when this happens, I am thinking of writing a lambda that is scheduled to run after like 5 minutes to see if the folder contents is deleted or not in the last few minutes to an SNS topic. Is this doable? Here the lambda trigger is not an S3 event but a scheduled event that can able to read S3 delete action.","Sure, you could do that. +An easier method might be to add a step at the end of the Windows task that basically says ""The job completed successfully"". It could upload this file to S3. +Then, the scheduled AWS Lambda function could simply check the LastModified date of that file. If it is older than one hour (or whatever), then send an alert via Amazon SNS.",1.2,True,2,7447 2021-05-18 07:10:49.937,How we can implement attribute prediction and image search model as one?,"We want to implement attribute prediction and image search model in single application. Step1. Upload of image ,will give attribute details. example ,If we upload dog image, then attribute details will display like color, breed. @@ -9270,10 +1935,10 @@ It means my timestamp of metrics is datetime.today(), and I need to set the Metr What other common ways are there to generate the csrf-key on the fly? (I explicitly checked by inspecting the post request that the csrf is included in the request, but I don't understand how it gets there)","The csrf key must be somewhere in the webpage you are trying to access. 
The csrf key is not generated by the user; instead, it is a unique secret value generated by the server-side application and transmitted to the client.",0.0,False,1,7452
Do you know how I could change my controls to move the FirstPersonController?
And how do I do what I'm trying to do correctly?","1 or 0 evaluates to 1, and since 0 is not equal to 1, the expression is false.
@@ -9670,7 +2335,13 @@ On Ubuntu 18.04, with Python 3.9.5 (installs made inside docker container). I get the following exception when trying to import gensim: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject. Not sure how i can make this work, I tried downgrading several libraries but still not achieved to make it work in Ubuntu. -Edit : it works on Python 3.8.10","It is the numpy version , downgrade numpy to 1.19.2 or lower and see.",0.2012947653214861,False,2,7499 +Edit : it works on Python 3.8.10","Tensorflow is currently not compatible with numpy 1.20. +Also, many changes happened from gensim 3.X to 4.X, which may lead to some problems. +Try installing the following setup: + +numpy 1.19.2 +gensim 3.8.3 +tensorflow 2.3.0",0.2012947653214861,False,2,7499 2021-06-23 19:24:07.863,Tensorflow 2.5.0 and Gensim 4.0.1 compatibility with numpy,"I have a compatibility issue when running : pip install numpy==1.19.4 pip install tensorflow=2.5.0 @@ -9679,13 +2350,7 @@ On Ubuntu 18.04, with Python 3.9.5 (installs made inside docker container). I get the following exception when trying to import gensim: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject. Not sure how i can make this work, I tried downgrading several libraries but still not achieved to make it work in Ubuntu. -Edit : it works on Python 3.8.10","Tensorflow is currently not compatible with numpy 1.20. -Also, many changes happened from gensim 3.X to 4.X, which may lead to some problems. 
-Try installing the following setup: - -numpy 1.19.2 -gensim 3.8.3 -tensorflow 2.3.0",0.2012947653214861,False,2,7499 +Edit : it works on Python 3.8.10","It is the numpy version , downgrade numpy to 1.19.2 or lower and see.",0.2012947653214861,False,2,7499 2021-06-25 00:22:25.170,How to run python script on google cloud using android studio?,"I want to run python script on google cloud using android studio ex: I have an android application which contain button and google cloud VM instance which has a python script. @@ -9802,10 +2467,6 @@ I tried using self-referencing groups, endpoints but nothing worked. Also custom I see that for Spark jobs ESConnector is available but can not find any working way to make it with Pythonshell jobs. Is there any way to allow such connection?","Solved, I was missing route to NAT gateway in private subnet.",0.0,False,1,7513 2021-07-01 18:44:09.327,Running python scripts does not create Airflow DAG?,"I ssh'd into a linux server to run Airflow. I have made the scheduler(airflow scheduler -D) and database initialized (airflow db init). However, even when trying to create the simplest of DAGs using python (I also tried using Airflow's predefined example py scripts), Airflow does not list the DAG when running the airflow dags list command. I'm sure the syntax of my py code is correct because the DAG showed up on a windows instance but my setup for airflow within Linux is somehow not correct? Also used python3 script.py to execute.","Basically the dags folder's permission's weren't allowing anything to be written into it. I just sudo'd every command or chmod the folder. Also to ensure Airflow was correctly run, I suggest using a YAML file with docker compose to streamline Airflow setup.",0.3869120172231254,False,1,7514 -2021-07-02 11:02:55.157,Can a pdf be signed multiple times using python endesive library?,"I have a single pdf with multiple copies of the same document merged into one. 
I want to digitally sign each and every copy, meaning the pdf must have multiple digital signatures. I'm using the endesive library in Python to digitally sign the PDF. The signature shows as valid when I sign the document once, but when I write the same signature multiple times, it shows that the signature is invalid (when opening the document). Is it right to digitally sign a document multiple times and, if yes, how can I achieve it using Python's endesive library?","There is a strong difference between digital and manual signature: with a manual signature, you add a signature on a paper sheet, and you can/must separately sign each page.
Is it right to digitally sign a document multiple times and, if yes, how can I achieve it using Python's endesive library?","In order to make multiple signatures, you must incrementally save the pdf each time you add a signature. If you use PyMuPDF to build the document, you must save with saveIncr()
+What is possible is to assemble the parts into a large document and then sign the final document. That final signature will be evidence that the signer attests that the document was valid at the time they signed it, and that it has not been tampered with since that point.
I have the video in mp4 format but I can't figure out how to play it inside the game with Ursina. Any help will be much appreciated.
I read about Unity but it is too big to download for a little side project)","From Entity Basics in the documentation: -e4 = Entity(model='cube', texture='movie_name.mp4') # set video texture",0.0,False,2,7518 +(Side question : do you think Ursina is good for a beginner in 3D gaming? If i want to publish my game on my website, isn't it better for me to learn javascript ? I read about Unity but it is too big to download for a little side project)","Well, I don't think there is a way to do that. the closest thing you can do to that is having a folder filled with all the frames of your video in .png or .jpg files, then adding a quad to the world and changing the texture of it to the next frame every fraction of a second depending on the framerate. this, however would make your computer l a g. trust me, I've tried it. it would probably be better to have a separate window with some sort of module that plays .mp4 files for playing the file. +In other words, there is no feasible way to do that.",0.0,False,2,7518 2021-07-06 11:51:11.100,How to fetch swipe up count from Instagram story insights graph API?,"I need to fetch swipe up count which show in the insights of Instagram.As Facebook is not providing swipe up count through their graph API so how can I get that data. Scraping won't work as I already did and I want to fetch those data in python or javascript Thanks in advance for help","for now facebook is not providing this data in graph-api and it is only provided in influences in insights so for now its not possible for now to fetch but you can get by web scraping @@ -10053,15 +2718,15 @@ I hope this article is useful for you and you have answered your question",1.2,T Any thoughts?","My experience is from Visual Studio Code and not Visual Studio 2019 but I suspect they are most likely similar.. Press F1 and type ""View: Toggle Activity Bar Visibility"" and click on it when it shows up. 
That should make your activity bar visible again.",0.0,False,3,7552 2021-08-02 07:14:40.443,"visual studio Display, how to access activity bar?","I am in the Microsoft Visual Studio environment, most screenshots of this environment show an activity bar in the left sidebar. However, I cannot get this to appear in my display, I have sought to access this activity bar in view but cannot see any reasonable options. -Any thoughts?","On Microsoft Visual Studio 2019 if you check on the ""Debug"" menu, you can find other windows that are helpful when the system is running. -I'm not sure which window you are looking for, but there's a ""Diagnostic Tools"" that could be the one.",0.0,False,3,7552 -2021-08-02 07:14:40.443,"visual studio Display, how to access activity bar?","I am in the Microsoft Visual Studio environment, most screenshots of this environment show an activity bar in the left sidebar. However, I cannot get this to appear in my display, I have sought to access this activity bar in view but cannot see any reasonable options. Any thoughts?","Follow the steps: Open the Command Palette ( Ctrl + Shift + P ) and type Keyboard Shortcuts. Click on Toggle Activity Bar Visibility, and choose your shortcut using the keyboard! It's done!",0.0,False,3,7552 +2021-08-02 07:14:40.443,"visual studio Display, how to access activity bar?","I am in the Microsoft Visual Studio environment, most screenshots of this environment show an activity bar in the left sidebar. However, I cannot get this to appear in my display, I have sought to access this activity bar in view but cannot see any reasonable options. +Any thoughts?","On Microsoft Visual Studio 2019 if you check on the ""Debug"" menu, you can find other windows that are helpful when the system is running. 
I'm not sure which window you are looking for, but there's a ""Diagnostic Tools"" window that could be the one.",0.0,False,3,7552
I am running Python 3.7 and for some odd reason my installation has only enum34 (1.1.6). +I am NOT an IT guy, but I am the sole programmer for a startup and am terrified of breaking my environment and not being able to fix it. +Is the proper procedure to simply install enum (I understand it may overwrite enum34), or uninstall enum34, then install enum? I've also seen posts where folks had difficulties deleting enum34. Any hints on how to avoid those?","For the record: enum on PyPI -- preexisting third-party enum library @@ -10252,10 +2921,6 @@ enum in the stdlib -- this was added in version 3.4 and has been a part of Pytho -- Sadly, I don't know how to solve your problem.",-0.2012947653214861,False,2,7585 -2021-08-29 15:00:01.400,enum vs enum34 in Python 3.7 causing pyinstaller errors,"I tried running PyInstaller to create a simple executable, but it generated errors which other posts here suggest its an enum vs enum34 issue. I am running Python 3.7 and for some odd reason my installation has only enum34 (1.1.6). -I am NOT an IT guy, but I am the sole programmer for a startup and am terrified of breaking my environment and not being able to fix it. -Is the proper procedure to simply install enum (I understand it may overwrite enum34), or uninstall enum34, then install enum? -I've also seen posts where folks had difficulties deleting enum34. Any hints on how to avoid those?",I ran into the same issue today. I just uninstalled enum34(using pip uninstall enum34) and then ran pyinstaller in the terminal and everything seemed to be working,0.0,False,2,7585 2021-08-29 20:08:41.103,Processing thousands of files in parallel,"I have thousands of files stored in MongoDB which I need to fetch and process. Processing consists of a few steps which should be done sequentially. The whole process takes around ~2 mins per file from start to end. My question is how to do that as fast as possible while being scalable in future? 
Should I do it in pure Python, or should I maybe use Airflow + Celery (or even Celery by itself)?
For example, a simple loop that goes over every item in a one-dimensional array of size n has an O(n) run time - which is to say that it will always run in time proportional to the size of the array, no matter what.
What difference does it make, then, which scale is chosen?",0.0,False,2,7620
If not, reinstall it.",0.0,False,3,7651 2021-10-13 15:25:31.517,"I tried to add Python to VSCode, but it won't work","I opened command prompt (I'm on Windows) and I typed: '''none pip3 install discord @@ -10975,11 +3644,7 @@ Then when I opened VSCode up and typed: import discord I got this error mess ""discord"" is not accessedPylance Import ""discord"" could not be resolvedPylancereportMissingImports -What does this mean, and how can I fix it? I was really looking forward to coding the bot, but don't know how, now that this is messed up.","Open an integrated Terminal in VS Code, - -run python --version, it should be python3.9.7 which is selected as python interpreter and shown in status bar. - -run pip show discord to check if its location is \..\python3.9.3\lib\site-packages. If not, reinstall it.",0.0,False,3,7651 +What does this mean, and how can I fix it? I was really looking forward to coding the bot, but don't know how, now that this is messed up.",check in the bottom left corner of the VS Code window for which version of Python is it using. This issue usually occurs for me when I’m working in a virtual environment but VS Code is pointing to my global Python installation.,0.0,False,3,7651 2021-10-13 17:47:15.327,"Pyinstaller: It works, but .exe window and App Window is separated. Any ideas?","The Tkinter app works good after deployment with Pyinstaller, but it opens 2 windows: .exe and app (tkinter container). Any ideas? how to fix it?",I'm not sure I understand completely but you may want to try saving the file as .pyw vs .py before making it an executable. you may be seeing the console running.,0.0,False,1,7652 @@ -11255,8 +3920,7 @@ ERROR:saml2.client_base:XML parse error: Failed to verify signature And it seems to be a Windows problem. Does anyone know how should I implement this? 
The command used to verify the XML is: C:\Windows\xmlsec1.exe --verify --enabled-reference-uris empty,same-doc --enabled-key-data raw-x509-cert --pubkey-cert-pem C:\Users\me\AppData\Local\Temp\tmp8wssc6_f.pem --id-attr:ID urn:oasis:names:tc:SAML:2.0:assertion:Assertion --node-id _579304c7-f1c4-5918-83ee-4b33c5df1e00 --output C:\Users\me\AppData\Local\Temp\tmpw9lbnowc.xml C:\Users\me\AppData\Local\Temp\tmpcg9l7jik.xml And it returns b"""". -Thanks in advance.","Your question is a little vague. It seems that you have sent an authenrequest and got a response and the application on your end is throwing the signature validation error. If that is correct then you likely do not have the correct cacert from the identity provider defined in your application. -Questions about SAML and verifying XML signatures really need the original xml idealy in base64 so it is possible to try to check the signature.",0.0,False,2,7682 +Thanks in advance.","For those who may face this problem in the future: Windows OS (still don't know certainly if the problem is caused due an OS particularity, I couldn't test it in other environments), pysaml2, and django-saml2-auth don't handle self signed certificates very well. I could solve the problem by just forking pysaml2/django-saml2-auth and passing downloaded cert-files from IdP (.pem) manually.",0.0,False,2,7682 2021-11-10 16:03:50.280,Problems while implementing SSO using Django SAML2 Auth and AzureAD,"The error the application throws is: ERROR:saml2.sigver:check_sig: ERROR:saml2.response:correctly_signed_response: Failed to verify signature @@ -11265,7 +3929,8 @@ ERROR:saml2.client_base:XML parse error: Failed to verify signature And it seems to be a Windows problem. Does anyone know how should I implement this? 
The command used to verify the XML is: C:\Windows\xmlsec1.exe --verify --enabled-reference-uris empty,same-doc --enabled-key-data raw-x509-cert --pubkey-cert-pem C:\Users\me\AppData\Local\Temp\tmp8wssc6_f.pem --id-attr:ID urn:oasis:names:tc:SAML:2.0:assertion:Assertion --node-id _579304c7-f1c4-5918-83ee-4b33c5df1e00 --output C:\Users\me\AppData\Local\Temp\tmpw9lbnowc.xml C:\Users\me\AppData\Local\Temp\tmpcg9l7jik.xml And it returns b"""". -Thanks in advance.","For those who may face this problem in the future: Windows OS (still don't know certainly if the problem is caused due an OS particularity, I couldn't test it in other environments), pysaml2, and django-saml2-auth don't handle self signed certificates very well. I could solve the problem by just forking pysaml2/django-saml2-auth and passing downloaded cert-files from IdP (.pem) manually.",0.0,False,2,7682 +Thanks in advance.","Your question is a little vague. It seems that you have sent an authenrequest and got a response and the application on your end is throwing the signature validation error. If that is correct then you likely do not have the correct cacert from the identity provider defined in your application. +Questions about SAML and verifying XML signatures really need the original xml idealy in base64 so it is possible to try to check the signature.",0.0,False,2,7682 2021-11-11 10:13:26.840,Interval Prediction for a Time Series | Anomaly in Time Series,"I have a time series in which i am trying to detect anomalies. The thing is that with those anomalies i want to have a range for which the data points should lie to avoid being the anomaly point. I am using the ML .Net algorithm to detect anomalies and I have done that part but how to get range? If by some way I can get the range for the points in time series I can plot them and show that the points outside this range are anomalies. 
I have tried to calculate the range using prediction interval calculation but that doesn't work for all the data points in the time series. @@ -11512,12 +4177,12 @@ Any other python based repo can be imported either by a custom build script (tha 2021-12-01 04:40:14.133,I want to create a MYSQL table to show which users are working on which project,"I have two tables - ""Users"" and ""Projects"", i want to be able to show which users are assigned to which project. There may be multiple users assigned to the project. I was thinking of creating a 'project_users_matrix' table where a new column would be created for each user and a new row created for each project, then the cells can just show a 1 or 0 depending on if the person is working on that project. The 'cleaner' option would be to have columns 'user_1', 'user_2', 'user_3' in the project database but then there can't be an indeterminate number of users for a project. -Is there a better way to do this? It seems like there should be...",you need to create 2 more fields in project-table 1st for User_id and 2nd for Active/inactive in 1st field you need to store id of user who is working with that project and in 2nd field enter value 0/1 and provide button that if user is active on that table it shows 1. and once it done with his work.user can update it with 0.,0.0,False,2,7701 +Is there a better way to do this? It seems like there should be...","If users can participate in many projects, and projects can have many users then you have a many-to-many relationship and you need three tables: users, projects and an association table that contains user ids and projects ids only. Each active user-project combination should have a row in the association table. 
+If users cannot participate in multiple projects simultaneously then you have either a one-to-many relationship between projects and users, or users and projects, which can be expressed by a foreign key column on the many side.",0.2012947653214861,False,2,7701 2021-12-01 04:40:14.133,I want to create a MYSQL table to show which users are working on which project,"I have two tables - ""Users"" and ""Projects"", i want to be able to show which users are assigned to which project. There may be multiple users assigned to the project. I was thinking of creating a 'project_users_matrix' table where a new column would be created for each user and a new row created for each project, then the cells can just show a 1 or 0 depending on if the person is working on that project. The 'cleaner' option would be to have columns 'user_1', 'user_2', 'user_3' in the project database but then there can't be an indeterminate number of users for a project. -Is there a better way to do this? It seems like there should be...","If users can participate in many projects, and projects can have many users then you have a many-to-many relationship and you need three tables: users, projects and an association table that contains user ids and projects ids only. Each active user-project combination should have a row in the association table. -If users cannot participate in multiple projects simultaneously then you have either a one-to-many relationship between projects and users, or users and projects, which can be expressed by a foreign key column on the many side.",0.2012947653214861,False,2,7701 +Is there a better way to do this? It seems like there should be...",you need to create 2 more fields in project-table 1st for User_id and 2nd for Active/inactive in 1st field you need to store id of user who is working with that project and in 2nd field enter value 0/1 and provide button that if user is active on that table it shows 1. 
and once the user is done with the work they can update it to 0.,0.0,False,2,7701
You can also display a specific line of the file at a specific line of the terminal by using less +11 -j 10 filename.py.",0.0,False,2,7714 +2021-12-12 04:08:53.057,Go to a specific line in terminal using Python,"I had a Python code that print 33 line in terminal, so I want to jump back to line 11 and rewrite all text in that line, how can i do that?",You will have to use a package like curses to take control of the terminal. That's what apps like vi use to do their drawing. That lets you draw strings at specific row and column position.,1.2,True,2,7714 2021-12-13 11:24:24.413,How to train a LSTM on a sequence of dates?,"if i wanted to train an lstm to predict the next date in a sequence of dates, how would i do that since lstm require a scaled value? example of data: @@ -11853,9 +4518,9 @@ Note: even when I uninstall the module it still tries to run in vscode, and the 2021-12-20 14:32:06.050,From Python to create an alarm in Zabbix,"Could you please tell me how can I create an alarm from a Python script in a Zabbix system? I have a Python script in which a certain function is processed, and at a certain point I would like to create an alarm in the Zabbix system when a certain condition is created in the script. I also have a mail server. I was thinking of creating a separate mailbox for Zabbix, to this email address I will send a letter from Python, and the Zabbix system will receive this letter, process and create a Problem. Is such functionality possible?","Easiest is to use zabbix for this. To do that you feed values to an item in zabbix in one of the many ways and create a trigger that fires when you want it. If a script is needed to generate the values you can implement the script as a user parameter if it is short running. If it takes more than a few seconds using zabbix sender might be smarter.",1.2,True,1,7728 -2021-12-21 13:20:01.163,Is it possible to run a js or python script from a .js or .py file when a button in Flutter is pressed? 
(Not Flutter Web),"I want to make a flutter app that, when a button is pressed, it runs a python or a js script that I made and is saved in a .js or a .py file. Essentially, I want to run a .js or a .py file when a button in Flutter is pressed. Is that possible? And if yes, how can it be done? I am making a desktop application that has to work offline.","Yes you can using API, you make API in django or flask backend which run the python script you want after you make a request to your API via clicking the flutter button",0.2012947653214861,False,2,7729 2021-12-21 13:20:01.163,Is it possible to run a js or python script from a .js or .py file when a button in Flutter is pressed? (Not Flutter Web),"I want to make a flutter app that, when a button is pressed, it runs a python or a js script that I made and is saved in a .js or a .py file. Essentially, I want to run a .js or a .py file when a button in Flutter is pressed. Is that possible? And if yes, how can it be done? I am making a desktop application that has to work offline.","Yes, this is possible, but it requires Python and JS interpreters to be installed on the device. A way to ensure this is to ship them together with your app, although this will take a lot of extra space. From your app, you just launch the interpreter as a subprocess, for example python3 your_script.py If your script is hardcoded and not changed later by the app, you can also turn it into a standalone executable, using a tool such as py2exe. But this is basically the same thing, the interpreter just ends up included in your script.",1.2,True,2,7729 +2021-12-21 13:20:01.163,Is it possible to run a js or python script from a .js or .py file when a button in Flutter is pressed? (Not Flutter Web),"I want to make a flutter app that, when a button is pressed, it runs a python or a js script that I made and is saved in a .js or a .py file. Essentially, I want to run a .js or a .py file when a button in Flutter is pressed. Is that possible? 
And if yes, how can it be done? I am making a desktop application that has to work offline.","Yes you can using API, you make API in django or flask backend which run the python script you want after you make a request to your API via clicking the flutter button",0.2012947653214861,False,2,7729 2021-12-22 07:05:06.267,How to choose required cluster after k-means clustering in python opencv?,"i'm doing k-mean clustering on an image (fruit tree image) with k=4 clusters. when i display 4 clusters seperately, fruits goes to cluster1, stem goes to cluster 2, leaves goes to clster3 and background goes to cluster4. i'm further interested in fruit clutser only. the probelm is when i change image to another fruit tree image, fruit cluster goes to cluster2 or sometimes to clsuter3 or 4. my wish is to not change the cluster for fruit, means if fruit is in cluster1 it should be in cluster1 in all images of fruit tree. how can i do that? 2ndly if its not possible i want to select that cluster automatically which contains fruit. how can i do that? thanks in advance.","K-means clustering is unsupervised, meaning the algorithm does not know any labels. That is why the clusters are assigned at random to the targets. You can use a heuristic evaluation of the fruit cluster to determine which one it is. For example, based on data about the pixels (color, location, etc), and then assign it a label by hand. In any case, this step will require human intervention of some sort.",0.3869120172231254,False,1,7730 2021-12-22 11:52:37.727,Python Pandas phone numbers cleaning by eliminating consecutive repeated characters,"I have a retail dataset that consists of uncleaned mobile phone numbers. 
I have data like this @@ -11930,6 +4595,8 @@ the results of these functions' return directly",1.2,True,1,7736 2021-12-27 09:23:43.403,"I have an existing Django Webapp, how do I make a rest API to integrate the webapp with an android app?","I have an existing django webapp, I have two model classes in my models.py for different functionalities in my app. I have used django allauth for all of the login/logout/social sign ins. Note: I have not used django rest framework at all so far in creating my app. Now, I have to do the same for the android version of my webapp using Java. What exactly do I need to do right now to create the rest APIs and then connect them to the android app? please give some suggestions","yes you have to create new rest API for the android apps. Authentication will be token based for the rest API. storing tokens and retrieving data will be handled by the android app. The stable authentication for Django is Simplejwt",0.0,False,1,7737 +2021-12-28 01:00:06.003,discord.py: too big variable?,"I'm very new to python and programming in general, and I'm looking to make a discord bot that has a lot of hand-written chat lines to randomly pick from and send back to the user. Making a really huge variable full of a list of sentences seems like a bad idea. Is there a way that I can store the chatlines on a different file and have the bot pick from the lines in that file? Or is there anything else that would be better, and how would I do it?","You can store your data in a file, supposedly named response.txt +and retrieve it in the discord bot file as open(""response.txt"").readlines()",0.0,False,2,7738 2021-12-28 01:00:06.003,discord.py: too big variable?,"I'm very new to python and programming in general, and I'm looking to make a discord bot that has a lot of hand-written chat lines to randomly pick from and send back to the user. Making a really huge variable full of a list of sentences seems like a bad idea. 
Is there a way that I can store the chatlines on a different file and have the bot pick from the lines in that file? Or is there anything else that would be better, and how would I do it?","I'll interpret this question as ""how large a variable is too large"", to which the answer is pretty simple. A variable is too large when it becomes a problem. So, how can a variable become a problem? The big one is that the machien could possibly run out of memory, and an OOM killer (out-of-memory killer) or similiar will stop your program. How would you know if your variable is causing these issues? Pretty simple, your program crashes. If the variable is static (with a size fully known at compile-time or prior to interpretation), you can calculate how much RAM it will take. (This is a bit finnicky with Python, so it might be easier to load it up at runtime and figure it out with a profiler.) If it's more than ~500 megabytes, you should be concerned. Over a gigabyte, and you'll probably want to reconsider your approach[^0]. So, what do you do then? As suggested by @FishballNooodles, you can store your data line-by-line in a file and read the lines to an array. Unfortunately, the code they've provided still reads the entire thing into memory. If you use the code they're providing, you've got a few options, non-exhaustively listed below. @@ -11942,8 +4609,6 @@ Use an actual database. It's usually better not to reinvent the wheel. If you're [^0]: These numbers are actually just random. If you control the server environment on which you run the code, then you can probably come up with some more precise signposts.",1.2,True,2,7738 -2021-12-28 01:00:06.003,discord.py: too big variable?,"I'm very new to python and programming in general, and I'm looking to make a discord bot that has a lot of hand-written chat lines to randomly pick from and send back to the user. Making a really huge variable full of a list of sentences seems like a bad idea. 
Is there a way that I can store the chatlines on a different file and have the bot pick from the lines in that file? Or is there anything else that would be better, and how would I do it?","You can store your data in a file, supposedly named response.txt -and retrieve it in the discord bot file as open(""response.txt"").readlines()",0.0,False,2,7738 2021-12-28 02:49:15.400,How to create an virtual working environment?,"I wish to use Django to create a Web app, but first I need to create an virtual environment. Since I have a Windows system, I have used Win+R and cmd to open the teriminal, then the system shows C:\Users\HP>, I tried to create a datalog named learning_log and use terminal to switch to this datalog, so I typed in learning_log$ python -m venv 11_env, but the system shows 'learning_log$' is not recognized as an internal or external command, operable program or batch file. @@ -11986,12 +4651,16 @@ However with Brownie, I'm especially confused because the brownie docs say: pipx installs Brownie into a virtual environment and makes it available directly from the commandline. Once installed, you will never have to activate a virtual environment prior to using Brownie. I don't want to mess with the virtual env that brownie uses. -Anyways, my code runs fine and the command line tells me that brownie is installed.It's just that this warning is really annoying me. Can anyone tell me how to clear it up? Thanks!","It's happening because we install python with pipx instead of pip. pylance looks in the location our pip files are generally stored, and doesn't see brownie since we installed with pipx (which installed to it's on isolated virtual environment). So you have a few options: +Anyways, my code runs fine and the command line tells me that brownie is installed.It's just that this warning is really annoying me. Can anyone tell me how to clear it up? 
Thanks!","open command palette cmd+shift+P (on mac)
So you have a few options: +Ignore it +Install brownie with pip in a virtual environment (not recommended) -this works for me.",0.5916962662253621,False,3,7746 +If there is another suggestion, happy to hear it",0.4961739557460144,False,3,7746 2021-12-31 17:08:59.943,"Import ""brownie"" could not be resolved in Pylance","Error is: Import ""brownie"" could not be resolvedPylance I know there are other SO posts that refer to this, but it seems most of them are talking about booting up a new env and installing x package into that virtual env. However with Brownie, I'm especially confused because the brownie docs say: @@ -12183,10 +4848,16 @@ Scale the creation of the Docker images (I don't know how to do it) Is it a good architecture? How to scale this kind of process? -Thank you :)","It might be very difficult to discuss architecture and design questions, as they usually are heavy dependent on the context, scope, functional and non functional requirements, cost, available skills and knowledge and so on... -Personally I would prefer to stay with entirely server-less approach if possible. -For example, use a Cloud Scheduler (server less cron jobs), which sends messages to a Pub/Sub topic, on the other side of which there is a Cloud Function (or something else), which is triggered by the message. -Should it be a Cloud Function, or something else, what and how should it do - depends on you case.",0.3869120172231254,False,2,7770 +Thank you :)","As I understand, you will have a lot of simultaneous call on a custom python code trigger by an orchestrator ($Universe) and you want it on GCP platform. +Like @al-dann, I would go to serverless approach in order to reduce the cost. +As I also understand, pub sub seems to be not necessary, you will could easily trigger the function from any HTTP call and will avoid Pub Sub. 
+PubSub is necessary only to have some guarantee (at least once processing), but you can have the same behaviour if the $Universe validate the http request for every call (look at http response code & body and retry if not match the expected result). +If you want to have exactly once processing, you will need more tooling, you are close to event streaming (that could be a good use case as I also understand). In that case in a full GCP, I will go to pub / sub & Dataflow that can guarantee exactly once, or Kafka & Kafka Streams or Flink. +If at least once processing is fine for you, I will go http version that will be simple to maintain I think. You will have 3 serverless options for that case : + +App engine standard: scale to 0, pay for the cpu usage, can be more affordable than below function if the request is constrain to short period (few hours per day since the same hardware will process many request) +Cloud Function: you will pay per request(+ cpu, memory, network, ...) and don't have to think anything else than code but the code executed is constrain on a proprietary solution. +Cloud run: my prefered one since it's the same pricing than cloud function but you gain the portability, the application is a simple docker image that you can move easily (to kubernetes, compute engine, ...) and change the execution engine depending on cost (if the load change between the study and real world).",0.3869120172231254,False,2,7770 2022-01-23 09:57:03.087,Run & scale simple python scripts on Google Cloud Platform,"I have a simple python script that I would like to run thousands of it's instances on GCP (at the same time). This script is triggered by the $Universe scheduler, something like ""python main.py --date '2022_01'"". What architecture and technology I have to use to achieve this. PS: I cannot drop $Universe but I'm not against suggestions to use another technologies. 
@@ -12201,16 +4872,10 @@ Scale the creation of the Docker images (I don't know how to do it) Is it a good architecture? How to scale this kind of process? -Thank you :)","As I understand, you will have a lot of simultaneous call on a custom python code trigger by an orchestrator ($Universe) and you want it on GCP platform. -Like @al-dann, I would go to serverless approach in order to reduce the cost. -As I also understand, pub sub seems to be not necessary, you will could easily trigger the function from any HTTP call and will avoid Pub Sub. -PubSub is necessary only to have some guarantee (at least once processing), but you can have the same behaviour if the $Universe validate the http request for every call (look at http response code & body and retry if not match the expected result). -If you want to have exactly once processing, you will need more tooling, you are close to event streaming (that could be a good use case as I also understand). In that case in a full GCP, I will go to pub / sub & Dataflow that can guarantee exactly once, or Kafka & Kafka Streams or Flink. -If at least once processing is fine for you, I will go http version that will be simple to maintain I think. You will have 3 serverless options for that case : - -App engine standard: scale to 0, pay for the cpu usage, can be more affordable than below function if the request is constrain to short period (few hours per day since the same hardware will process many request) -Cloud Function: you will pay per request(+ cpu, memory, network, ...) and don't have to think anything else than code but the code executed is constrain on a proprietary solution. -Cloud run: my prefered one since it's the same pricing than cloud function but you gain the portability, the application is a simple docker image that you can move easily (to kubernetes, compute engine, ...) 
and change the execution engine depending on cost (if the load changes between the study and the real world).",0.3869120172231254,False,2,7770
@@ -12328,13 +4993,13 @@ I have successfully used python-can in the past to talk to other can devices and Hardware connection is fine too, because I can receive non-VCU messages from the vehicle. I can also receive VCU messages after I restart the canbus. What could be causing the bus to freeze? And is there a way to prevent it? (By setting some config in the socket-can layer itself?) Please note that restarting the bus will not fix the problem as the vehicle cannot recover once it goes into fault without a restart. -Any help will be appreciated!","Ok, it turns out it was a hardware problem. The length of CAN cables was a bit too much. The bus receives a lot of data transmission when the vehicle is turned on and the CAN cable was flooded with data. I still don't know the mechanics of the fault but decreasing the cable length made it all work.",0.0,False,2,7785 +Any help will be appreciated!","The cable length could be the reason, but take care about the bus topology and especially where the CAN terminations are located.",0.0,False,2,7785 2022-02-07 22:21:06.180,Canbus freezes - how to ignore error frames?,"I am trying to communicate with a vehicle control unit (VCU) over can. I have figured out the commands (index, data and frequency) and can verify the functionality through PCanView on Windows. Now I am using Nvidia Xavier system with python-can library to send the same commands, and I can verify the commands with candump. However when I power the vehicle engine on while sending these commands, the canbus freezes (this is when the VCU starts expecting the can commands I am sending, it goes into fault state if it doesn't receive the data it expects) I have successfully used python-can in the past to talk to other can devices and I am confident about the correctness of the code itself. Hardware connection is fine too, because I can receive non-VCU messages from the vehicle. I can also receive VCU messages after I restart the canbus. 
What could be causing the bus to freeze? And is there a way to prevent it? (By setting some config in the socket-can layer itself?) Please note that restarting the bus will not fix the problem as the vehicle cannot recover once it goes into fault without a restart. -Any help will be appreciated!","The cable length could be the reason, but take care about the bus topology and especially where the CAN terminations are located.",0.0,False,2,7785 +Any help will be appreciated!","Ok, it turns out it was a hardware problem. The length of CAN cables was a bit too much. The bus receives a lot of data transmission when the vehicle is turned on and the CAN cable was flooded with data. I still don't know the mechanics of the fault but decreasing the cable length made it all work.",0.0,False,2,7785 2022-02-08 00:55:54.980,How to Measure Similarity or Difference of Meaning Between Words?,"Say you have two random words ('yellow' and 'ambient' or 'goose' and 'kettle'). What tech could be used to rate how similar or different they are in meaning as informed by popular usage? For example, from 0 to 1 where antonyms are 0 and synonyms are 1, 'yellow' and 'ambient' might be 0.65 similar. Note: I'm not talking about how close the two strings are to each other, but rather an approximation of how similar their meanings are.","I do not really understand what you exactly mean with similarity especially if you want to talk about meaning. You would need a dataset to denote meaning unto words. A popular example of this would be sentiment analysis. If you got a lot of textual data, say tweets from twitter, you might want to know if the data is mostly positive or negative. To do this you would find a dataset of similar nature who has labelled the data already into categories. Then you can use this data to classify the texts into categories (e.g with a Naive Bayes classifier). In this way you can denote meaning on texts computationally. 
This would allow general evaluations but also evaluations on an input to input basis on how well they scored across different categories of meaning. @@ -12360,14 +5025,14 @@ I want to know that how can I run this command so that it runs on whole django p When I start a jupyter notebook and import turtle everything works just fine. When I start vscode from conda and use the same python environment, I get an import error from vscode saying: importerror: cannot import name Turtle. Why is this happening and how could I start turtle in vscode? -Thank you!",Use pip install PythonTurtle in the vs code terminal,0.0,False,2,7791 +Thank you!","Solved! Shayan solved the problem in the comments... I was so stupid to name my file turtle.py... no comment +I renamed the file and works really great! +Thank you Shayan!",0.0,False,2,7791 2022-02-10 16:21:01.733,Installed turtle package not recognized by vscode,"I have an issue with working with the turtle package in python. I have Anaconda installed on my computer. Within Anaconda I have installed the turtle package with pip command (there is no conda install option for turtle as to my knowledge). When I start a jupyter notebook and import turtle everything works just fine. When I start vscode from conda and use the same python environment, I get an import error from vscode saying: importerror: cannot import name Turtle. Why is this happening and how could I start turtle in vscode? -Thank you!","Solved! Shayan solved the problem in the comments... I was so stupid to name my file turtle.py... no comment -I renamed the file and works really great! -Thank you Shayan!",0.0,False,2,7791 +Thank you!",Use pip install PythonTurtle in the vs code terminal,0.0,False,2,7791 2022-02-11 04:14:23.947,AzureML: TabularDataset.to_pandas_dataframe() hangs when parquet file is empty,"I have created a Tabular Dataset using Azure ML python API. 
Data under question is a bunch of parquet files (~10K parquet files each of size of 330 KB) residing in Azure Data Lake Gen 2 spread across multiple partitions. When I try to load the dataset using the API TabularDataset.to_pandas_dataframe(), it continues forever (hangs) if there are empty parquet files included in the Dataset. If the tabular dataset doesn't include those empty parquet files, TabularDataset.to_pandas_dataframe() completes within a few minutes. By empty parquet file, I mean that if I read the individual parquet file using pandas (pd.read_parquet()), it results in an empty DF (df.empty == True). I discovered the root cause while working on another issue mentioned [here][1].
@@ -12376,9 +5041,10 @@ Update The issue has been fixed in the following version: azureml-dataprep : 3.0.1
-azureml-core : 1.40.0","Thanks for reporting it.
-This is a bug in handling of the parquet files with columns but empty row set. This has been fixed already and will be included in next release.
-I could not repro the hang on multiple files, though, so if you could provide more info on that would be nice.",1.2,True,2,7792
+azureml-core : 1.40.0","You can use the on_error='null' parameter to handle the null values.
+Your statement will look like this:
+TabularDataset.to_pandas_dataframe(on_error='null', out_of_range_datetime='null')
+Alternatively, you can check the size of the file before passing it to the to_pandas_dataframe method. If the filesize is 0, either write some sample data into it using the python open keyword or ignore the file, based on your requirement.",0.0,False,2,7792
2022-02-11 04:14:23.947,AzureML: TabularDataset.to_pandas_dataframe() hangs when parquet file is empty,"I have created a Tabular Dataset using Azure ML python API. Data under question is a bunch of parquet files (~10K parquet files each of size of 330 KB) residing in Azure Data Lake Gen 2 spread across multiple partitions.
When I try to load the dataset using the API TabularDataset.to_pandas_dataframe(), it continues forever (hangs) if there are empty parquet files included in the Dataset. If the tabular dataset doesn't include those empty parquet files, TabularDataset.to_pandas_dataframe() completes within a few minutes. By empty parquet file, I mean that if I read the individual parquet file using pandas (pd.read_parquet()), it results in an empty DF (df.empty == True). I discovered the root cause while working on another issue mentioned [here][1].
@@ -12387,10 +5053,9 @@ Update The issue has been fixed in the following version: azureml-dataprep : 3.0.1
-azureml-core : 1.40.0","You can use the on_error='null' parameter to handle the null values.
-Your statement will look like this:
-TabularDataset.to_pandas_dataframe(on_error='null', out_of_range_datetime='null')
-Alternatively, you can check the size of the file before passing it to to_pandas_dataframe method. If the filesize is 0, either write some sample data into it using python open keyword or ignore the file, based on your requirement.",0.0,False,2,7792
+azureml-core : 1.40.0","Thanks for reporting it.
+This is a bug in the handling of parquet files with columns but an empty row set. This has already been fixed and will be included in the next release.
+I could not repro the hang on multiple files, though, so if you could provide more info on that, it would be nice.",1.2,True,2,7792
2022-02-11 14:40:25.060,"How to train AI to create familiar sounding, randomly generated names?","I am very new to python and I'd like to ask for advice on how and where to start and what to learn. I've got this fantasy name generator (joining randomly picked letters), which every now and then creates a name which is acceptable. What I'd like to do though is to train an AI to generate names which aren't, let's say, just consonants, ultimately being able to generate human, elvish, dwarfish etc names. I'd appreciate any advice in this matter.
@@ -12454,10 +5119,10 @@ I have been playing with a lot of different params but still lost on how to fix Because ideally kafka does rebalance if it does not receive the acknowledgement of the message read before session timeout happens. If it is set as false then either set it to true or make sure to commit to kafka once you are done processing the message",0.0,False,1,7804
2022-02-18 20:11:22.670,How to use a python script in Unity?,"I'm trying to run a face detection model in Unity. It gets input from the webcam, then spits out a face. But trying to make this work with C# has been an absolute nightmare. And despite all my suffering, I still haven't been able to make it work! If I could use python, I'd be able to get it done easily. So, obviously, I want to find a way to get a python script working in Unity. But IronPython is the only thing I've been able to find, and it's outdated.
-I need either knowledge of how to make IronPython work in spite of being outdated, or some other method. Please.","Unity not supported python, But you Can write Python Code and run it by Socket programing, Create Server with python and send data,in C# Connect to server and use data sended with python.",0.0,False,2,7805
+I need either knowledge of how to make IronPython work in spite of being outdated, or some other method. Please.",You can just run your python script on playtime and let it create some data in files. Then read the files using C# and display data in Unity.,-0.1352210990936997,False,2,7805
2022-02-18 20:11:22.670,How to use a python script in Unity?,"I'm trying to run a face detection model in Unity. It gets input from the webcam, then spits out a face. But trying to make this work with C# has been an absolute nightmare. And despite all my suffering, I still haven't been able to make it work! If I could use python, I'd be able to get it done easily. So, obviously, I want to find a way to get a python script working in Unity.
But IronPython is the only thing I've been able to find, and it's outdated.
-I need either knowledge of how to make IronPython work in spite of being outdated, or some other method. Please.",You can just run your python script on playtime and let it create some data in files. Then read the files using C# and display data in Unity.,-0.1352210990936997,False,2,7805
+I need either knowledge of how to make IronPython work in spite of being outdated, or some other method. Please.","Unity does not support Python, but you can write Python code and run it via socket programming: create a server with Python and send the data, then connect to the server in C# and use the data sent from Python.",0.0,False,2,7805
2022-02-18 20:44:44.673,Converting a String to a Usable Date,"I am in the process of converting a JSON into a dataframe. One of the items in the JSON is a date in the form of a string. I am calling the item with the following entry: markets_json['events'][i]['periods']['num_0']['cutoff'] i is a position number in a list that is being generated from a `for` loop. The other fields are dictionary keys. It returns a string that looks something like this:
@@ -13399,16 +6064,16 @@ The bot suggest you the text to read based on you your vocabulary. (when you mar For example you are user A, you know 500 words, and you want to get the text from the bot database where you know at least 75% of words or at least 90% of words. Right now I have the database of user words and texts. How should I approach indexing, that would tell me how many words a user knows from each text? Obviously, I can compare the list of user words with the list of words from each text at every bot start. But I'm not sure if it is the most efficient way. Each time indexing 100+ texts feels like a strange idea.
-Could you please suggest me where can I read about similar problems? Or how can i search it? I don't even know how to google it...","You don't need to process every text at every bot start.
-Process every text once.
-Then write the results of all the processing to a file. When the bot starts, recover the data by reading that data file.",0.0,False,2,7906
+Could you please suggest where I can read about similar problems? Or how can I search for it? I don't even know how to google it...","You can use a database like Elasticsearch that allows you to do full-text search. When you query with the user's words, it will also give you a confidence value, with which you can decide which text is a better match.",0.0,False,2,7906
2022-05-09 14:14:56.783,How to approach a search\comparison problem for the large database?,"I'm just learning python and don't know many things. At the moment I'm building a telegram bot that can help you to find appropriate texts to read in a foreign language. The core function I want to implement is: the bot suggests you a text to read based on your vocabulary. (when you mark a text as ""read"", all the words are added to your dictionary. That is how the bot collects info.) For example, you are user A, you know 500 words, and you want to get the text from the bot database where you know at least 75% of words or at least 90% of words. Right now I have the database of user words and texts. How should I approach indexing, that would tell me how many words a user knows from each text? Obviously, I can compare the list of user words with the list of words from each text at every bot start. But I'm not sure if it is the most efficient way. Each time indexing 100+ texts feels like a strange idea.
-Could you please suggest me where can I read about similar problems? Or how can i search it?
I don't even know how to google it...","You don't need to process every text at every bot start. +Process every text once. +Then write the results of all the processing to a file. When the bot starts, recover the data by reading that data file.",0.0,False,2,7906 2022-05-10 17:03:37.690,how do I find and get text from a text file in python?,"I have a text file with lines like this one: Cubo: 100% (left_x: 744 top_y: 395 width: 167 height: 181) I would like to assign the appropiate int for each one of the variables, something like: left_x = 744, top_y = 395, width = 167, height = 181 but without having to do it manually.","You can open a file by using the open method. For example, you could do @@ -14163,12 +6828,12 @@ is there a similar module to get the same result? I have seen some people make suggestions to clear console but they all seem to depend on which OS I am running, since my desktop is windows and laptop is mac I want something that can work on both.","Did you try ""import os"" at the beginning of code and using ""os.system('clear')""? This should work on any OS or code editor/IDE.",-0.3869120172231254,False,1,7988 2022-07-24 01:22:20.000,poetry show command - what do the red listed packages mean?,"When I run poetry show - most of my packages are blue but a few are red? What do these two colors mean? -I think red means the package is not @latest ?","Yes, red indicates that you have an outdated package and blue is up to date",-0.2012947653214861,False,2,7989 -2022-07-24 01:22:20.000,poetry show command - what do the red listed packages mean?,"When I run poetry show - most of my packages are blue but a few are red? What do these two colors mean? 
I think red means the package is not @latest ?","Black: Not required package Red: Not installed / It needs an immediate semver-compliant upgrade Yellow: It needs an upgrade but has potential BC breaks so is not urgent Green: Already latest",0.3869120172231254,False,2,7989 +2022-07-24 01:22:20.000,poetry show command - what do the red listed packages mean?,"When I run poetry show - most of my packages are blue but a few are red? What do these two colors mean? +I think red means the package is not @latest ?","Yes, red indicates that you have an outdated package and blue is up to date",-0.2012947653214861,False,2,7989 2022-07-25 22:10:53.000,Clicking all buttons that have the same text with selenium,"I was trying to automate the process off adding things to a list by clicking the add button and cant figure out how to get selenium to click on every button that has the text ""add"" on it but not the other buttons. My end goal is to click add on every anime on the page from my anime list and after every click click the submit button, then once a page is finished go to the next then next letter.","findElements in Selenium returns you the list of web elements that match the locator value, unlike findElement, which returns only a single web element. If there are no matching elements within the web page, findElements returns an empty list. After that, you can iterate through your list and do the actions.",0.0,False,1,7990 2022-07-26 02:21:57.000,Checking if a line was already drawn in matplotlib and deleting it,"So, I would like to know if there is a way to delete an line already plotted using matplotlib. But here is the thing: @@ -14199,10 +6864,7 @@ I'm new to StackOverflow and newish to python so forgive me if there's any error I don't have any code yet because I have 0 idea how I would go about doing this. 
The best idea I have would be to randomly assign values to each key, and somehow tell the program where each key is physically located in proximity to each other, but this seems inefficient. I'm not looking for a way to input this into a textbox or on a website somewhere, I only want to generate the random text in a string through a function. -You don't have to write the whole thing but I would greatly appreciate any help or ideas how to do this, thank you.","I would just open a file and spam your keyboard for 10 minutes or so. This will likely generate a huge data set that perfectly matches what you want. -Next to generate random like strings you can select random short(ish) chunks from the file and concatenate them together to achieve the desired spam strings. -That might look something like -"""".join([example_string[i:i+random.randrange(3,5)] for i in [random.randrange(0,n) for _ in range(10)]])",0.0,False,2,7994 +You don't have to write the whole thing but I would greatly appreciate any help or ideas how to do this, thank you.","It sounds like you want some sort of implicit or explicit Markov chain. For each key, you assign a set of probabilities for what the next key is. Start with a random key, and then move to the next key according to the assigned probabilities.",0.0,False,2,7994 2022-07-27 17:08:01.000,"How could I make a function that generates realistic ""keyboard-like"" spam? (In python)","My idea is to create a function that takes an input of a number and outputs a string of that length, which consists of ""keyboard-like"" spam. I know how to generate a completely random string of characters, however, I'm trying to make it look as if it's real ""spam"" from someone typing on a keyboard. For example: @@ -14215,7 +6877,10 @@ I'm new to StackOverflow and newish to python so forgive me if there's any error I don't have any code yet because I have 0 idea how I would go about doing this. 
The best idea I have would be to randomly assign values to each key, and somehow tell the program where each key is physically located in proximity to each other, but this seems inefficient. I'm not looking for a way to input this into a textbox or on a website somewhere, I only want to generate the random text in a string through a function. -You don't have to write the whole thing but I would greatly appreciate any help or ideas how to do this, thank you.","It sounds like you want some sort of implicit or explicit Markov chain. For each key, you assign a set of probabilities for what the next key is. Start with a random key, and then move to the next key according to the assigned probabilities.",0.0,False,2,7994 +You don't have to write the whole thing but I would greatly appreciate any help or ideas how to do this, thank you.","I would just open a file and spam your keyboard for 10 minutes or so. This will likely generate a huge data set that perfectly matches what you want. +Next to generate random like strings you can select random short(ish) chunks from the file and concatenate them together to achieve the desired spam strings. +That might look something like +"""".join([example_string[i:i+random.randrange(3,5)] for i in [random.randrange(0,n) for _ in range(10)]])",0.0,False,2,7994 2022-08-01 04:40:45.000,Anaconda-Navigator.app missing after installation on M1 macOS Monterey,"I installed Anaconda, but it did not include the GUI app, Anaconda-Navigator app in the Applications folder. What do I need to do to get the GUI app? Details: Computer: 2021 14-inch MacBook Pro, M1 Max @@ -14323,15 +6988,15 @@ conda create -n meep -c conda-forge pymeep pymeep-extras spyder",0.0,False,1,800 I was having some troubles with pip, so I reinstalled Python. After the reinstall pip began to work, but Pycharm, my IDE, could no longer find Python. When I reinstalled Python it created a new folder for itself (Python310), but Pycharm kept looking in the old folder (Python39). 
I couldn't figure out how to get Pycharm to look in the new folder. Even deleting and reinstalling it did nothing. So, I renamed Python310 to Python39 and changed the PATH. Now Pycharm can find Python. But pip has developed a new and exciting error. When I try to use it I get the following message: Fatal error in launcher: Unable to create process using '""C:\Users\user\AppData\Local\Programs\Python\Python310\python.exe"" ""C:\Users\user\AppData\Local\Programs\Python\Python39\Scripts\pip.exe"" install numpy': The system cannot find the file specified.
-If I read this correctly pip is still trying to look in Python310. Would you please tell me what I need to do to get pip looking in the right place?","Try to uninstall all of the existing Python versions and install again. Use an application that lets you delete most of the leftover files, to prevent errors when you re-install.",-0.1352210990936997,False,2,8009
2022-08-14 23:50:41.000,pip looking in wrong folder,"My issue requires some backstory. I was having some troubles with pip, so I reinstalled Python. After the reinstall pip began to work, but Pycharm, my IDE, could no longer find Python. When I reinstalled Python it created a new folder for itself (Python310), but Pycharm kept looking in the old folder (Python39). I couldn't figure out how to get Pycharm to look in the new folder. Even deleting and reinstalling it did nothing. So, I renamed Python310 to Python39 and changed the PATH. Now Pycharm can find Python. But pip has developed a new and exciting error.
When I try to use it I get the following message: Fatal error in launcher: Unable to create process using '""C:\Users\user\AppData\Local\Programs\Python\Python310\python.exe"" ""C:\Users\user\AppData\Local\Programs\Python\Python39\Scripts\pip.exe"" install numpy': The system cannot find the file specified.
-If I read this correctly pip is still trying to look in Python310. Would you please tell me what I need to do to get pip looking in the right place?","Option 1:
-Delete and reinstall again. And then when creating a project it should prompt you to pick a basic interpreter; choose python310 or whatever version you're using.
-Option 2:
-Use a different IDE.",-0.1352210990936997,False,2,8009
2022-08-15 14:09:37.000,How do I duplicate the same window in VSCODE,I tried duplicating the same window but it just went away. I made this server.py file that needs to be run twice if we need 2 players. Anyone know how to duplicate windows in vscode?,"You could open up a new terminal window with ctrl + shift + backtick, and run your python server there (probably on a different port).",1.2,True,1,8010
2022-08-15 17:18:29.000,ImportError: No module named mysql.connector when using cron-job,"on my hosting provider, I try to make a cron job running a .py-file. The cron-job starts but I always get this error message ""ImportError: No module named mysql.connector"".
@@ -14882,8 +7547,8 @@ The Bloomberg Terminal's delivery point is localhost:8194. No Bloomberg Terminal So, bottom line, the API library is available and you can develop against it.
Problem is, the first few lines of creating a session object and connecting to the end point will fail unless you have a Bloomberg product. There's no sandbox, sadly. Pricing depends on product, and unfortunately you'll also need to consider your application use-case. As an example, if you're writing a systematic trading application, then the licensing of the Bloomberg (Professional) Terminal will not permit that, however, a B-PIPE will include a licence that will permit that (plus hefty exchange fees if not OTC). Good luck.",0.3869120172231254,False,1,8063 -2022-11-02 16:12:13.000,Using temporary files and folders in Web2py app,"I am relatively new to web development and very new to using Web2py. The application I am currently working on is intended to take in a CSV upload from a user, then generate a PDF file based on the contents of the CSV, then allow the user to download that PDF. As part of this process I need to generate and access several intermediate files that are specific to each individual user (these files would be images, other pdfs, and some text files). I don't need to store these files in a database since they can be deleted after the session ends, but I am not sure the best way or place to store these files and keep them separate based on each session. I thought that maybe the subfolders in the sessions folder would make sense, but I do not know how to dynamically get the path to the correct folder for the current session. Any suggestions pointing me in the right direction are appreciated!","If the information is not confidential in similar circumstances, I directly write the temporary files under /tmp.",0.0,False,2,8064 2022-11-02 16:12:13.000,Using temporary files and folders in Web2py app,"I am relatively new to web development and very new to using Web2py. 
The application I am currently working on is intended to take in a CSV upload from a user, then generate a PDF file based on the contents of the CSV, then allow the user to download that PDF. As part of this process I need to generate and access several intermediate files that are specific to each individual user (these files would be images, other pdfs, and some text files). I don't need to store these files in a database since they can be deleted after the session ends, but I am not sure the best way or place to store these files and keep them separate based on each session. I thought that maybe the subfolders in the sessions folder would make sense, but I do not know how to dynamically get the path to the correct folder for the current session. Any suggestions pointing me in the right direction are appreciated!","I was having this error ""TypeError: expected string or Unicode object, NoneType found"" and I had to store just a link in the session to the uploaded document in the db or maybe the upload folder in your case. I would store it to upload to proceed normally, and then clear out the values and the file if not 'approved'?",0.0,False,2,8064 +2022-11-02 16:12:13.000,Using temporary files and folders in Web2py app,"I am relatively new to web development and very new to using Web2py. The application I am currently working on is intended to take in a CSV upload from a user, then generate a PDF file based on the contents of the CSV, then allow the user to download that PDF. As part of this process I need to generate and access several intermediate files that are specific to each individual user (these files would be images, other pdfs, and some text files). I don't need to store these files in a database since they can be deleted after the session ends, but I am not sure the best way or place to store these files and keep them separate based on each session. 
I thought that maybe the subfolders in the sessions folder would make sense, but I do not know how to dynamically get the path to the correct folder for the current session. Any suggestions pointing me in the right direction are appreciated!","If the information is not confidential in similar circumstances, I directly write the temporary files under /tmp.",0.0,False,2,8064 2022-11-03 16:12:57.000,I launched a django server and now I can't stop the process,"Can anyone please tell me how to stop my server? I did like the first few things you need to do in order to commence a django webpage, url pattern, request, HttpResponce etc. and I ran my server but that rocket is still showing on my screen despite trying to kill, pkill, ctrl+pause, ctrl+C. I'm so done with this... I looked up on The Internet how to stop my django server. Nothing worked. On top of that when I ran it I got a ""ModuleNotFoundError"" but the rocket is still showing when I type in the numbers...","Without a screenshot, we can only guess the issue. @@ -15360,9 +8025,9 @@ There was no installation folder for the python 3.10.8 in the programs folder, i So I uninstalled and reinstalled the newer version (3.10.8) to a custom location/folder. I opened the Python\Scripts folder of the new installation in the command line. I was able to install pandas with pip install pandas",1.2,True,1,8097 2022-12-07 17:20:10.000,How to find a substring in a text file using python?,"By using python I need to know how to find a substring in a text file. 
-I tried using in and not in function in python to find a substring from a text file but i am not clear about it","Finding the index of the string in the text file using readline() In this method, we are using the readline() function, and checking with the find() function, this method returns -1 if the value is not found and if found it returns 0.",0.0,False,2,8098 -2022-12-07 17:20:10.000,How to find a substring in a text file using python?,"By using python I need to know how to find a substring in a text file. I tried using in and not in function in python to find a substring from a text file but i am not clear about it","finding the index of the string in the text file using readline()In this method,we are using the readline()function,and checking with the find()function,this method returns-1 if the values is not found and if found it returns o",0.0,False,2,8098 +2022-12-07 17:20:10.000,How to find a substring in a text file using python?,"By using python I need to know how to find a substring in a text file. +I tried using in and not in function in python to find a substring from a text file but i am not clear about it","Finding the index of the string in the text file using readline() In this method, we are using the readline() function, and checking with the find() function, this method returns -1 if the value is not found and if found it returns 0.",0.0,False,2,8098 2022-12-08 14:26:34.000,Embedded linux start python from crontab with terminal access and subprocess permissions,"I have an embedded linux system that I need to run a python script whenever it boots. The python script needs to have a terminal interface so the user can interact and see outputs. The script also spawns another process to transfer large amounts of data over SPI, this was written in C. 
I've managed to get the script to start on launch and have terminal access by adding @reboot /usr/bin/screen -d -m python3 /scripts/my_script.py @@ -15443,14 +8108,14 @@ Does anyone know how to solve this problem? I was working fine on 2022/12/16, but today it is not working.","Running into a similiar issue. So far, I am able to tell that google.cloud actions will not run if I have shapley files installed. When I delete the shapley files on my computer I am able to run google.cloud methods",0.0,False,1,8110 2022-12-17 12:53:51.000,How to develop a rest api without using serializer in Django Rest Framework?,"I want to create a Student Register and Login Api without using serializer in django Rest Framework. So I want to know how I make CRUD operation for those api using ApiView -Any one please solve this",If you are using DRF then you must need to create the serializers. But you can create the API without using DRF and serialzers.,0.0,False,2,8111 -2022-12-17 12:53:51.000,How to develop a rest api without using serializer in Django Rest Framework?,"I want to create a Student Register and Login Api without using serializer in django Rest Framework. -So I want to know how I make CRUD operation for those api using ApiView Any one please solve this","I don't think you can, or even should but first, you need to understand what a serializer actually does: Serializers allow complex data such as querysets and model instances to be converted to native Python datatypes that can then be easily rendered into JSON, XML, or other content types. Serializers also provide deserialization, allowing parsed data to be converted back into complex types, after first validating the incoming data So don't be scared by serializers. 
Just take your time and learn how to use them using the many tutorials available online.",0.0,False,2,8111
2022-12-18 20:45:31.000,how to map jina yaml configurations on Pycharm?,"I am using Jina ai for my neural search project, and setting up the jina in my pycharm. What will the yaml configurations and json schema be? I am trying to find setup resources, but am unable to get a proper setup guide for pycharm.","Click menu Preferences -> JSON Schema mappings;