diff --git "a/validation.csv" "b/validation.csv" --- "a/validation.csv" +++ "b/validation.csv" @@ -1,7532 +1,3740 @@ Q_CreationDate,Title,Question,Answer,Score,Is_accepted,N_answers,Q_Id -2018-03-26 05:57:47.947,Text Categorization Test NLTK python,"I have using nltk packages and train a model using Naive Bayes. I have save the model to a file using pickle package. Now i wonder how can i use this model to test like a random text not in the dataset and the model will tell if the sentence belong to which categorize? -Like my idea is i have a sentence : "" Ronaldo have scored 2 goals against Egypt"" And pass it to the model file and return categorize ""sport"".","Just saving the model will not help. You should also save your VectorModel (like tfidfvectorizer or countvectorizer what ever you have used for fitting the train data). You can save those the same way using pickle. Also save all those models you used for pre-processing the train data like normalization/scaling models, etc. For the test data repeat the same steps by loading the pickle models that you saved and transform the test data in train data format that you used for model building and then you will be able to classify.",1.2,True,1,5420 -2018-03-26 12:36:28.430,"How does Python internally distinguish ""from package import module"" between ""from module import function""","If I understand correctly, the python syntax from ... import ... can be used in two ways - -from package-name import module-name -from module-name import function-name - -I would like to know a bit of how Python internally treats the two different forms. Imagine, for example, that the interpreter gets ""from A import B"", does the interpreter actually try to determine whether A is a package-name/ module-name, or does it internally treat packages and modules as the same class of objects (something like Linux treats files and directories very similarly)?","First of all, a module is a python file that contains classes and functions. when you say From A Import B python searches for A(a module) in the standard python library and then imports B(the function or class) which is the module if it finds A. If it doesn't it goes out and starts searching in the directory were packages are stored and searches for the package name( A ) and then if it finds it, it imports the Module name(B). If it fails in the past 2 processes it returns an error. -Hope this helps.",-0.3869120172231254,False,1,5421 -2018-03-26 14:38:24.260,What is a good crawling speed rate?,I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed.,"I'm no expert but I would say that your speed is pretty slow. I just went to google, typed in the word ""hats"", pressed enter and: about 650,000,000 results (0.63 seconds). That's gonna be tough to compete with. I'd say that there's plenty of room to improve.",-0.1352210990936997,False,2,5422 -2018-03-26 14:38:24.260,What is a good crawling speed rate?,I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. 
I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed.,It really depends but you can always check your crawling benchmarks for your hardware by typing scrapy bench on your command line,0.0,False,2,5422 -2018-03-27 05:06:56.187,Verify mountpoint in the remote server,"os.path.ismount() will verify whether the given path is mounted on the local linux machine. Now I want to verify whether the path is mounted on the remote machine. Could you please help me how to achieve this. -For example: my dev machine is : xx:xx:xxx -I want to verify whether the '/path' is mounted on yy:yy:yyy. -How can achieve this by using os.path.ismount() function","If you have access to both machines, then one way could be to leverage python's sockets. The client on the local machine would send a request to the server on the remote machine, then the server would do os.path.ismount('/path') and send back the return value to the client.",0.0,False,1,5423 -2018-03-27 22:23:28.003,How to parse a c/c++ header with llvmlite in python,"I'd like to parse a c and/or c++ header file in python using llvmlite. Is this possible? And if so, how do I create an IR representation of the header's contents?","llvmlite is a python binding for LLVM, which is independent from C or C++ or any other language. To parse C or C++, one option is to use the python binding for libclang.",0.0,False,1,5424 -2018-03-28 05:18:16.253,Are framework and libraries the more important bit of coding?,"Coding is entirely new to me. -Right now, I am teaching myself Python. As of now, I am only going over algorithms. I watched a few crash courses online about the language. Based on that, I don't feel like I am able to code any sort of website or software which leads me wonder if the libraries and frameworks of any programming language are the most important bit? -Should I spend more time teaching myself how to code with frameworks and libraries? -Thanks","First of all, you should try to be comfortable with every Python mechanisms (classes, recursion, functions... everything you usually find in any book or complete tutorial). It could be useful for any problem you want to solve. -Then, you should start your own project using the suitable libraries and frameworks. You must set a clear goal, do you want to build a website or a software ? You won't use the same libraries/framework for any purpose. Some of them are really often used so you could start by reading their documentation. -Anyhow, to answer your question, framework and libraries are not the most important bit of coding. They are just your tools, whereas the way you think to solve problems and build your algorithms is your art. -The most important thing to be a painter is not knowing how to use a brush (even if, of course, it's really useful)",1.2,True,1,5425 -2018-03-29 07:20:01.590,Keras rename model and layers,"1) I try to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script. -Class Model seem to have the property model.name, but when changing it I get ""AttributeError: can't set attribute"". -What is the Problem here? -2) Additionally, I am using sequential API and I want to give a name to layers, which seems to be possibile with Functional API, but I found no solution for sequential API. Does anonye know how to do it for sequential API? -UPDATE TO 2): Naming the layers works, although it seems to be not documented. Just add the argument name, e.g. model.add(Dense(...,...,name=""hiddenLayer1""). 
Watch out, Layers with same name share weights!","for 1), I think you may build another model with right name and same structure with the exist one. then set weights from layers of the exist model to layers of the new model.",-0.1794418372930847,False,2,5426 -2018-03-29 07:20:01.590,Keras rename model and layers,"1) I try to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script. -Class Model seem to have the property model.name, but when changing it I get ""AttributeError: can't set attribute"". -What is the Problem here? -2) Additionally, I am using sequential API and I want to give a name to layers, which seems to be possibile with Functional API, but I found no solution for sequential API. Does anonye know how to do it for sequential API? -UPDATE TO 2): Naming the layers works, although it seems to be not documented. Just add the argument name, e.g. model.add(Dense(...,...,name=""hiddenLayer1""). Watch out, Layers with same name share weights!","To rename a keras model in TF2.2.0: -model._name = ""newname"" -I have no idea if this is a bad idea - they don't seem to want you to do it, but it does work. To confirm, call model.summary() and you should see the new name.",0.4247838355242418,False,2,5426 -2018-03-29 16:33:58.400,Heroku Python import local functions,"I'm developing a chatbot using heroku and python. I have a file fetchWelcome.py in which I have written a function. I need to import the function from fetchWelcome into my main file. -I wrote ""from fetchWelcome import fetchWelcome"" in main file. But because we need to mention all the dependencies in the requirement file, it shows error. I don't know how to mention user defined requirement. -How can I import the function from another file into the main file ? Both the files ( main.py and fetchWelcome.py ) are in the same folder.","If we need to import function from fileName into main.py, write ""from .fileName import functionName"". Thus we don't need to write any dependency in requirement file.",0.0,False,1,5427 -2018-03-29 17:21:53.613,How to choose RandomState in train_test_split?,"I understand how random state is used to randomly split data into training and test set. As Expected, my algorithm gives different accuracy each time I change it. Now I have to submit a report in my university and I am unable to understand the final accuracy to mention there. Should I choose the maximum accuracy I get? Or should I run it with different RandomStates and then take its average? Or something else?","For me personally, I set random_state to a specific number (usually 42) so if I see variation in my programs accuracy I know it was not caused by how the data was split. -However, this can lead to my network over fitting on that specific split. I.E. I tune my network so it works well with that split, but not necessarily on a different split. Because of this, I think it's best to use a random seed when you submit your code so the reviewer knows you haven't over fit to that particular state. -To do this with sklearn.train_test_split you can simply not provide a random_state and it will pick one randomly using np.random.",0.2012947653214861,False,1,5428 -2018-03-30 10:14:52.273,"Python application freezes, only CTRL-C helps","I have a Python app that uses websockets and gevent. It's quite a big application in my personal experience. -I've encountered a problem with it: when I run it on Windows (with 'pipenv run python myapp'), it can (suddenly but very rarily) freeze, and stop accepting messages. 
If I then enter CTRL+C in cmd, it starts reacting to all the messages, that were issued when it was hanging. -I understand, that it might block somewhere, but I don't know how to debug theses types of errors, because I don't see anything in the code, that could do it. And it happens very rarily on completely different stages of the application's runtime. -What is the best way to debug it? And to actually see what goes behind the scenes? My logs show no indication of a problem. -Could it be an error with cmd and not my app?","Your answer may be as simple as adding timeouts to some of your spawns or gevent calls. Gevent is still single threaded, and so if an IO bound resource hangs, it can't context switch until it's been received. Setting a timeout might help bypass these issues and move your app forward?",0.0,False,1,5429 -2018-03-30 14:45:10.337,How to compare date (yyyy-mm-dd) with year-Quarter (yyyyQQ) in python,"I am writing a sql query using pandas within python. In the where clause I need to compare a date column (say review date 2016-10-21) with this value '2016Q4'. In other words if the review dates fall in or after Q4 in 2016 then they will be selected. Now how do I convert the review date to something comparable to 'yyyyQ4' format. Is there any python function for that ? If not, how so I go about writing one for this purpose ?","Once you are able to get the month out into a variable: mon -you can use the following code to get the quarter information: -for mon in range(1, 13): - print (mon-1)//3 + 1, -print -which would return: - -for months 1 - 3 : 1 -for months 4 - 6 : 2 -for months 7 - 9 : 3 -for months 10 - 12 : 4",1.2,True,1,5430 -2018-03-31 04:10:29.847,Measurement for intersection of 2 irregular shaped 3d object,"I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. -I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar(not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate. -Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively I can estimate volume with a sample based method that sample points inside one object and test the percentage of points that exist in another object. But I don't know how computational expensive it is to sample points inside a complex 3d shape as well as to test if a point is enclosed by such a shape. -I will really appreciate any advices, codes, or equations on this matter. Also if you can suggest any libraries (preferably python library) that accept .obj, .ply...etc files and perform 3D geometry computation that will be great! I will also post here if I find out a good method. -Update: -I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric points sampling within one mesh and test points containment within another mesh. 
I found surface point sampling and containment testing(sort of surface intersection) and the grid approach to be the fastest.","A sample-based approach is what I'd try first. Generate a bunch of points in the unioned bounding AABB, and divide the number of points in A and B by the number of points in A or B. (You can adapt this measure to your use case -- it doesn't work very well when A and B have very different volumes.) To check whether a given point is in a given volume, use a crossing number test, which Google. There are acceleration structures that can help with this test, but my guess is that the number of samples that'll give you reasonable accuracy is lower than the number of samples necessary to benefit overall from building the acceleration structure. -As a variant of this, you can check line intersection instead of point intersection: Generate a random (axis-aligned, for efficiency) line, and measure how much of it is contained in A, in B, and in both A and B. This requires more bookkeeping than point-in-polyhedron, but will give you better per-sample information and thus reduce the number of times you end up iterating through all the faces.",1.2,True,2,5431 -2018-03-31 04:10:29.847,Measurement for intersection of 2 irregular shaped 3d object,"I am trying to implement an objective function that minimize the overlap of 2 irregular shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces and are not convex. -I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar(not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate. -Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively I can estimate volume with a sample based method that sample points inside one object and test the percentage of points that exist in another object. But I don't know how computational expensive it is to sample points inside a complex 3d shape as well as to test if a point is enclosed by such a shape. -I will really appreciate any advices, codes, or equations on this matter. Also if you can suggest any libraries (preferably python library) that accept .obj, .ply...etc files and perform 3D geometry computation that will be great! I will also post here if I find out a good method. -Update: -I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric points sampling within one mesh and test points containment within another mesh. I found surface point sampling and containment testing(sort of surface intersection) and the grid approach to be the fastest.","By straight voxelization: -If the faces are of similar size (if needed triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing size larger than the longest edge and store one bit per voxel. -Then for every vertex of the mesh, set the bit of the cell it is included in (this just takes a truncation of the coordinates). 
By doing this, you will obtain the boundary of the object as a connected surface. You will obtain an estimate of the volume by means of a 3D flood filling algorithm, either from an inside or an outside pixel. (Outside will be easier but be sure to leave a one voxel margin around the object.) -Estimating the volumes of both objects as well as intersection or union is straightforward with this machinery. The cost will depend on the number of faces and the number of voxels.",0.0,False,2,5431 -2018-03-31 08:19:33.750,How executed code block Science mode in Pycharm,"Like Spyder, you can execute code block. how can i do in Pycharm in science mode. in spyder you use -# In[] -How can i do this in pycharm","you can just import numpy to actvate science mode. -import numpy as np",0.2012947653214861,False,2,5432 -2018-03-31 08:19:33.750,How executed code block Science mode in Pycharm,"Like Spyder, you can execute code block. how can i do in Pycharm in science mode. in spyder you use -# In[] -How can i do this in pycharm","pycharm use code cell. you can do with this -'#%% '",1.2,True,2,5432 -2018-03-31 10:41:21.540,Building WSN topology integrated with SDN controller (mininet-wifi),"In mininet-wifi examples, I found a sample (6LowPAN.py) that creates a simple topology contains 3 nodes. -Now, I intend to create another topology as follows: - -1- Two groups of sensor nodes such that each group connects to a 'Sink - node' -2- Connect each 'Sink node' to an 'ovSwitch' -3- Connect the two switches to a 'Controller' - -Is that doable using mininet-wifi? Any tips how to do it?? -Many thanks in advance :)","Yes, you can do this with 6LowPAN.py. You then add switches and controller into the topology with their links.",0.3869120172231254,False,1,5433 -2018-04-01 01:28:54.353,Neural Network - Input Normalization,"It is a common practice to normalize input values (to a neural network) to speed up the learning process, especially if features have very large scales. -In its theory, normalization is easy to understand. But I wonder how this is done if the training data set is very large, say for 1 million training examples..? If # features per training example is large as well (say, 100 features per training example), 2 problems pop up all of a sudden: -- It will take some time to normalize all training samples -- Normalized training examples need to be saved somewhere, so that we need to double the necessary disk space (especially if we do not want to overwrite the original data). -How is input normalization solved in practice, especially if the data set is very large? -One option maybe is to normalize inputs dynamically in the memory per mini batch while training.. But normalization results will then be changing from one mini batch to another. Would it be tolerable then? -There is maybe someone in this platform having hands on experience on this question. I would really appreciate if you could share your experiences. -Thank you in advance.","A large number of features makes it easier to parallelize the normalization of the dataset. This is not really an issue. Normalization on large datasets would be easily GPU accelerated, and it would be quite fast. Even for large datasets like you are describing. One of my frameworks that I have written can normalize the entire MNIST dataset in under 10 seconds on a 4-core 4-thread CPU. A GPU could easily do it in under 2 seconds. Computation is not the problem. 
While for smaller datasets, you can hold the entire normalized dataset in memory, for larger datasets, like you mentioned, you will need to swap out to disk if you normalize the entire dataset. However, if you are doing reasonably large batch sizes, about 128 or higher, your minimums and maximums will not fluctuate that much, depending upon the dataset. This allows you to normalize the mini-batch right before you train the network on it, but again this depends upon the network. I would recommend experimenting based on your datasets, and choosing the best method.",1.2,True,1,5434 -2018-04-01 15:08:50.007,Finding the eyeD3 executable,"I just installed the abcde CD utility but it's complaining that it can't find eyeD3, the Python ID3 program. This appears to be a well-known and unresolved deficiency in the abcde dependencies, and I'm not a Python programmer, so I'm clueless. -I have the Python 2.7.12 came with Mint 18, and something called python3 (3.5.2). If I try to install eyeD3 with pip (presumably acting against 2.7.12), it says it's already installed (in /usr/lib/python2.7/dist-packages/eyeD3). I don't know how to force pip to install under python3. -If I do a find / -name eyeD3, the only other thing it turns up is /usr/share/pyshared/eyeD3. But both of those are only directories, and both just contain Python libraries, not executables. -There isn't any other file called eyeD3 anywhere on disk. -Does anyone know what it's supposed to be called, where it's supposed to live, and how I can install it? -P","I don't know how to force pip to install under python3. - -python3 -m pip install eyeD3 will install it for Python3.",0.2012947653214861,False,2,5435 -2018-04-01 15:08:50.007,Finding the eyeD3 executable,"I just installed the abcde CD utility but it's complaining that it can't find eyeD3, the Python ID3 program. This appears to be a well-known and unresolved deficiency in the abcde dependencies, and I'm not a Python programmer, so I'm clueless. -I have the Python 2.7.12 came with Mint 18, and something called python3 (3.5.2). If I try to install eyeD3 with pip (presumably acting against 2.7.12), it says it's already installed (in /usr/lib/python2.7/dist-packages/eyeD3). I don't know how to force pip to install under python3. -If I do a find / -name eyeD3, the only other thing it turns up is /usr/share/pyshared/eyeD3. But both of those are only directories, and both just contain Python libraries, not executables. -There isn't any other file called eyeD3 anywhere on disk. -Does anyone know what it's supposed to be called, where it's supposed to live, and how I can install it? -P","Gave up...waste of my time and everyone else's sorry. -What I apparently needed was the eyed3 (lowercase 'd') non-python utility.",0.0,False,2,5435 -2018-04-02 22:28:00.080,python pair multiple field entries from csv,"Trying to take data from a csv like this: -col1 col2 -eggs sara -bacon john -ham betty -The number of items in each column can vary and may not be the same. Col1 may have 25 and col2 may have 3. Or the reverse, more or less. 
-And loop through each entry so its output into a text file like this -breakfast_1 -breakfast_item eggs -person sara -breakfast_2 -breakfast_item bacon -person sara -breakfast_3 -breakfast_item ham -person sara -breakfast_4 -breakfast_item eggs -person john -breakfast_5 -breakfast_item bacon -person john -breakfast_6 -breakfast_item ham -person john -breakfast_7 -breakfast_item eggs -person betty -breakfast_8 -breakfast_item bacon -person betty -breakfast_9 -breakfast_item ham -person betty -So the script would need to add the ""breakfast"" number and loop through each breakfast_item and person. -I know how to create one combo but not how to pair up each in a loop? -Any tips on how to do this would be very helpful.","First, get a distinct of all breakfast items. -A pseudo code like below -Iterate through each line -Collect item and person in 2 different lists -Do a set on those 2 lists -Say persons, items - -Counter = 1 -for person in persons: - for item in items: - Print ""breafastitem"", Counter - Print person, item",0.0,False,1,5436 -2018-04-03 07:25:09.530,How to find out Windows network interface name in Python?,"Windows command netsh interface show interface shows all network connections and their names. A name could be Wireless Network Connection, Local Area Network or Ethernet etc. -I would like to change an IP address with netsh interface ip set address ""Wireless Network Connection"" static 192.168.1.3 255.255.255.0 192.168.1.1 1 with Python script, but I need a network interface name. -Is it possible to have this information like we can have a hostname with socket.gethostname()? Or I can change an IP address with Python in other way?","I don't know of a Python netsh API. But it should not be hard to do with a pair of subprocess calls. First issue netsh interface show interface, parse the output you get back, then issue your set address command. -Or am I missing the point?",0.6730655149877884,False,1,5437 -2018-04-03 11:57:26.050,how do I install my modual onto my local copy of python on windows?,"I'm reading headfirst python and have just completed the section where I created a module for printing nested list items, I've created the code and the setup file and placed them in a file labeled ""Nester"" that is sitting on my desktop. The book is now asking for me to install this module onto my local copy of Python. The thing is, in the example he is using the mac terminal, and I'm on windows. I tried to google it but I'm still a novice and a lot of the explanations just go over my head. Can someone give me clear thorough guide?.","On Windows systems, third-party modules (single files containing one or more functions or classes) and third-party packages (a folder [a.k.a. directory] that contains more than one module (and sometimes other folders/directories) are usually kept in one of two places: c:\\Program Files\\Python\\Lib\\site-packages\\ and c:\\Users\\[you]\\AppData\\Roaming\\Python\\. -The location in Program Files is usually not accessible to normal users, so when PIP installs new modules/packages on Windows it places them in the user-accessible folder in the Users location indicated above. You have direct access to that, though by default the AppData folder is ""hidden""--not displayed in the File Explorer list unless you set FE to show hidden items (which is a good thing to do anyway, IMHO). You can put the module you're working on in the AppData\\Roaming\\Python\\ folder. -You still need to make sure the folder you put it in is in the PATH environment variable. 
PATH is a string that tells Windows (and Python) where to look for needed files, in this case the module you're working on. Google ""set windows path"" to find how to check and set your path variable, then just go ahead and put your module in a folder that's listed in your path. -Of course, since you can add any folder/directory you want to PATH, you could put your module anywhere you wanted--including leaving it on the Desktop--as long as the location is included in PATH. You could, for instance, have a folder such as Documents\\Programming\\Python\\Lib to put your personal modules in, and use Documents\\Programming\\Python\\Source for your Python programs. You'd just need to include those in the PATH variable. -FYI: Personally, I don't like the way python is (by default) installed on Windows (because I don't have easy access to c:\\Program Files), so I installed Python in a folder off the drive root: c:\Python36. In this way, I have direct access to the \\Lib\\site-packages\\ folder.",0.0,False,1,5438 -2018-04-03 19:38:45.600,Django : how to give user/group permission to view model instances for a specified period of time,"I am fairly new to Django and could not figure out by reading the docs or by looking at existing questions. I looked into Django permissions and authentication but could not find a solution. -Let's say I have a Detail View listing all instances of a Model called Item. For each Item, I want to control which User can view it, and for how long. In other words, for each User having access to the Item, I want the right/permission to view it to expire after a specified period of time. After that period of time, the Item would disapear from the list and the User could not access the url detailing the Item. -The logic to implement is pretty simple, I know, but the ""per user / per object"" part confuses me. Help would be much appreciated!","Information about UserItemExpiryDate has to be stored in a separate table (Model). I would recommend using your coding in Django. -There are few scenarios to consider: -1) A new user is created, and he/she should have access to items. -In this case, you add entries to UserItemExpiry with new User<>Item combination (as key) and expiry date. Then, for logged in user you look for items from Items that has User<>Item in UserItemExpiry in the future. -2) A new item is created, and it has to be added to existing users. -In such case, you add entries to UserItemExpiry with ALL users<> new Item combination (as key) and expiry date. And logic for ""selecting"" valid items is the same as in point 1. -Best of luck, -Radek Szwarc",1.2,True,1,5439 -2018-04-04 13:46:20.373,how to read text from excel file in python pandas?,"I am working on a excel file with large text data. 2 columns have lot of text data. Like descriptions, job duties. -When i import my file in python df=pd.read_excel(""form1.xlsx""). It shows the columns with text data as NaN. -How do I import all the text in the columns ? -I want to do analysis on job title , description and job duties. Descriptions and Job Title are long text. I have over 150 rows.","Try converting the file from .xlsx to .CSV -I had the same problem with text columns so i tried converting to CSV (Comma Delimited) and it worked. 
Not very helpful, but worth a try.",0.2012947653214861,False,1,5440 -2018-04-04 17:20:14.923,make a web server in localhost with flask,"I want to know that if I can make a web server with Flask in my pc like xampp apache (php) for after I can access this page in others places across the internet. Or even in my local network trough the wifi connection or lan ethernet. Is it possible ? I saw some ways to do this, like using ""uwsgi"".. something like this... but I colud never do it. -OBS: I have a complete application in Flask already complete, with databases and all things working. The only problem is that I don't know how to start the server and access by the others pc's.","Yes, you can. -Just like you said, you can use uwsgi to run your site efficiently. There are other web servers like uwsgi: I usually use Gunicorn. But note that Flask can run without any of these, it will simply be less efficient (but if it is just for you then it should not be a problem). -You can find tutorials on the net with a few keywords like ""serving flask app"". -If you want to access your site from the internet (outside of your local network), you will need to configure your firewall and router/modem to accept connections on port 80 (HTTP) or 443 (HTTPS). -Good luck :)",0.3869120172231254,False,1,5441 -2018-04-04 18:48:40.377,Python - How do I make a window along with widgets without using modules like Tkinter?,"I have been wanting to know how to make a GUI without using a module on Python, I have looked into GUI's in Python but everything leads to Tkinter or other Python GUI modules. The reason I do not want to use Tkinter is because I want to understand how to do it myself. I have looked at the Tkinter modules files but it imports like 4 other Modules. -I don't mind the modules like system, os or math just not modules which I will use and not understand. If you do decide to answer my question please include as much detail and information on the matter. Thanks -- Darrian Penman","You cannot write a GUI in Python without importing either a GUI module or importing ctypes. The latter would require calling OS-specific graphics primitives, and would be far worse than doing the same thing in C. (EDIT: see Roland comment below for X11 systems.) -The python-coded tkinter mainly imports the C-coded _tkinter, which interfaces to the tcl- and C- coded tk GUI package. There are separate versions of tcl/tk for Windows, *nix, and MacOS.",1.2,True,2,5442 -2018-04-04 18:48:40.377,Python - How do I make a window along with widgets without using modules like Tkinter?,"I have been wanting to know how to make a GUI without using a module on Python, I have looked into GUI's in Python but everything leads to Tkinter or other Python GUI modules. The reason I do not want to use Tkinter is because I want to understand how to do it myself. I have looked at the Tkinter modules files but it imports like 4 other Modules. -I don't mind the modules like system, os or math just not modules which I will use and not understand. If you do decide to answer my question please include as much detail and information on the matter. Thanks -- Darrian Penman","For the same reason that you can't write to a database without using a database module, you can't create GUIs without a GUI module. There simply is no way to draw directly on the screen in a cross-platform way without a module. -Writing GUIs is very complex. 
These modules exist to reduce the complexity.",0.2012947653214861,False,2,5442 -2018-04-04 21:19:39.857,Regular Expression in python how to find paired words,"I'm doing the cipher for python. I'm confused on how to use Regular Expression to find a paired word in a text dictionary. -For example, there is dictionary.txt with many English words in it. I need to find word paired with ""th"" at the beginning. Like they, them, the, their ..... -What kind of Regular Expression should I use to find ""th"" at the beginning? -Thank you!","^(th\w*) - -gives you all results where the string begins with th . If there is more than one word in the string you will only get the first. - -(^|\s)(th\w*) - -wil give you all the words begining with th even if there is more than one word begining with th",0.0,False,1,5443 -2018-04-04 22:36:48.643,pycharm ctrl+v copies the item in console instead paste when highlighted,"This has been a very annoying problem for me and I couldn't find any keymaps or settings that could cause this behavior. -Setup: - -Pycharm Professional 2018.1 installed on redhat linux -I remote into the linux machine using mobaX and launch pycharm with window forwarding - -Scenario 1: -I open a browser on windows, copy some text, go to editor or console, paste it somewhere without highlighting any text, hit ctrl+v, it pastes fine -Scenario 2: -I open a browser on windows, copy some text, go to editor or console, highlight some text there, hit ctrl+v in attempt to replace the highlighted text with what's in my clipboard. The text didn't change. I leave pycharm and paste somewhere else, the text in clipboard has now become the text I highlighted. -Edit: -ok I just realized this: as soon as I highlight the text, it gets copied...I've turned this feature off for terminal, but couldn't find a global settings for the editor etc. Anyone know how?","I figured it out: it's caused by the copy-on-select setting of my linux system. To turn it off, go to mobax-settings-configurations-x11-clipboard-disable 'copy on select'",1.2,True,1,5444 -2018-04-05 01:37:04.467,"In Keras, how to send each item in a batch through a model?","I have a model that starts with a Conv2D layer and so it must take input of shape (samples, rows, cols, channels) (and the model must ultimately output a shape of (1)). However, for my purposes one full unit of input needs to be some (fixed) number of samples, so the overall input shape sent into this model when given a batch of input ends up being (batch_size, samples, rows, cols, channels) (which is expected and correct, but...). How do I send each item in the batch through this model so that I end up with an output of shape (batch_size, 1)? -What I have tried so far: -I tried creating an inner model containing the Conv2D layer et al then wrapping the entire thing in a TimeDistributed wrapper, followed by a Dense(units=1) layer. This compiled, but resulted in an output shape of (batch_size, samples, 1). I feel like I am missing something simple...","At the moment you are returning a 3D array. -Add a Flatten() layer to convert the array to 2D, and then add a Dense(1). This should output (batch_size, 1).",0.1352210990936997,False,1,5445 -2018-04-05 06:45:15.460,How to add report_tensor_allocations_upon_oom to RunOptions in Keras,"I'm trying to train a neural net on a GPU using Keras and am getting a ""Resource exhausted: OOM when allocating tensor"" error. The specific tensor it's trying to allocate isn't very big, so I assume some previous tensor consumed almost all the VRAM. 
The error message comes with a hint that suggests this: - -Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. - -That sounds good, but how do I do it? RunOptions appears to be a Tensorflow thing, and what little documentation I can find for it associates it with a ""session"". I'm using Keras, so Tensorflow is hidden under a layer of abstraction and its sessions under another layer below that. -How do I dig underneath everything to set this option in such a way that it will take effect?","OOM means out of memory. May be it is using more memory at that time. -Decrease batch_size significantly. I set to 16, then it worked fine",0.2012947653214861,False,1,5446 -2018-04-05 12:22:11.997,Django multilanguage text and saving it on mysql,"I have a problem with multilanguage and multi character encoded text. -Project use OpenGraph and it will save in mysql database some information from websites. But database have problem with character encoding. I tryed encoding them to byte. That is problem, becouse in admin panel text show us bute and it is not readable. -Please help me. How can i save multilanguage text in database and if i need encode to byte them how can i correctly decode them in admin panel and in views",You should encode all data as UTF-8 which is unicode.,0.0,False,1,5447 -2018-04-05 18:38:56.977,How to Install requests[security] in virtualenv in IntelliJ,I'm using python 2.7.10 virtualenv when running python codes in IntelliJ. I need to install requests[security] package. However I'm not sure how to add that [security] option/config when installing requests package using the Package installer in File > Project Structure settings window.,"Was able to install it by doing: - -Activating the virtualenv in the 'Terminal' tool window: -source /bin/activate -Executing a pip install requests[security]",0.0,False,1,5448 -2018-04-06 07:42:50.770,Use HermiT in Python,"We have an ontology but we need to use the reasoner HermiT to infer the sentiment of a given expression. We have no idea how to use and implement a reasoner in python and we could not find a good explanation on the internet. We found that we can use sync_reasoner() for this, but what does this do exactly? And do we have to call the reasoner manually everytime or does it happen automatically?","You do not need to implement the reasoner. The sync_reasoner() function already calls HermiT internally and does the reasoning for you. -A reasoner will reclassify individuals and classes for you which means it creates a parent-child hierarchy of classes and individuals. When you load an ontology only explicit parent-child relations are represented. However, when you call the reasoner, the parent-child hierarchy is updated to include inferred relations as well. -An example of this is provided in Owlready2-0.5/doc/intro.rst. Before calling sync_reasoner() calling test_pizza.__class__ prints onto.Pizza, which is explicit information. However, after calling sync_reasoner() calling test_pizza.__class__ prints onto.NonVegetarianPizza, which is the inferred information.",1.2,True,1,5449 -2018-04-06 17:50:58.693,Saving data to MacOS python application,"I am using Pyinstaller to create my Python app from a set of scripts. This script uses a library that saves downloaded data to the '~/' directory (using the os.join function). 
-I was wondering how to edit the code in the library so that when it runs, it saves data to inside the app (like in the package, the Contents/Resources maybe)?","I was wondering how to edit the code in the library so that when it runs, it saves data to inside the app - -Don't do that. This isn't a standard practice in macOS applications, and will fail in some standard system configurations. For example, it will fail if the application is used by a non-administrator user, or if the application is run from a read-only disk image or network share. -More importantly, it'll also make it difficult or impossible to sign the application bundle with a developer certificate.",1.2,True,1,5450 -2018-04-09 02:59:03.377,How are PyTorch's tensors implemented?,"I am building my own Tensor class in Rust, and I am trying to make it like PyTorch's implementation. -What is the most efficient way to store tensors programmatically, but, specifically, in a strongly typed language like Rust? Are there any resources that provide good insights into how this is done? -I am currently building a contiguous array, so that, given dimensions of 3 x 3 x 3, my array would just have 3^3 elements in it, which would represent the tensor. However, this does make some of the mathematical operations and manipulations of the array harder. -The dimension of the tensor should be dynamic, so that I could have a tensor with n dimensions.","Contiguous array -The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other. -The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item [3, 1, 1] you would need to know if it is a 3x3x3 matrix, a 9x3x1 matrix, or a 27x1x1 matrix - in all cases the ""storage"" would be 27 items long, but the interpretation of ""coordinates"" would be different. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension. -This does mean that resizing and similar operations may require copying the whole array, but that's ok, you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.",1.2,True,1,5451 -2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x. -After these changes i get a message ""cannot find declaration to go to"" when hover over any method and press ctrl -I'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist. -Do you have any idea how can i fix it?","Right click on the folders where you believe relevant code is located ->Mark Directory as-> Sources Root -Note that the menu's wording ""Sources Root"" is misleading: the indexing process is not recursive. You need to mark all the relevant folders.",1.2,True,5,5452 -2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x. -After these changes i get a message ""cannot find declaration to go to"" when hover over any method and press ctrl -I'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist. -Do you have any idea how can i fix it?","I had a case where the method was implemented in a base class and Pycharm couldn't find it. 
-I solved it by importing the base class into the module I was having trouble with.",0.0814518047658113,False,5,5452 -2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x. -After these changes i get a message ""cannot find declaration to go to"" when hover over any method and press ctrl -I'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist. -Do you have any idea how can i fix it?",What worked for me was right-click on the folder that has the manage.py > Mark Directory as > Source Root.,0.3869120172231254,False,5,5452 -2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x. -After these changes i get a message ""cannot find declaration to go to"" when hover over any method and press ctrl -I'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist. -Do you have any idea how can i fix it?","The solution for me: remember to add an interpreter to the project, it usually says in the bottom right corner if one is set up or not. Just an alternate solution than the others. -This happened after reinstalling PyCharm and not fully setting up the ide.",0.0814518047658113,False,5,5452 -2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x. -After these changes i get a message ""cannot find declaration to go to"" when hover over any method and press ctrl -I'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist. -Do you have any idea how can i fix it?","I had same issue and invalidating cache or reinstalling the app didn't help. -As it turned out the problem was next: for some reasons *.py files were registered as a text files, not python ones. After I changed it, code completion and other IDE features started to work again. -To change file type go Preferences -> Editor -> File types",0.3869120172231254,False,5,5452 -2018-04-11 09:36:40.700,VScode run code selection,"I just made the transition from Spyder to VScode for my python endeavours. Is there a way to run individual lines of code? That's how I used to do my on-the-spot debugging, but I can't find an option for it in VScode and really don't want to keep setting and removing breakpoints. -Thanks.","In my ver of VSCode (1.25), shift+enter will run selection. Note that you will want to have your integrated terminal running python.",0.2401167094949473,False,2,5453 -2018-04-11 09:36:40.700,VScode run code selection,"I just made the transition from Spyder to VScode for my python endeavours. Is there a way to run individual lines of code? That's how I used to do my on-the-spot debugging, but I can't find an option for it in VScode and really don't want to keep setting and removing breakpoints. -Thanks.","I'm still trying to figure out how to make vscode do what I need (interactive python plots), but I can offer a more complete answer to the question at hand than what has been given so far: -1- Evaluate current selection in debug terminal is an option that is not enabled by default, so you may want to bind the 'editor.debug.action.selectionToRepl' action to whatever keyboard shortcut you choose (I'm using F9). As of today, there still appears to be no option to evaluate current line while debugging, only current selection. 
-2- Evaluate current line or selection in python terminal is enabled by default, but I'm on Windows where this isn't doing what I would expect - it evaluates in a new runtime, which does no good if you're trying to debug an existing runtime. So I can't say much about how useful this option is, or even if it is necessary since anytime you'd want to evaluate line-by-line, you'll be in debug mode anyway and sending to debug console as in 1 above. The Windows issue might have something to do with the settings.json entry -""terminal.integrated.inheritEnv"": true, -not having an affect in Windows as of yet, per vscode documentation.",0.0,False,2,5453 -2018-04-11 12:21:36.483,How to run a django project without manage.py,"Basically I downloaded django project from SCM, Usually I run the project with with these steps - -git clone repository -extract -change directory to project folder -python manage.py runserver - -But this project does not contains manage.py , how to run this project in my local machine??? -br","Most likely, this is not supposed to be a complete project, but a plugin application. You should create your own project in the normal way with django-admin.py startproject and add the downloaded app to INSTALLED_APPS.",0.4701041941942874,False,1,5454 -2018-04-11 21:08:57.980,Python - Subtracting the Elements of Two Arrays,"I am new to Python programming and stumbled across this feature of subtracting in python that I can't figure out. I have two 0/1 arrays, both of size 400. I want to subtract each element of array one from its corresponding element in array 2. -For example say you have two arrays A = [0, 1, 1, 0, 0] and B = [1, 1, 1, 0, 1]. -Then I would expect A - B = [0 - 1, 1 - 1, 1 - 1, 0 - 0, 0 - 1] = [-1, 0, 0, 0, -1] -However in python I get [255, 0, 0, 0, 255]. -Where does this 255 come from and how do I get -1 instead? -Here's some additional information: -The real variables I'm working with are Y and LR_predictions. -Y = array([[0, 0, 0, ..., 1, 1, 1]], dtype=uint8) -LR_predictions = array([0, 1, 1, ..., 0, 1, 0], dtype=uint8) -When I use either Y - LR_predictions or numpy.subtract(Y, LR_predictions) -I get: array([[ 0, 255, 255, ..., 1, 0, 1]], dtype=uint8) -Thanks",I can't replicate this but it looks like the numbers are 8 bit and wrapping some how,0.0,False,1,5455 -2018-04-12 00:14:16.700,"How do I save a text file in python, to my File Explorer?","I've been using Python for a few months, but I'm sort of new to Files. I would like to know how to save text files into my Documents, using "".txt"".",If you do not like to overwrite existing file then use a or a+ mode. This just appends to existing file. a+ is able to read the file as well,0.0,False,1,5456 -2018-04-12 19:41:16.037,Send data from Python backend to Highcharts while escaping quotes for date,"I would highly appreciate any help on this. I'm constructing dynamic highcharts at the backend and would like to send the data along with html to the frontend. -In highcharts, there is a specific field to accept Date such as: -x:Date.UTC(2018,01,01) -or x:2018-01-01. However, when I send dates from the backend, it is always surrounded by quotes,so it becomes: x:'Date.UTC(2018,01,01)' -and x:'2018-01-01', which does not render the chart. Any suggestions on how to escape these quotes?","Highcharts expects the values on datetime axes to be timestamps (number of miliseconds from 01.01.1970). Date.UTC is a JS function that returns a timestamp as Number. Values surrounded by apostrophes are Strings. 
-I'd rather suggest to return a timestamp as a String from backend (e.g. '1514764800000') and then convert it to Number in JS (you can use parseInt function for that.)",0.0,False,1,5457 -2018-04-13 03:56:24.947,Google Cloud - What products for time series data cleaning?,"I have around 20TB of time series data stored in big query. -The current pipeline I have is: -raw data in big query => joins in big query to create more big query datasets => store them in buckets -Then I download a subset of the files in the bucket: -Work on interpolation/resampling of data using Python/SFrame, because some of the time series data have missing times and they are not evenly sampled. -However, it takes a long time on a local PC, and I'm guessing it will take days to go through that 20TB of data. - -Since the data are already in buckets, I'm wondering what would the best Google tools for interpolation and resampling? -After resampling and interpolation I might use Facebook's Prophet or Auto ARIMA to create some forecasts. But that would be done locally. - -There's a few services from Google that seems are like good options. - -Cloud DataFlow: I have no experience in Apache Beam, but it looks like the Python API with Apache Beam have missing functions compared to the Java version? I know how to write Java, but I'd like to use one programming language for this task. -Cloud DataProc: I know how to write PySpark, but I don't really need any real time processing or stream processing, however spark has time series interpolation, so this might be the only option? -Cloud Dataprep: Looks like a GUI for cleaning data, but it's in beta. Not sure if it can do time series resampling/interpolation. - -Does anyone have any idea which might best fit my use case? -Thanks","I would use PySpark on Dataproc, since Spark is not just realtime/streaming but also for batch processing. -You can choose the size of your cluster (and use some preemptibles to save costs) and run this cluster only for the time you actually need to process this data. Afterwards kill the cluster. -Spark also works very nicely with Python (not as nice as Scala) but for all effects and purposes the main difference is performance, not reduced API functionality. -Even with the batch processing you can use the WindowSpec for effective time serie interpolation -To be fair: I don't have a lot of experience with DataFlow or DataPrep, but that's because out use case is somewhat similar to yours and Dataproc works well for that",1.2,True,1,5458 -2018-04-16 06:29:31.313,Nested list comprehension to flatten nested list,"I'm quite new to Python, and was wondering how I flatten the following nested list using list comprehension, and also use conditional logic. -nested_list = [[1,2,3], [4,5,6], [7,8,9]] -The following returns a nested list, but when I try to flatten the list by removing the inner square brackets I get errors. -odds_evens = [['odd' if n % 2 != 0 else 'even' for n in l] for l in nested_list]","To create a flat list, you need to have one set of brackets in comprehension code. 
Try the below code: -odds_evens = ['odd' if n%2!=0 else 'even' for n in l for l in nested_list] -Output: -['odd', 'odd', 'odd', 'even', 'even', 'even', 'odd', 'odd', 'odd']",-0.1016881243684853,False,1,5459 -2018-04-17 09:44:50.407,Password protect a Python Script that is Scheduled to run daily,"I have a python script that is scheduled to run at a fixed time daily -If I am not around my colleague will be able to access my computer to run the script if there is any error with the windows task scheduler -I like to allow him to run my windows task scheduler but also to protect my source code in the script... is there any good way to do this, please? -(I have read methods to use C code to hide it but I am only familiar with Python) -Thank you","Compile the source to the .pyc bytecode, and then move the source somewhere inaccessible. - -Open a terminal window in the directory containing your script -Run python -m py-compile (you should get a yourfile.pyc file) -Move somewhere secure -your script can now be run as python - -Note that is is not necessarily secure as such - there are ways to decompile the bytecode - but it does obfuscate it, if that is your requirement.",1.2,True,1,5460 -2018-04-18 09:29:24.593,Scrapy - order of crawled urls,"I've got an issue with scrapy and python. -I have several links. I crawl data from each of them in one script with the use of loop. But the order of crawled data is random or at least doesn't match to the link. -So I can't match url of each subpage with the outputed data. -Like: crawled url, data1, data2, data3. -Data 1, data2, data3 => It's ok, because it comes from one loop, but how can I add to the loop current url or can I set the order of link's list? Like first from the list is crawled as first, second is crawled as second...",time.sleep() - would it be a solution?,0.0,False,2,5461 -2018-04-18 09:29:24.593,Scrapy - order of crawled urls,"I've got an issue with scrapy and python. -I have several links. I crawl data from each of them in one script with the use of loop. But the order of crawled data is random or at least doesn't match to the link. -So I can't match url of each subpage with the outputed data. -Like: crawled url, data1, data2, data3. -Data 1, data2, data3 => It's ok, because it comes from one loop, but how can I add to the loop current url or can I set the order of link's list? Like first from the list is crawled as first, second is crawled as second...","Ok, It seems that the solution is in settings.py file in scrapy. -DOWNLOAD_DELAY = 3 -Between requests. -It should be uncommented. Defaultly it's commented.",-0.1352210990936997,False,2,5461 -2018-04-18 20:24:57.843,gcc error when installing pyodbc,"I am installing pyodbc on Redhat 6.5. Python 2.6 and 2.7.4 are installed. I get the following error below even though the header files needed for gcc are in the /usr/include/python2.6. -I have updated every dev package: yum groupinstall -y 'development tools' -Any ideas on how to resolve this issue would be greatly appreciated??? -Installing pyodbc... -Processing ./pyodbc-3.0.10.tar.gz -Installing collected packages: pyodbc - Running setup.py install for pyodbc ... 
error - Complete output from command /opt/rh/python27/root/usr/bin/python -u -c ""import setuptools, tokenize;file='/tmp/pip-JAGZDD-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))"" install --record /tmp/pip-QJasL0-record/install-record.txt --single-version-externally-managed --compile: - running install - running build - running build_ext - building 'pyodbc' extension - creating build - creating build/temp.linux-x86_64-2.7 - creating build/temp.linux-x86_64-2.7/tmp - creating build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build - creating build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build/src - gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPYODBC_VERSION=3.0.10 -DPYODBC_UNICODE_WIDTH=4 -DSQL_WCHART_CONVERT=1 -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/include -I/opt/rh/python27/root/usr/include/python2.7 -c /tmp/pip-JAGZDD-build/src/cnxninfo.cpp -o build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build/src/cnxninfo.o -Wno-write-strings - In file included from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: - ** -**/tmp/pip-JAGZDD-build/src/pyodbc.h:41:20: error: Python.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:42:25: error: floatobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:43:24: error: longobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:44:24: error: boolobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:45:27: error: unicodeobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:46:26: error: structmember.h: No such file or directory -** - In file included from /tmp/pip-JAGZDD-build/src/pyodbc.h:137, - from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:61:28: error: stringobject.h: No such file or directory - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:62:25: error: intobject.h: No such file or directory - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:63:28: error: bufferobject.h: No such file or directory - In file included from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: - /tmp/pip-JAGZDD-build/src/pyodbc.h: In function ‘void _strlwr(char*)’: - /tmp/pip-JAGZDD-build/src/pyodbc.h:92: error: ‘tolower’ was not declared in this scope - In file included from /tmp/pip-JAGZDD-build/src/pyodbc.h:137, - from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: - /tmp/pip-JAGZDD-build/src/pyodbccompat.h: At global scope: - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:71: error: expected initializer before ‘*’ token - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘Text_Buffer’ declared as an ‘inline’ variable - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘PyObject’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘o’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:82: error: expected ‘,’ or ‘;’ before ‘{’ token - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘Text_Check’ declared as an ‘inline’ variable - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘PyObject’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘o’ was not declared in this scope - 
/tmp/pip-JAGZDD-build/src/pyodbccompat.h:94: error: expected ‘,’ or ‘;’ before ‘{’ token - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: ‘PyObject’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: ‘lhs’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: expected primary-expression before ‘const’ - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: initializer expression list treated as compound expression - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘Text_Size’ declared as an ‘inline’ variable - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘PyObject’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘o’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:110: error: expected ‘,’ or ‘;’ before ‘{’ token - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘TextCopyToUnicode’ declared as an ‘inline’ variable - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘Py_UNICODE’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘buffer’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘PyObject’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘o’ was not declared in this scope - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: initializer expression list treated as compound expression - /tmp/pip-JAGZDD-build/src/pyodbccompat.h:119: error: expected ‘,’ or ‘;’ before ‘{’ token - error: command 'gcc' failed with exit status 1",The resolution was to re-install Python 2.7,0.0,False,1,5462 -2018-04-18 21:14:22.503,Count number of nodes per level in a binary tree,"I've been searching for a bit now and haven't been able to find anything similar to my question. Maybe I'm just not searching correctly. Anyways, this is a question from my exam review. Given a binary tree, I need to output a list such that each item in the list is the number of nodes on a level in a binary tree at the item's list index. What I mean is, lst = [1,2,1] and the 0th index is the 0th level in the tree and the 1 is how many nodes are in that level. lst[1] will represent the number of nodes (2) in that binary tree at level 1. The tree isn't guaranteed to be balanced. We've only been taught preorder, inorder and postorder traversals, and I don't see how they would be useful in this question. I'm not asking for specific code, just an idea on how I could figure this out or the logic behind it. Any help is appreciated.","The search ordering doesn't really matter as long as you only count each node once. A depth-first search solution with recursion would be: - -Create a map counters to store a counter for each level. E.g. counters[i] is the number of nodes found so far at level i. Let's say level 0 is the root. -Define a recursive function count_subtree(node, level): Increment counters[level] once. Then for each child of the given node, call count_subtree(child, level + 1) (the child is at a 1-deeper level). -Call count_subtree(root_node, 0) to count starting at the root. This will result in count_subtree being run exactly once on each node because each node only has one parent, so counters[level] will be incremented once per node. A leaf node is the base case (no children to call the recursive function on). -Build your final list from the values of counters, ordered by their keys ascending. - -This would work with any kind of tree, not just binary. 
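-A minimal sketch of that recursion in Python (assuming a hypothetical Node class whose children sit in a list attribute called children):
-from collections import defaultdict
-def level_counts(root):
-    counters = defaultdict(int)              # level -> nodes counted so far
-    def count_subtree(node, level):
-        counters[level] += 1                 # count this node exactly once
-        for child in node.children:          # each child is one level deeper
-            count_subtree(child, level + 1)
-    count_subtree(root, 0)
-    return [counters[i] for i in sorted(counters)]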
Running time is O(number of nodes in tree). Side note: The depth-first search solution would be easier to divide and run on parallel processors or machines than a similar breadth-first search solution.",0.3869120172231254,False,1,5463 -2018-04-19 08:06:38.817,Going back to previous line in Spyder,"I am using the Spyder editor and I have to go back and forth from the piece of code that I am writing to the definition of the functions I am calling. I am looking for shortcuts to move given this issue. I know how to go to the function definition (using Ctrl + g), but I don't know how to go back to the piece of code that I am writing. Is there an easy way to do this?","(Spyder maintainer here) You can use the shortcuts Ctrl+Alt+Left and Ctrl+Alt+Right to move to the previous/next cursor position, respectively.",1.2,True,1,5464 -2018-04-19 12:15:18.167,clean up python versions mac osx,"I tried to run a python script on my mac computer, but I ended up in trouble as it needed to install pandas as a dependency. -I tried to get this dependency, but to do so I installed different components like brew, pip, wget and others, including different versions of python using brew and a .pkg package downloaded from python.org. -In the end, I was not able to run the script anyway. -Now I would like to sort things out and have only one version of python (3 probably) working correctly. -Can you suggest how to get an overview of what I have installed on my computer and how I can clean it up? -Thank you in advance","Use brew list to see what you've installed with Brew. And brew uninstall as needed. Likewise, review the logs from wget to see where it installed things. Keep in mind that MacOS uses Python 2.7 for system critical tasks; it's baked into the OS so don't touch it. -Anything you installed with pip is saved to the /site-packages directory of the Python version in which you installed it, so it will disappear when you remove that version of Python. -The .pkg files install directly into your Applications folder and can be deleted safely like any normal app.",0.999329299739067,False,1,5465 -2018-04-19 20:49:21.823,Python/Flask: only one user can call an endpoint at one time,"I have an API built using Python/Flask, and I have an endpoint called /build-task that is called by the system, and this endpoint takes about 30 minutes to run. -My question is: how do I lock the /build-task endpoint when it's already started and running? So that no other user or system can call this endpoint.","You have some approaches for this problem: -1 - You can create a session object, save a flag in the object and check if the endpoint is already running and respond accordingly. -2 - Flag on the database, check if the endpoint is already running and respond accordingly.",0.3869120172231254,False,1,5466 -2018-04-19 22:13:32.410,"After delay() is called on a celery task, it takes more than 5 to 10 seconds for the tasks to even start executing with redis as the server","I have Redis as my Cache Server. When I call delay() on a task, it takes more than 10 seconds for it to even start executing. Any idea how to reduce this unnecessary lag? -Should I replace Redis with RabbitMQ?","It's very difficult to say what the cause of the delay is without being able to inspect your application and server logs, but I can reassure you that the delay is not normal and not an effect specific to either Celery or using Redis as the broker. I've used this combination a lot in the past and execution of tasks happens in a number of milliseconds. 
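-For reference, a minimal setup that normally dispatches with millisecond-level latency looks something like this (a sketch; the Redis URL is an assumption about your deployment):
-from celery import Celery
-app = Celery('tasks', broker='redis://localhost:6379/0')
-@app.task
-def add(x, y):
-    return x + y
-# add.delay(2, 3) should be picked up by a running worker almost immediately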
-I'd start by ensuring there are no network related issues between your client creating the tasks, your broker (Redis) and your task consumers (celery workers). -Good luck!",1.2,True,1,5467 -2018-04-21 12:34:46.780,add +1 hour to datetime.time() django on forloop,"I have code like this, I want to check the time range that has overtime and sum it. -Currently, I am trying out.hour+1 with this code, but it didn't work. - - - overtime_all = 5 - overtime_total_hours = 0 - out = datetime.time(14, 30) - - while overtime_all > 0: - overtime200 = object.filter(time__range=(out, out.hour+1)).count() - overtime_total_hours = overtime_total_hours + overtime200 - overtime_all -=1 - - print overtime_total_hours - - -how to add 1 hour every loop?...","Timedelta (from datetime) can be used to increment or decrement datetime objects. Unfortunately, it cannot be directly combined with datetime.time objects. -If the values that are stored in your time column are datetime objects, you can use them (e.g.: my_datetime + timedelta(hours=1)). If they are time objects, you'll need to think about whether they represent a moment in time (in that case, they should be converted to datetime objects) or a duration (in that case, it's probably easier to store it as an integer representing the total amount of minutes, and to perform all operations on integers).",1.2,True,2,5468 -2018-04-21 12:34:46.780,add +1 hour to datetime.time() django on forloop,"I have code like this, I want to check the time range that has overtime and sum it. -Currently, I am trying out.hour+1 with this code, but it didn't work. - - - overtime_all = 5 - overtime_total_hours = 0 - out = datetime.time(14, 30) - - while overtime_all > 0: - overtime200 = object.filter(time__range=(out, out.hour+1)).count() - overtime_total_hours = overtime_total_hours + overtime200 - overtime_all -=1 - - print overtime_total_hours - - -how to add 1 hour every loop?...","I found the solution now, and it works. - - - overtime_all = 5 - overtime_total_hours = 0 - out = datetime.time(14, 30) - - while overtime_all > 0: - overtime200 = object.filter(time__range=(out,datetime.time(out.hour+1, 30))).count() - overtime_total_hours = overtime_total_hours + overtime200 - overtime_all -=1 - - print overtime_total_hours - -I changed out.hour+1 to datetime.time(out.hour+1, 30) and it works fine now, but there may be a more compact/better solution. -thank you guys for your answers.",0.2012947653214861,False,2,5468 -2018-04-22 02:32:28.877,k-means clustering multi column data in python,"I have a dataset which consists of 2000 lines in a text file. -Each line represents the x,y,z (3D coordinate location) of 20 skeleton joint points of the human body (eg: head, shoulder center, shoulder left, shoulder right,......, elbow left, elbow right). I want to do k-means clustering of this data. -Data is separated by 'spaces ', each joint is represented by 3 values (which represent x,y,z coordinates). Like head and shoulder center represented by -.0255... .01556600 1.3000... .0243333 .010000 .1.3102000 .... -So basically I have 60 columns in each row, which represent 20 joints, and each joint consists of three values. -My question is how do I format or use this data for k-means clustering?","You don't need to reformat anything. -Each row is a 60 dimensional vector of continuous values with a comparable scale (coordinates), as needed for k-means. -You can just run k-means on this. 
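-For example (a sketch assuming the file is whitespace-separated and scikit-learn is available; the filename and cluster count are placeholders):
-import numpy as np
-from sklearn.cluster import KMeans
-data = np.loadtxt('skeletons.txt')           # shape (2000, 60): 20 joints x 3 coords
-kmeans = KMeans(n_clusters=5).fit(data)      # pick n_clusters to suit your data
-labels = kmeans.labels_                      # one cluster id per row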
-But assuming that the measurements were taken in sequence, you may observe a strong correlation between rows, so I wouldn't expect the data to cluster extremely well, unless you set up the users to do and hold certain poses.",1.2,True,1,5469 -2018-04-22 11:28:39.070,How to get the quantity of products on a specified date in odoo 10,"I want to create a table in odoo 10 with the following columns: quantity_in_the_first_day_of_month,input_quantity,output_quantity,quantity_in_the_last_day_of_the_month. -but I don't know how to get the quantity on the specified date","You can join the sale order and sale order line to get the specified date. -select - sum(sol.product_uom_qty) -from - sale_order s,sale_order_line sol -where - sol.order_id=s.id and - DATE(s.date_order) = '2018-01-01'",0.0,False,1,5470 -2018-04-24 04:53:51.450,How do CPU cores get allocated to python processes in multiprocessing?,"Let's say I am running multiple python processes (not threads) on a multi core CPU (say 4). GIL is process level so the GIL within a particular process won't affect other processes. -My question here is: will the GIL within one process take hold of only a single core out of 4 cores, or will it take hold of all 4 cores? -If one process locks all cores at once, then multiprocessing should not be any better than multi threading in python. If not, how do the cores get allocated to various processes? - -As an observation, in my system which is 8 cores (4*2 because of - hyperthreading), when I run a single CPU bound process, the CPU usage - of 4 out of 8 cores goes up. - -Simplifying this: -4 python threads (in one process) running on a 4 core CPU will take more time than a single thread doing the same work (considering the work is fully CPU bound). Will 4 different processes doing that amount of work reduce the time taken by a factor of near 4?",Process to CPU/CPU core allocation is handled by the Operating System.,0.0,False,1,5471 -2018-04-24 13:49:41.587,"How to read back the ""random-seed"" from a saved model of Dynet","I have a model already trained with the dynet library. But I forgot the --dynet-seed parameter when training this model. -Does anyone know how to read back this parameter from the saved model? -Thank you in advance for any feedback.","You can't read back the seed parameter. A Dynet model does not save the seed parameter. The obvious reason is that it is not required at testing time. The seed is only used to set fixed initial weights, random shuffling etc. for different experimental runs. At testing time no parameter initialisation or shuffling is required. So, there is no need to save the seed parameter. -To the best of my knowledge, none of the other libraries like tensorflow, pytorch etc. save the seed parameter either.",1.2,True,1,5472 -2018-04-24 20:57:16.490,Django/Python - Serial line concurrency,"I'm currently working on a gateway with an embedded Linux and a Webserver. The goal of the gateway is to retrieve data from electrical devices through an RS485/Modbus line, and to display them on a server. -I'm using Nginx and Django, and the web front-end is delivered by ""static"" files. Repeatedly, a Javascript script file makes AJAX calls that send CGI requests to Nginx. These CGI requests are answered with JSON responses thanks to Django. The responses are mostly data that has been read from the appropriate Modbus device. -The exact path is the following : -Randomly timed CGI call -> urls.py -> ModbusCGI.py (imports another script, ModbusComm.py) -> ModbusComm.py creates a Modbus client and instantly tries to read with it. 
-Next to that, I wanted to implement a Datalogger, to store data in a DB at regular intervals. I made a script that also imports the ModbusComm.py script, but it doesn't work: sometimes multiple Modbus frames are sent at the same time (the datalogger and cgi scripts call the same function in the ModbusComm.py ""files"" at the same time) which results in an error. -I'm sure this problem would also occur if there are a lot of users on the server (CGI requests sent at the same time). Or not? (is a queue system already managed for CGI requests? I'm a bit lost) -So my goal would be to make a queue system that could handle calls from several python scripts => make them wait while it's not their turn => call a function with the right arguments when it's their turn (actually using the modbus line), and send back the response to the python script so it can generate the JSON response. -I really don't know how to achieve that, and I'm sure there are better ways to do this. -If I'm not clear enough, don't hesitate to make me aware of it :)","I had the same problem when I had to allow multiple processes to read some Modbus (and not only Modbus) data through a serial port. I ended up with a standalone process (“serial port server”) that exclusively works with a serial port. All other processes work with that port through that standalone process via some inter-process communication mechanism (we used Unix sockets). -This way when an application wants to read a Modbus register it connects to the “serial port server”, sends its request and receives the response. All the actual serial port communication is done by the “serial port server” in a sequential way to ensure consistency.",0.0,False,1,5473 -2018-04-24 22:23:26.923,Make Python 3 default on Mac OS?,"I would like to ask if it is possible to make Python 3 the default interpreter on Mac OS 10 when typing python right away from the terminal? If so, can somebody help how to do it? I'm avoiding switching between the environments. -Cheers","You can do that by adding an alias, typing in something like $ alias python=python3 in the terminal. -If you want the change to persist, open ~/.bash_profile using nano and then add alias python=python3. CTRL+O to save and CTRL+X to close. -Then type $ source ~/.bash_profile in the terminal.",0.2012947653214861,False,1,5474 -2018-04-25 00:38:39.330,can't import more than 50 contacts from csv file to telegram using Python3,"Trying to import 200 contacts from a CSV file to telegram using Python3 code. It works with the first 50 contacts and then stops, showing the error below: -telethon.errors.rpc_error_list.FloodWaitError: A wait of 101 seconds is required -Any idea how I can import the whole list without waiting?? Thanks!!","You can not import a large number of people sequentially. Telegram will decide you're spamming. -As a result, you must sleep between your requests",0.0,False,1,5475 -2018-04-25 07:54:39.583,Grouping tests in pytest: Classes vs plain functions,"I'm using pytest to test my app. -pytest supports 2 approaches (that I'm aware of) of how to write tests: - -In classes: - - -test_feature.py -> class TestFeature -> def test_feature_sanity - - -In functions: - - -test_feature.py -> def test_feature_sanity - -Is the approach of grouping tests in a class needed? Is it a backport of the builtin unittest module? -Which approach would you say is better and why?","There are no strict rules regarding organizing tests into modules vs classes. It is a matter of personal preference. 
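-Both styles are collected and run the same way; for instance (a sketch):
-# test_feature.py
-def test_feature_sanity():                   # plain function style
-    assert 1 + 1 == 2
-class TestFeature:                           # class style, no unittest inheritance needed
-    def test_feature_sanity(self):
-        assert 1 + 1 == 2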
Initially I tried organizing tests into classes; after some time I realized I had no use for another level of organization. Nowadays I just collect test functions into modules (files). -I could see a valid use case when some tests could be logically organized into the same file, but still have an additional level of organization into classes (for instance to make use of a class-scoped fixture). But this can also be done just by splitting into multiple modules.",1.2,True,2,5476 -2018-04-25 07:54:39.583,Grouping tests in pytest: Classes vs plain functions,"I'm using pytest to test my app. -pytest supports 2 approaches (that I'm aware of) of how to write tests: - -In classes: - - -test_feature.py -> class TestFeature -> def test_feature_sanity - - -In functions: - - -test_feature.py -> def test_feature_sanity - -Is the approach of grouping tests in a class needed? Is it a backport of the builtin unittest module? -Which approach would you say is better and why?","Typically in unit testing, the object of our tests is a single function. That is, a single function gives rise to multiple tests. In reading through test code, it's useful to have tests for a single unit be grouped together in some way (which also allows us to e.g. run all tests for a specific function), so this leaves us with two options: - -Put all tests for each function in a dedicated module -Put all tests for each function in a class - -In the first approach we would still be interested in grouping all tests related to a source module (e.g. utils.py) in some way. Now, since we are already using modules to group tests for a function, this means that we should like to use a package to group tests for a source module. -The result is one source function maps to one test module, and one source module maps to one test package. -In the second approach, we would instead have one source function map to one test class (e.g. my_function() -> TestMyFunction), and one source module map to one test module (e.g. utils.py -> test_utils.py). -It depends on the situation, perhaps, but the second approach, i.e. a class of tests for each function you are testing, seems clearer to me. Additionally, if we are testing source classes/methods, then we could simply use an inheritance hierarchy of test classes, and still retain the one source module -> one test module mapping. -Finally, another benefit to either approach over just a flat file containing tests for multiple functions, is that with classes/modules already identifying which function is being tested, you can have better names for the actual tests, e.g. test_does_x and test_handles_y instead of test_my_function_does_x and test_my_function_handles_y.",0.9999092042625952,False,2,5476 -2018-04-25 08:16:18.483,How to calculate a 95% credible region for a 2D joint distribution?,"Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p. Both are discrete; (x_1,x_2) is a scatter, its contour can be drawn, marginals as well. I would like to show the area of the 95% quantile (a region containing 95% of the data) of the joint distribution; how can I do that?","If you are interested in finding a pair x_1, x_2 of real numbers such that -P(X_1<=x_1, X_2<=x_2) = 0.95 and your distribution is continuous then there will be infinitely many of these pairs. You might be better off just fixing one of them and then finding the other",0.0,False,2,5477 -2018-04-25 08:16:18.483,How to calculate a 95% credible region for a 2D joint distribution?,"Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p. 
Both are discrete; (x_1,x_2) is a scatter, its contour can be drawn, marginals as well. I would like to show the area of the 95% quantile (a region containing 95% of the data) of the joint distribution; how can I do that?","As the other answer points out, there are infinitely many solutions to this problem. A practical one is to find the approximate center of the point cloud and extend a circle from there until it contains approximately 95% of the data. Then, find the convex hull of the selected points and compute its area. -Of course, this will only work if the data is sort of concentrated in a single area. This won't work if there are several clusters.",0.2012947653214861,False,2,5477 -2018-04-25 12:20:28.077,queries and advanced operations in influxdb,"Recently started working on influxDB; I can't find how to add new measurements or make a table of data from separate measurements, like in SQL where we have to join tables or so. -The influxdb docs aren't that clear. I'm currently using the terminal for everything and wouldn't mind switching to python, but most of the docs are about HTTP post schemes; is there any other alternative? -I would prefer influxDB in python if the community support is good","The InfluxDB query language does not support joins across measurements. -It instead needs to be done client side after querying data. Querying data from multiple measurements, without a join, can be done with one query.",1.2,True,1,5478 -2018-04-26 22:40:24.603,Run external python file with Mininet,"I am trying to write a defense system using mininet + pox. -I have an l3_edited file to calculate entropy, so I can tell when a host is attacked. -I have my myTopo.py file that creates a topo with Mininet. -Now my question: -I want to change hosts' ips when l3_edited detects an attack. Where should I do it? -I believe I should write a program and run it in mininet (not like a custom topo, but run it after creating mininet, on the command line). If that's true, how can I get the hosts' objects? If I can get them, I can change their IPs. -Or should I do it in my myTopo.py??? Then, how can I run my defense code when I detect an attack?","If someone is looking for an answer... -You can use your custom topology file to do other tasks. Multithreading solved my problem.",1.2,True,1,5479 -2018-04-27 12:58:39.440,Select columns periodically on pandas DataFrame,"I'm working on a Dataframe with 1116 columns, how could I select just the columns with a period of 17? -More clearly, select the 12th, 29th, 46th, 63rd... columns","df.iloc[:, 11::17] (the 12th column is at index 11, counting from zero, and this takes every 17th column from there)",0.0,False,1,5480 -2018-04-27 15:23:38.983,How to create different Python Wheel distributions for Ubuntu and RedHat,"I have a Cython-based package which depends on other C++ SO libraries. Those libraries differ at the binary level between Ubuntu (dev) and RedHat (prod). So the SO file generated by Cython has to be different as well. If I use Wheel to package it, the file name is the same for both environments: -package-version-cp27-cp27mu-linux_x86_64.whl -So if I upload it to pypi it will conflict with the RedHat-based distribution of the same package. I have to upload it to pypi because the project is then PEX-ed (via Pants), and PEX tries to download from pypi and fails with the following exception if it does not find it. -Exception caught: 'pex.resolver.Unsatisfiable' -Any ideas how to resolve it? -Thx.","I found a solution by using a different PyPi instance. So our DEV Ubuntu environment and PROD RedHat just use two different PyPi sources. 
-To do that I had to make two configurations, ~/.pypirc and ~/.pip/pip.conf, to upload.",0.0,False,1,5481 -2018-04-28 20:06:07.330,Why use zappa/chalice in serverless python apps?,"I am new to python and thought it would be great to have my very first python project running on AWS infrastructure. Given my previous node.js experience with lambdas, I thought that every function would have its own code and the app is only glued together by the persistence layer; everything else is decoupled separate functions. -In Python lambdas there are serverless microframeworks like Chalice or Zappa that seem to be an accepted practice. For me though it feels like they are hacking around the concept of the serverless approach. You still have a full-blown app built on let's say Flask, or even Django, and that app is served through lambda. There is still one application that has all the routing, configs, boilerplate code, etc instead of small independent functions that just do their job. I currently do not see how and if this makes life any easier. - -What is the benefit / reason for having the whole code base served through lambdas as opposed to individual functions? -Is there an execution time penalty if using flask/django/whatever else with serverless apps? -If this depends on the particular project, what would be the guidance on when to use a framework, and when to use individual functions?","Benefits. You can use a known concept, and adopt it in serverless. -Performance. The smaller the code is, the less RAM it takes. It must be loaded, processed, and so on. Just to process a single request? For me that was always too much. -Let's say you have a Django project that is working on elastic beanstalk, and you need some lambdas to deal with limited problems. Now. Do you want to have two separate configurations? What about common functions? - -Serverless looks nice, but... let's assume that you have permissions, so your app, for every call, will pull that stuff. Perhaps you have it cached - in redis, as the whole permissions for a user... The other option is dynamodb, which is even more expensive. Yes, there is a nice SLA, but the API is quite strange; also if you plan on keeping more data there... the more data you have, the slower it works - for the same money. In other words - if you put in more data, fetching will cost more - if you want the same speed.",0.0,False,1,5482 -2018-04-29 13:47:46.340,How to preprocess audio data for input into a Neural Network,"I'm currently developing a keyword-spotting system that recognizes digits from 0 to 9 using deep neural networks. I have a dataset of people saying the numbers (namely the TIDIGITS dataset, collected at Texas Instruments, Inc), however the data is not prepared to be fed into a neural network, because not all the audio data have the same audio length, plus some of the files contain several digits being spoken in sequence, like ""one two three"". -Can anyone tell me how I would transform these wav files into 1 second wav files containing only the sound of one digit? Is there any way to automatically do this? Preparing the audio files individually would be time expensive. -Thank you in advance!","I would split each wav by the areas of silence. Trim the silence from beginning and end. Then I'd run each one through an FFT for different sections. Smaller ones at the beginning of the sound. Then I'd normalise the frequencies against the fundamental. 
Then I'd feed the results into the NN as a 3d array of volumes, frequencies and times.",0.2012947653214861,False,1,5483 -2018-04-29 20:58:03.330,How would I generate a random number in python without duplicating numbers,"I was wondering how to generate a random 4 digit number that has no duplicate digits in python 3.6. -I could generate 0000-9999 but that would give me a number with a duplicate like 3445. Anyone have any ideas? -thanks in advance","1) Generate a random number -2) Check if there are any duplicate digits; if so, go back to 1 -3) You have a number with no duplicates +2020-05-16 20:10:07.260,Pandas: Record count inserted by Python to_sql function,"I am using the Python to_sql function to insert data in a database table from a Pandas dataframe. +I am able to insert data in the database table, but I want to know in my code how many records were inserted. +How can I know the record count of inserts (I do not want to write one more query against the database table to get the record count)? +Also, is there a way to see logs for this function's execution, like what queries were executed, etc.?","There is no way to do this, since python cannot know how many of the records being inserted were already in the table.",0.0,False,1,6756 +2020-05-18 08:41:40.830,Understanding the sync method from the python shelve library,"The python documentation says this about the sync method: + +Write back all entries in the cache if the shelf was opened with + writeback set to True. Also empty the cache and synchronize the + persistent dictionary on disk, if feasible. This is called + automatically when the shelf is closed with close(). + +I am really having a hard time understanding this. +How does accessing data from the cache differ from accessing data from disk? +And does emptying the cache affect how we can access the data stored +in a shelve?","For whoever is using the data in the Shelve object, it is transparent whether the data is cached or is on disk. If it is not in the cache, the file is read, the cache filled, and the value returned. Otherwise, the value as it is in the cache is used. +If the cache is emptied on calling sync, that means only that on the next value fetched from the same Shelve instance, the file will be read again. Since it is all automatic, there is no difference. The documentation is mostly describing how it is implemented. +If you are trying to open the same ""shelve"" file with two concurrent apps, or even two instances of shelve in the same program, chances are you are headed for big problems. Other than that, it just behaves as a ""persistent dictionary"" and that is it. +This pattern of writing to disk and re-reading from a single file makes no difference for a workload of a single user in an interactive program. 
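+A minimal sketch of these mechanics (the filename is just an example):
+import shelve
+with shelve.open('data.db', writeback=True) as db:
+    db['items'] = [1, 2]
+    db['items'].append(3)    # with writeback=True this mutation is held in the cache
+    db.sync()                # cache written back to disk and emptied here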
For a Python program running as a server with tens to thousands of clients, or even a single big-data processing script, where this could impact actual performance, Shelve is hardly a usable thing anyway.",0.0,False,1,6757 +2020-05-18 09:36:32.503,How two Django applications can use the same database for authentication,"Previously we implemented one django application, call it ""x""; it has its own database and uses django's default authentication system. Now we need to create another, related django application, call it ""y"", but the ""y"" application doesn't have its own database settings; for ""y""'s authentication we should use ""x""'s database and the existing users in ""x"". So is it possible to implement it like this? If possible, please show how we can use the same database for two separate django applications' authentication. +Sorry for my english +Thanks for spending time on my query","So, to achieve this: in your second application, add the User model in models.py and remember to keep managed=False in the User model's Meta class. +Inside your settings.py, have the same DATABASES configuration as your first application. +By doing this, you can achieve the User-model-related functionality with ease in your new application.",0.0,False,1,6758 +2020-05-18 12:02:43.860,The real difference between MEDIA_ROOT (media files) and STATIC_ROOT (static files) in python django and how to use them correctly,"What is the real difference between MEDIA_ROOT and STATIC_ROOT in python django, and how do you use them correctly? +I was just looking for the answer and I'm still confused about it; at the end of the day I got two different answers: +- The first is that MEDIA_ROOT is for storing images and maybe mp3 files, and STATIC_ROOT is for the css, js... and so on. +- The second answer is that they were only using MEDIA_ROOT in the past for static files, and it caused some errors, so eventually we are only using STATIC_ROOT. +Is one of them right? If not, please be direct and simple so everybody can understand. By how to use them correctly I mean what kind of files to put in them exactly.","Understanding the real difference between MEDIA_ROOT and STATIC_ROOT can be confusing sometimes as both of them are related to serving files. +To be clear about their differences, I could point out their uses and the types of files they serve. + +STATIC_ROOT, STATIC_URL and STATICFILES_DIRS are all used to serve the static files required for the website or application. Whereas, MEDIA_URL and MEDIA_ROOT are used to serve the media files uploaded by a user. + +As you can see, the main difference lies between media and static files. So, let's differentiate them. + +Static files are files like CSS, JS, jQuery, SCSS, and images (PNG, JPG, SVG, etc.) that are used in the development, creation and rendering of your website or application. Whereas, media files are those files that are uploaded by the user while using the website. + +So, if there is a JavaScript file named main.js which is used to give some functionality like showing a popup on button click, then it is a STATIC file. Similarly, images like the website logo, or some static images displayed in the website that the user can't change by any action, are also STATIC files. +Hence, files (as mentioned above) that are used during the development and rendering of the website are known as STATIC files and are served by STATIC_ROOT, STATIC_URL or STATICFILES_DIRS (during deployment) in Django. 
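+For reference, a typical settings.py sketch covering both kinds of files (the paths are examples; BASE_DIR comes from the default settings template):
+# settings.py
+import os
+STATIC_URL = '/static/'
+STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')   # target of collectstatic
+MEDIA_URL = '/media/'
+MEDIA_ROOT = os.path.join(BASE_DIR, 'media')          # user uploads land here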
+Now for the MEDIA files: any file that the user uploads, for example a video, image or Excel file, etc., during the normal usage of the website or application is called a MEDIA file in Django. +MEDIA_ROOT and MEDIA_URL are used to point out the location of MEDIA files stored in your application. +Hope this makes it clear.",1.2,True,1,6759 +2020-05-18 22:30:37.343,Python not starting: IDLE's subprocess didn't make connection,"When I try to open Python it gives me an error saying: +IDLE's subprocess didn't make connection. See the 'startup failure' section of the IDLE doc online +I am not sure how to get it to start. I am on the most recent version of windows, and on the most recent version of python.",Open cmd and type python to see if python was installed. If so fix your IDE. If not download and reinstall python.,0.0,False,2,6760 +2020-05-18 22:30:37.343,Python not starting: IDLE's subprocess didn't make connection,"When I try to open Python it gives me an error saying: +IDLE's subprocess didn't make connection. See the 'startup failure' section of the IDLE doc online +I am not sure how to get it to start. I am on the most recent version of windows, and on the most recent version of python.","I figured it out, thanks. All I needed to do was remove my own random.py file.",0.0,False,2,6760 +2020-05-19 04:10:54.070,Python backend -Securing REST APIs With Client Certificates,"We have a small website with an API connected using AJAX. +We do not ask for usernames and passwords or any authentication like firebase auth. +So it's like an open service and we want to avoid the service being misused. +OAuth 2 is really effective when we ask the user for credentials. +Can you suggest the security best practice and how it can be implemented in this context using python? +Thanks","Use a firewall +Allow for third-party identity providers if possible +Separate the concept of user identity and user account",0.3869120172231254,False,1,6761 +2020-05-19 13:54:18.343,How to add pylint for Django in vscode manually?,"I have created a Django project in vscode. Generally, vscode automatically prompts me to install pylint but this time it did not (or I missed it). Even though everything is running smoothly, I am still shown import errors. How do I manually install pylint for this project? +Also, in vscode I never really create a 'workspace'. I just create and open folders and that works just fine. +P.S. I'm using pipenv; I don't know how necessary that info was.","Hi, you must activate your venv first, then install pylint (pip install pylint). +In vscode: ctrl+shift+P, then type linter (choose ""python: select linter""); now you can choose your linter (pylint). +I hope it helps you",0.3869120172231254,False,1,6762 +2020-05-19 20:35:03.707,Can I execute 1 python script by 3 different caller processes at the same time with respective arguments,"I have a situation in CentOS where 3 different/independent callers will try to execute the same python script with respective command line args, e.g.: python main.py arg1, python main.py arg2, python main.py arg3 at the same time. +My question is: is it possible in the first place, or do I need to copy that python script 3 times with 3 different names to be called by each process? +If it is possible, then how should it be done so that these 3 processes will not interfere and each python script execution will be independent of the others?","All the python processes will run entirely isolated from each other, even if executing the same source file. 
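+You can see that isolation directly with a tiny sketch of main.py:
+# main.py
+import os
+import sys
+arg = sys.argv[1]
+print(f'pid={os.getpid()} working on {arg}')   # each caller gets its own pid and memory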
+If they interact with any external resource other than process memory (such as files on disk), then you may need to take measures to make sure the processes don't interfere (by making sure each instance uses a different filename, for example).",0.3869120172231254,False,1,6763 +2020-05-19 20:36:14.493,How to interpose RabbitMQ between REST client and (Python) REST server?,"Suppose I develop a REST service hosted in Apache and a Python plugin which services GET, PUT, DELETE, PATCH; and this service is consumed by an Angular client (or other REST-interacting browser technology). Then how do I make it scale-able with RabbitMQ (AMQP)? +Potential Solution #1 + +Multiple Apaches still face off against the browser's HTTP calls. +Each Apache instance uses an AMQP plugin and then posts a message to a queue +Python microservices monitor a queue and pull a message, service it and return a response +The response is passed back to the Apache plugin; in turn Apache generates the HTTP response + +Does this mean the Python microservice no longer has any HTTP server code at all? This will change that component a lot. Perhaps it's best to decide upfront if you want to use this pattern, as it seems it would be a task to rip out any HTTP server code. +Other potential solutions? I am genuinely puzzled as to how we're supposed to take a classic REST server component and upgrade it to be scale-able with RabbitMQ/AMQP with minimal disruption.","I would recommend switching from WSGI to ASGI (nginx can help here). I'm not sure why you think rabbitmq is the solution to your problem, as nothing you described seems like it would be solved by using this method. +ASGI is not supported by apache as far as I know, but it allows the server to go do work, and while it's working it can continue to service new requests that come in. (gross oversimplification) +If for whatever reason you really want to use job workers (rabbitmq, etc) then I would suggest returning to the user a ""token"" (really just the job_id); they can then call with that token, and it will report back either the current job status or the result",1.2,True,1,6764 +2020-05-20 07:41:11.573,Create package with dependencies,"Do you know how to create a package from my python application to be installable on Windows without an internet connection? I want, for example, to create a tar.gz file with my python script and all dependencies. Then install such a package on a windows machine with python3.7 already installed. I tried setuptools but I don't see a possibility to include dependencies. Can you help me?",There are several Java tutorials on how to make offline installers. You have your python project and just use a preprogrammed Java installer to put all of the 'goodies' inside of it. Then you have an installer for windows. And it's an executable.,-0.3869120172231254,False,1,6765 +2020-05-20 08:14:01.817,Debug function not appearing in the menu bar in VS Code. I am using it for Python,"I am new at learning Python and I am trying to set up the environment on VS Code. However, the Debug icon and function are not on the menu bar. How do I rectify this, please?",Right click on the menu bar; you can select which menus are active. 
It's also called Run I believe.,0.0,False,1,6766 +2020-05-21 08:14:07.720,How can I solve AttributeError: module 'dis' has no attribute 'COMPILER_FLAG_NAMES' in anaconda3/envs/untitled/lib/python3.7/inspect.py,"I am trying to use from scipy.spatial import distance as dist, however it gives me File ""/home/afeyzadogan/anaconda3/envs/untitled/lib/python3.7/inspect.py"", line 56, in + for k, v in dis.COMPILER_FLAG_NAMES.items(): +AttributeError: module 'dis' has no attribute 'COMPILER_FLAG_NAMES' +How can I solve it? +''' +for k, v in dis.COMPILER_FLAG_NAMES.items(): + mod_dict[""CO_"" + v] = k +'''","We ran across this issue in our code with the same exact AttributeError. +Turns out it was a totally unrelated file in the current directory called dis.py.",0.3869120172231254,False,1,6767 +2020-05-20 13:37:42.400,save a figure with a precise pixel size with savefig,"How can I save a plot as a 750x750 px image using savefig? +The only useful parameter is DPI, but I don't understand how I can use it to set a precise size","I added plt.tight_layout() before savefig(), and it solved the trimming issue I had. Maybe it will help yours as well. +I also set the figure size at the beginning: rcParams['figure.figsize'] = 40, 12 (you can set your own width and height)",0.0,False,1,6768 +2020-05-20 19:33:34.343,Call function when new result has been returned from API,"There is an API that I am using from another company that returns the IDs of the last 100 purchases that have been made on their website. +I have a function change_status(purchase_id) that I would like to call whenever a new purchase has been made. I know a workaround on how to do it: do a while True loop, keep an index last_modified_id for the last modified status of a purchase, loop over all purchases from the latest to the earliest and stop once the current id is the same as last_modified_id, and then sleep for 10 seconds after each iteration. +Is there a better way to do it using events in python? Like calling the function change_status(purchase_id) when the result of that API has changed. I have been searching around for a few days but could not find anything about events and an API. Any suggestion or idea helps. Posting what I have done is usually good on stackoverflow, but I don't have anything about events. The loop solution is totally different from the events solution. +Thank you","The only way to do this is to keep calling the API and watching for changes from the previous response, unless... +The API provider might have an option to call your API when something is updated on their side. It is a similar mechanism to push notifications. If they provide a method to do that, you can create an endpoint on your side to do whatever you need to do when a new purchase is made, and provide them the endpoint. However, as far as I know, most API providers do not do this, and the first method is your only option. +Hope this helps!",1.2,True,1,6769 +2020-05-20 19:55:21.393,Tips to practice matplotlib,"I've been studying python for data science for about 5 months now. But I get really stuck when it comes to matplotlib. There's always so many options to do anything, and I can't see a well defined path to do anything. Does anyone have this problem too and knows how to deal with it?","In programming in general, ""there's always so many options to do anything"". 
+I recommend that you read the library docs and understand its functions and classes at a glance, then go and solve some problems from websites, or take on a real project if you can. If your code works, do not worry and go ahead. +After this trial and error you will have a lot of real ideas about various problems, and you will recognize the differences between these options and their pros and cons. Like me three years ago.",0.0,False,2,6770 +2020-05-20 19:55:21.393,Tips to practice matplotlib,"I've been studying python for data science for about 5 months now. But I get really stuck when it comes to matplotlib. There's always so many options to do anything, and I can't see a well defined path to do anything. Does anyone have this problem too and knows how to deal with it?","I think your question is stating that you are bored and do not have any projects to make. If that is correct, there are many datasets available on sites like Kaggle that have open-source datasets for programmers to practice on.",0.0,False,2,6770 +2020-05-21 08:14:07.720,OnetoOne (primary_key=True) to ForeignKey in Django,"I have a OnetoOne field with primary_key=True in a model. +Now I want to change that to a ForeignKey but cannot since there is no 'id'. +From this: + +user = models.OneToOneField(User, primary_key=True, on_delete=models.CASCADE) + +To this: + +user1 = models.ForeignKey(User, related_name='questionnaire', on_delete=models.CASCADE) + +Showing this while makemigrations: + +You are trying to add a non-nullable field 'id' to historicaluserquestionnaire without a default; we can't do that (the database needs something to populate existing rows). + Please select a fix: + 1) Provide a one-off default now (will be set on all existing rows with a null value for this column) + 2) Quit, and let me add a default in models.py + +So how to do that? +Thanks!","The problem is that you're trying to remove the primary key, but Django is then going to add a new primary key called ""id"". This is non-nullable and unique, so you can't really provide a one-off default. +The easiest solution is to just create a new model and copy your table over in a SQL migration, using the old user_id to populate the id field. Be sure to reset your table sequence to avoid collisions.",0.1352210990936997,False,1,6771 +2020-05-23 16:28:37.970,Deploy python flask project into a website,"So I recently finished my python project, grabbing values from an API and putting them into my website. +Now I have no clue how to actually start the website (finding a host) and make it accessible to other people, so I thought turning here might find the solution. +I have done a good amount of research, tried ""pythonanywhere"" and ""google app engine"" but couldn't really find a solution. +I was hoping to be able to use ""hostinger"" as a host, as they have a good price and a good host. I contacted them but they said that they couldn't, though I could upload it to a VPS (which they have). Would it work for me to upload my files to this VPS and therefore get it to a website? Or should I use another host?","A VPS would work, but you'll need to understand basic linux server admin to get things set up properly. +Sounds like you don't have any experience with server admin, so something like App Engine would be great for you. 
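+The app itself stays an ordinary Flask entrypoint (a sketch; the standard GAE Python runtime looks for an app object in main.py by default):
+# main.py
+from flask import Flask
+app = Flask(__name__)
+@app.route('/')
+def index():
+    return 'Hello from App Engine'
+if __name__ == '__main__':
+    app.run(host='127.0.0.1', port=8080, debug=True)   # local testing only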
There are a ton of tutorials on the internet for deploying flask to GAE.",0.0,False,1,6772 +2020-05-24 19:16:38.693,"How can I change dtype from object to float64 in a column, using python?","I extracted some data from investing but the column values are all dtype = object, so I can't work with them... +How should I convert object to float? +(2558 6.678,08 2557 6.897,23 2556 7.095,95 2555 7.151,21 2554 7.093,34 ... 4 4.050,38 3 4.042,63 2 4.181,13 1 4.219,56 0 4.223,33 Name: Alta, Length: 2559, dtype: object) +What I want is: +2558 6678.08 2557 6897.23 2556 7095.95 2555 7151.21 2554 7093.34 ... 4 4050.38 3 4042.63 2 4181.13 1 4219.56 0 4223.33 Name: Alta, Length: 2559, dtype: float +Tried to use a function which would replace , with . +def clean(x): x = x.replace(""."", """").replace("","",""."") +but it doesn't work because the dtype is object +Thanks!","That is because there is a comma in the value. +Because a float cannot have a comma (and the dot here is a thousands separator), you need to first strip the dots, then replace the comma with a dot, and then convert to float: +result[col] = result[col].str.replace('.', '', regex=False).str.replace(',', '.', regex=False).astype(float)",0.0,False,1,6773 +2020-05-25 14:54:36.873,Secure password store for Python CGI (Windows+IIS+Windows authentication),"I need to develop a python cgi script for a server run on Windows+IIS. The cgi script is run from a web page with Windows authentication. It means the script is run under different users from Windows active directory. +I need to use logins/passwords in the script and have no idea how to store the passwords securely, because keyring stores data for a certain user only. Is there a way to access password data from keyring for all active OS users? +I also tried to use os.environ variables, but they are stored for one web session only.",The only thing I can think of here is to run your script as a service account (a generic AD account that is used just for this service) instead of using windows authentication. Then you can log into the server as that service account and set up the Microsoft Credential Manager credentials that way.,0.3869120172231254,False,1,6774 +2020-05-26 05:06:08.297,How do I add a PATH variable in the user variables of the environment variables?,"I have a path variable in the system variables, but how do I add a path variable in the user variables section, since I don't have any at the moment? +If there isn't a path variable in the user variables, will it affect anything in any way? +How much will the values of the path variables differ between the one in the system variables and the one in the user variables if there is only one user present?","To add a new variable in user variables: + +1. Click the New button below the user variables. + +2. Then a popup window will appear asking you to type the new variable name and its value; click OK after entering the name and value. +That's how you can add a new variable in user variables. +You should have a path variable in user variables also because, for example, while installing python you have a choice to add the python path to variables; here the path will be added in the user variable 'path'.",0.0,False,1,6775 +2020-05-26 18:44:36.240,Best way to load a Pillow Image object from binary data in Python?,I have a program that modifies PNG files with Python's Pillow library. I was wondering how I could load binary data into a Pillow Image object. I receive the PNG over a network as binary data (e.g. the data looks like b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR...'). 
What is the best way to accomplish this task?,I'd suggest receiving the data into a BytesIO object from the io standard library package. You can then treat that as a file-like object for the purposes of Pillow.,0.3869120172231254,False,1,6776 +2020-05-27 01:06:20.607,Clear all text in separate file,"I want to know how to delete/clear all text in a file from inside another python file. I looked through stack overflow and could not find an answer; all help appreciated. Thanks!","Try: open('yourfile.txt', 'w').close()",0.1352210990936997,False,1,6777 +2020-05-27 08:12:04.827,Loss function and data format for training a 'categorical input' to 'categorical output' model?,"I am trying to train a model for autonomous driving that converts input from the front camera to a bird's eye view image. +The input and output are both segmentation masks with shape (96, 144) where each pixel has a range from 0 to 12 (each number represents a different class). +Now my question is how should I preprocess my data and which loss function should I use for the model (I am trying to use a Fully Convolutional Network). +I tried to convert the inputs and outputs to shape (96, 144, 13) using keras' to_categorical utility so each channel has 0s and 1s representing a specific mask of a category. I used binary_crossentropy and sigmoid activation for the last layer with this, and the model seemed to learn and the loss started reducing. +But I am still unsure if this is the correct way or if there are any better ways. +What should be the: + +input and output data format +activation of the last layer +loss function","I found the solution: use categorical crossentropy with softmax activation at the last layer. Use the same data format as specified in the question.",1.2,True,1,6778 +2020-05-27 12:05:54.603,how to compile python kivy app for ios on Windows 10 using buildozer?,"I successfully compiled the app for Android, and now I want to compile the python kivy app for iOS using buildozer. My operating system is Windows 10, so I don't know how to compile the file for iOS. I downloaded the ubuntu console from the microsoft store, which helped me to compile the apk file. How do I compile the file for iOS? I hope you can help me...",You can only deploy to iOS if you're working on a MacOS machine.,0.0,False,1,6779 +2020-05-27 12:06:03.667,How to copy and paste dataframe rows into a web page textarea field,"I have a dataframe with a single column ""Cntr_Number"" with x no of rows. +What I am trying to achieve is using selenium to copy and paste the data into the web page textarea. +The constraint is that the web page text area only accepts 20 rows of data per submission. +So how can I implement it using a while loop or another method? + +Copy and paste the first 20 rows of data and click on the ""Submit"" +button +Copy and paste the next 20 rows of data and click on the +""Submit"" button + +repeat the cycle until the last row. +Sorry, I don't have any sample code to show, but this is what I'm trying to achieve. +I would appreciate it if I could have some sample code on how to do the implementation.","The better approach will be to capture all the data in a list. Later, while pasting, you can check the length of the list, iterate through it, and paste the data 20 rows at a time into the text area. 
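+Something along these lines should work (a rough sketch; driver and df are assumed to exist already, and the element locators are hypothetical):
+values = df['Cntr_Number'].astype(str).tolist()   # the whole column as a list
+for i in range(0, len(values), 20):               # step through it 20 rows at a time
+    chunk = values[i:i + 20]
+    textarea = driver.find_element_by_id('cntr_input')    # hypothetical locator
+    textarea.clear()
+    textarea.send_keys('\n'.join(chunk))
+    driver.find_element_by_id('submit_btn').click()       # hypothetical locator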
I hope this will solve your problem.",0.3869120172231254,False,1,6780
+2020-05-27 12:11:19.710,"Convert the string ""%Y-%M-%D"" to ""YYYY-MM-DD"" for use in openpyxl NamedStyle number_format","TLDR: This is not a question about how to change the way a date is converted to a string, but how to convert between the two format types - these being ""%Y"" and ""YYYY"", the first having a % and the second having 4 x Y.
+I have the following date format ""%Y-%M-%D"" that is used throughout an app. I now need to use this within an openpyxl NamedStyle as the number_format option. I can't use it directly as openpyxl doesn't like the format; it needs to be in ""YYYY-MM-DD"" (Excel) format.
+
+Do these two formats have names? (so I can Google a little more)
+Short of creating a lookup table for each combination of %Y or %M to Y and M, is there a conversion method? Maybe in openpyxl? I'd prefer not to use an additional library just for this!
+
+TIA!","Sounds like you are looking for a mapping between printf-style and Excel formatting. Individual date formats don't have names. And, due to the way Excel implements number formats, I can't think of an easy way of covering all the possibilities. NamedStyles generally refer to a collection of formatting options such as font and border, not just the number format.",0.3869120172231254,False,1,6781
+2020-05-27 14:20:48.347,How do iterators know what item comes next?,"As far as I understood it, iterators use lazy evaluation, meaning that they don't actually save each item in memory, but just contain the instructions on how to generate the next item.
+However, let's say I have some list [1,2,3,4,5] and convert it into an iterator doing a = iter([1,2,3,4,5]).
+Now, if iterators are supposed to save memory space because, as said, they contain the instructions on how to generate the next item that is requested, how do they do it in this example? How is the iterator a we created supposed to know what item comes next, without saving the entire list to memory?","Just think for a moment about this scenario: you have a file of over a million elements, and loading the whole list of elements into memory would be really expensive. By using an iterator, you can avoid making the program heavy by opening the file once and extracting only one element for the computation. You would save a lot of memory.
+(In your exact example there is nothing to generate lazily: iter() on a list returns an iterator that simply keeps a reference to the existing list plus an index of the next position, so no copy of the elements is made.)",0.0,False,1,6782
+2020-05-27 15:21:31.810,How does module installation work in Python?,"[On a mac]
+I know I can get packages by doing pip install etc.
+But I'm not entirely sure how all this works.
+Does it matter which folder my terminal is in when I write this command?
+What happens if I write it in a specific folder?
+Does it matter if I do pip/pip3?
+I'm doing a project which had a requirements file.
+So I went to the folder the requirements txt was in and did pip install requirements, but there was a specific tensorflow version, which only works for python 3.7. So I did """"""python3.7 -m pip install requirements"""""" and it worked (I'm not sure why). Then I got jupyter with brew and ran a notebook which used one of the modules in the requirements file, but it says there is no such module.
+I suspect packages are linked to specific versions of python and I need to be running that version of python with my notebook, but I'm really not sure how. Is there some better way to set up my environment than just blindly pip installing stuff in random folders?
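+One thing I considered but don't fully understand: would a per-project virtual environment be the cleaner setup? Something like this is what I pieced together from blog posts (the version number and file names are just from my project):
+python3.7 -m venv .venv           # environment tied to the python that creates it
+source .venv/bin/activate         # use it for this shell session
+pip install -r requirements.txt   # now pip means the venv's own pip
+pip install jupyter               # jupyter installed into the same environment
+jupyter notebook                  # so the notebook sees the same modules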
+I'm sorry if this is not a well formed question; I will fix it if you let me know how.","There may be a difference between pip and pip3, depending on what you have installed on your system. pip is likely the pip used for python2, while pip3 is used for python3.
+The easiest way to tell is to simply execute python and see what version starts. python will typically run the older version 2.x of python, and python3 is required to run python version 3.x. If you install into the python2 environment (using pip install or python -m pip install), the libraries will be available to the python version that runs when you execute python. To install them into a python3 environment, use pip3 or python3 -m pip install.
+Basically, pip is writing module components into a library path, where import can find them. To do this for ALL users, use python3 or pip3 from the command line. To test it out, or use it on an individual basis, use a virtual environment as @Abhishek Verma said.",0.0,False,1,6783
+2020-05-27 16:15:33.287,How to display text on gmaps in Jupyter Python Notebook?,"Background: I'm using the gmaps package in a Jupyter Python notebook. I have 2 points, A (which is a marker) and B (which is a symbol), which are connected by a line.
+Question: I want to somehow display text on this line that represents the distance between A and B. I have already calculated the distance between A and B but cannot display the text on the map. Is there any way to display text on the line?",I found that gmaps doesn't have this feature, so I switched to the folium package, which has labels and popups to display text on hovering over and clicking the line.,1.2,True,1,6784
+2020-05-28 11:22:01.147,Python ValueError if running on different laptop,"I've just built a function that is working fine on my laptop (a Mac, but I'm working on a Windows virtual machine of the office laptop), but when I pass it to a colleague of mine, it raises a ValueError:
+""You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat""
+The line of the code that raises the error is a simple merge that on my laptop works perfectly:
+df = pd.merge(df1, df2, on = ""x"", how = ""outer"")
+The input files are exactly the same (taken directly from the same remote folder).
+I totally don't know how to fix the problem, and I don't understand why it works on my laptop (even if I open a new script or restart the kernel, so no stored variables around) and on my colleague's machine it doesn't.
+Thanks for your help!","My guess (a wild guess) is that the data from the 2 tab-separated CSV files (i.e., TSV files) is somehow converted using different locales on your computer and your colleague's computer.
+Check if you have locale-dependent operations that could cause a number with the ""wrong"" decimal separator not to be recognized as a number.
+This should not happen in pd.read_csv() because the decimal parameter has a well-defined default value of ""."".
+But from an experience I had with timestamps in another context, one timestamp with a ""bad"" format can cause the whole column to be of the wrong type.
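+A quick way to confirm this before the merge (the column name ""x"" is from your snippet, the rest is illustrative):
+print(df1['x'].dtype, df2['x'].dtype)  # if one side says object and the other int64, that is the mismatch
+df1['x'] = pd.to_numeric(df1['x'])  # errors='raise' is the default, so this fails loudly on the culprit value
+df2['x'] = pd.to_numeric(df2['x'])  # after this both sides are numeric and should merge the same on both machines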
So if just one number of just one of the two files, in the column you are merging on, has a decimal separator, and this decimal separator is only recognized as such on your machine, only on your machine the join will succeed (I'm supposing that pandas can join numeric columns even if they are of different type).",0.0,False,1,6785 +2020-05-28 19:55:54.440,"Can terraform run ""apply"" for multiple infrastructure/workspace in parallel?","We have one terraform instance and script which could create infra in azure. We would like to use same scripts to create/update/destroy isolated infra for each one of our customers on azure . We have achieved this by assigning one workspace for each client,different var files and using backend remote state files on azure. +Our intend is to create a wrapper python program that could create multiple threads and trigger terraform apply in parallel for all workspaces. This seems to be not working as terraform runs for one workspace at a time. +Any suggestions/advice on how we can achieve parallel execution of terraform apply for different workspaces?","It's safe to run multiple Terraform processes concurrently as long as: + +They all have totally distinct backend configurations, both in terms of state storage and in terms of lock configuration. (If they have overlapping lock configuration then they'll mutex each other, effectively serializing the operations in spite of you running multiple copies.) +They work with an entirely disjoint set of remote objects, including those represented by both managed resources (resource blocks) and data resources (data blocks). + +Most remote APIs do not support any sort of transaction or mutex concept directly themselves, so Terraform cannot generally offer fine-grained mutual exclusion for individual objects. However, multiple runs that work with entirely separate remote objects will not interact with one another. +Removing a workspace (using terraform workspace delete) concurrently with an operation against that workspace will cause undefined behavior, because it is likely to delete the very objects Terraform is using to track the operation. +There is no built-in Terraform command for running multiple operations concurrently, so to do so will require custom automation that wraps Terraform.",0.9950547536867304,False,1,6786 +2020-05-28 20:40:14.903,How do you request device connection string in azure using python and iotHub library?,I am wondering how can you get device connection string from IotHub using python in azure? any ideas? the device object produced by IoTHubRegisterManager.Create_device_with_sas(...) doesn't seem to contain the property connection string.,"You can get a device connection string from the device registry. However, it is not recommended that you do that on a device. The reason being is that you will need the IoT hub connection string to authenticate with your hub so that you can read the device registry. If your device is doing that and it is compromised then the perpetrator now has your IoT hub connection string and could cause all kinds of mayhem. You should specifically provide each device instance with its connection string. +Alternatively, you could research the Azure DPS service which will provide you with device authentication details in a secure manner.",0.0,False,1,6787 +2020-05-29 21:43:13.640,I am not allowed to run a python executable on other pcs,"I was doing a game in tkinter, then I make it executable with PyInstaller and sent it to my friends so they can run it and tell me how it feels. 
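+For reference, this is roughly how I built it (the exact flags are from memory, so they may not be word for word what I ran):
+pyinstaller --onefile --noconsole game.py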
+It seems that they could download the file, but can't open it because windows forbade them telling that it's not secure and not letting them choose to assume the risk or something. +They tried to run as administrator and still nothing changed. +What should I do or what I should add to my code so that windows can open it without problem and why windows opens other executable files without saying that(current error that my executable gets)?","compress it as a .zip file and then it will most probably work +or install NSIS and create a windows installer for it.",0.0,False,1,6788 +2020-05-30 06:09:20.403,how to implement csrf without csrf token in django,"In django, if I want to use csrf token, I need to imbed a form with csrf token in django template. However as a backend-engineer I am co-working with a front-end engineer whose code is not available for me. So I caanot use the template. In this case, if I want still the csrf function. what should I do?","you should ask the coworker to embed the csrf token in the form he is sending you +you can get it from document.Cookies if he doesnt want to or cannot use the {% csrf %} tag",0.0,False,1,6789 +2020-05-30 08:51:11.993,How to analyze crawl results,"I crawled and saved the user's website usage lists. +I want to analyze the results of the crawl, but I wonder how there is a way. +First of all, what I thought was Word Cloud. +I am looking for a way to track user's personal preferences with user's computer history. +I want a way to visualize personal tendencies, etc. at a glance. Or I'm looking for a way to find out if there's no risk of suicide or addiction as a result of the search. +thank you.","If you want to visualize data and make analysis on it matplotlib would be good start , again it depends a lot on your data. Matplotlib and seaborn are plotting libraries that are good for representing quantitative data and get some basic analysis at least.",0.0,False,1,6790 +2020-06-01 16:31:56.840,Surfaces or Sprites in Pygame?,"Good evening, I'm making a platformer and would like to know when you should use one of the both. +For example for: +1)The player controlled character +2)The textured tiles that make up the level +3)The background +Should/Could you make everything with sprites ? +I just want to know how you would do it if you were to work on a pygame project. +I ask this because I see lots of pygame tutorials that explain adding textures by using surfaces but then in other tutorials, they use sprite objects instead.","Yes you could make everything including the background with sprites. It usually does not make sense for the background though (unless you;re doing layers of some form). +The rest often make senses as sprite, but that depends on your situation.",1.2,True,1,6791 +2020-06-01 22:09:24.457,"Threading in Python, ""communication"" between threads","I have two functions: def is_updated_database(): is checking if database is updated and the other onedef scrape_links(database): is scraping through set of links(that it downloaded from aforementioned database). +So what I want do is when def is_updated_database(): finds that the updated is downloaded, I want to stop def scrape_links(database): and reload it with a new function parameter(database which would be a list of new links). 
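+To make the structure concrete, here is a stripped-down skeleton of my current code (bodies elided, names as above):
+import threading
+
+database = []  # the list of links loaded from the database at startup
+
+def is_updated_database():
+    while True:
+        ...  # keep checking the database; when an update lands I want the scraper restarted
+
+def scrape_links(database):
+    for link in database:
+        ...  # scrape each link in the current list
+
+threading.Thread(target=is_updated_database).start()
+threading.Thread(target=scrape_links, args=(database,)).start()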
+My attempt: I know how to run two threads, but I have no idea how to ""connect"" them, so that if something happens to one then something should happen to another one.","Well, one way to solve this problem, may be the checking of database state, and if something new appears there, you could return the new database object, and after that scrape the links, probably this is losing it's multithreading functionality, but that's the way it works. +I don't think that any code examples are required here for you to understand what I mean.",0.0,False,1,6792 +2020-06-02 05:00:54.747,"Given the dataset, how to select the learning algorithm?","I've to build an ML model to classify sentences into different categories. I have a dataset with 2 columns (sentence and label) and 350 rows i.e. with shape (350, 2). To convert the sentences into numeric representation I've used TfIdf vectorization, and so the transformed dataset now has 452 columns (451 columns were obtained using TfIdf, and 1 is the label) i.e. with shape (350, 452). More generally speaking, I have a dataset with a lot more features than training samples. In such a scenario what's the best classification algorithm to use? Logistic Regression, SVM (again what kernel?), neural networks (again which architecture?), naive Bayes or is there any other algorithm? +How about if I get more training samples in the future (but the number of columns doesn't increase much), say with a shape (10000, 750)? +Edit: The sentences are actually narrations from bank statements. I have around 10 to 15 labels, all of which I have labelled manually. Eg. Tax, Bank Charges, Loan etc. In future I do plan to get more statements and I will be labelling them as well. I believe I may end up having around 20 labels at most.","With such a small training set, I think you would only get any reasonable results by getting some pre-trained language model such as GPT-2 and fine tune to your problem. That probably is still true even for a larger dataset, a neural net would probably still do best even if you train your own from scratch. Btw, how many labels do you have? What kind of labels are those?",0.0,False,1,6793 +2020-06-02 06:45:38.810,What is the most efficient way to push and pop a list in Python?,"In Python how do I write code which shifts off the last element of a list and adds a new one to the beginning - to run as fast as possible at execution? +There are good solutions involving the use of append, rotate etc but not all may translate to fast execution.","Don't use a list. +A list can do fast inserts and removals of items only at its end. You'd use pop(-1) and append, and you'd end up with a stack. +Instead, use collections.deque, which is designed for efficient addition and removal at both ends. Working on the ""front"" of a deque uses the popleft and appendleft methods. Note, ""deque"" means ""double ended queue"", and is pronounced ""deck"".",0.9950547536867304,False,1,6794 +2020-06-02 16:26:04.147,How to set tkinter Entry Border Radius,"This is my first question to here. I don't know how to set Border Radius for Tkinter Entry, Thanks for your Help!","There is no option to set a border radius on the tkinter or ttk Entry widgets, or any of the other widgets in those modules. Tkinter doesn't support the concept of a border radius.",1.2,True,1,6795 +2020-06-02 18:46:29.100,A new table for each user created,I am using Django 3.0 and I was wondering how to create a new database table linked to the creation of each user. 
In a practical sense: I want an app that lets users add certain stuff to a list but each user to have a different list where they can add their stuff. How should I approach this as I can't seem to find the right documentation... Thanks a lot !!!,"This is too long for a comment. +Creating a new table for each user is almost never the right way to solve a problem. Instead, you just have a userStuff table that maintains the lists. It would have columns like: + +userId +stuffId + +And, if you want the stuff for a given user, just use a where clause.",1.2,True,1,6796 +2020-06-02 19:12:03.813,How to enable PyCharm autocompletion for imported library (Discord.py),How do I enable method autocompletion for discord.py in PyCharm? Until now I've been doing it the hard way by looking at the documentation and I didn't even know that autocomplete for a library existed. So how do I enable it?,"The answer in my case was to first create a new interpreter as a new virtual environment, copy over all of the libraries I needed (there is an option to inherit all of the libraries from the previous interpreter while setting up the new one) and then follow method 3 from above. I hope this helps anyone in the future!",1.2,True,1,6797 +2020-06-03 18:20:40.193,How to install turicreate on windows 7?,"Can anyone tell me how to install turicreate on windows 7? I am using python of version 3.7. I have tried using pip install -U turicreate to install but failed. +Thanks in advance","I am quoting from Turicreate website: +Turi Create supports: + +macOS 10.12+ +Linux (with glibc 2.12+) +Windows 10 (via WSL) + +System Requirements + +Python 2.7, 3.5, or 3.6 +Python 3.7 macOS only +x86_64 architecture + +So Windows 7 is not supported in this case.",0.0,False,1,6798 +2020-06-04 04:50:55.740,Identify domain related important keywords from a given text,"I am relatively new to the field of NLP/text processing. I would like to know how to identify domain-related important keywords from a given text. +For example, if I have to build a Q&A chatbot that will be used in the Banking domain, the Q would be like: What is the maturity date for TRADE:12345 ? +From the Q, I would like to extract the keywords: maturity date & TRADE:12345. +From the extracted information, I would frame a SQL-like query, search the DB, retrieve the SQL output and provide the response back to the user. +Any help would be appreciated. +Thanks in advance.","So, this is where the work comes in. +Normally people start with a stop word list. There are several, choose wisely. But more than likely you'll experiment and/or use a base list and then add more words to that list. +Depending on the list it will take out + +""what, is, the, for, ?"" + +Since this a pretty easy example, they'll all do that. But you'll notice that what is being done is just the opposite of what you wanted. You asked for domain-specific words but what is happening is the removal of all that other cruft (to the library). +From here it will depend on what you use. NLTK or Spacy are common choices. Regardless of what you pick, get a real understanding of concepts or it can bite you (like pretty much anything in Data Science). +Expect to start thinking in terms of linguistic patterns so, in your example: + +What is the maturity date for TRADE:12345 ? + +'What' is an interrogative, 'the' is a definite article, 'for' starts a prepositional phrase. +There may be other clues such as the ':' or that TRADE is in all caps. But, it might not be. 
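+To make that concrete, a minimal sketch with NLTK (the filtering rule is illustrative, not a production grammar, and the corpora need a one-time nltk.download of 'punkt', 'stopwords' and 'averaged_perceptron_tagger'):
+from nltk import pos_tag, word_tokenize
+from nltk.corpus import stopwords
+
+question = 'What is the maturity date for TRADE:12345 ?'
+tokens = word_tokenize(question)
+keep = [t for t in tokens if t.lower() not in stopwords.words('english') and t not in '?:']
+print(keep)             # roughly ['maturity', 'date', 'TRADE', '12345']
+print(pos_tag(tokens))  # the interrogative and the preposition show up as tags like WP and IN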
+That should get you started but you might look at some of the other StackExchange sites for deeper expertise. +Finally, you want to break a question like this into more than one question (assuming that you've done the research and determined the question hasn't already been asked -- repeatedly). So, NLTK and NLP are decently new, but SQL queries are usually a Google search.",0.0,False,1,6799 +2020-06-04 12:37:35.410,Devpi REST API - How to retrieve versions of packages,"I'm trying to retrieve versions of all packages from specific index. I'm trying to sending GET request with /user/index/+api suffix but it not responding nothing intresting. I can't find docs about devpi rest api :( +Has anyone idea how could I do this? +Best regards, Matt.",Simply add header Accept: application/json - it's working!,1.2,True,1,6800 +2020-06-04 13:32:52.410,Use HTML interface to control a running python script on a lighttpd server,"I am trying to find out what the best tool is for my project. +I have a lighttpd server running on a raspberry pi (RPi) and a Python3 module which controls the camera. I need a lot of custom control of the camera, and I need to be able to change modes on the fly. +I would like to have a python script continuously running which waits for commands from the lighttpd server which will ultimately come from a user interacting with an HTML based webpage through an intranet (no outside connections). +I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. I can't separate them into multiple functions because only one script can control the camera at a time. +Is the right solution to set up a Flask app and have the lighttpd send requests there, or is there a better tool for this?","You have several questions merged into one, and some of them are opion based questions as such I am going to avoid answering those. These are the opinion based questions. + +I am trying to find out what the best tool is for my project. +Is the right solution to set up a Flask app and have the lighttpd send requests there +Is there a better tool for this? + +The reason I point this out is not because your question isnn't valid but because often times questions like these will get flagged and/or closed. Take a look at this for future referece. +Now to answer this question: +"" I don't know how to interact with the script once it is actually running to execute individual functions"" +Try doing it this way: + +Modify your script to use threads and/or processes. +You will have for example a continously running thread which would be the camera. +You would have another non blocking thread listening to IO commands. +Your IO commands would be comming through command line arguments. +Your IO thread upon recieving an IO command would redirect your running camera thread to a specific function as needed. + +Hope that helps and good luck!!",0.0,False,2,6801 +2020-06-04 13:32:52.410,Use HTML interface to control a running python script on a lighttpd server,"I am trying to find out what the best tool is for my project. +I have a lighttpd server running on a raspberry pi (RPi) and a Python3 module which controls the camera. I need a lot of custom control of the camera, and I need to be able to change modes on the fly. 
+I would like to have a python script continuously running which waits for commands from the lighttpd server which will ultimately come from a user interacting with an HTML based webpage through an intranet (no outside connections). +I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. I can't separate them into multiple functions because only one script can control the camera at a time. +Is the right solution to set up a Flask app and have the lighttpd send requests there, or is there a better tool for this?","I have used Flask in the past to control a running script, and I have used FastCGI to execute scripts. -OR -Generate it one digit at a time from a list, removing the digit from the list at each iteration. - -Generate a list with numbers 0 to 9 in it. -Create two variables, the result holding value 0, and multiplier holding 1. -Remove a random element from the list, multiply it by the multiplier variable, add it to the result. -multiply the multiplier by 10 -go to step 3 and repeat for the next digit (up to the desired digits) -you now have a random number with no repeats.",-0.3869120172231254,False,1,5484 -2018-04-30 15:12:36.730,Keras Neural Network. Preprocessing,"I have this doubt when I fit a neural network in a regression problem. I preprocessed the predictors (features) of my train and test data using the methods of Imputers and Scale from sklearn.preprocessing,but I did not preprocessed the class or target of my train data or test data. -In the architecture of my neural network all the layers has relu as activation function except the last layer that has the sigmoid function. I have choosen the sigmoid function for the last layer because the values of the predictions are between 0 and 1. -tl;dr: In summary, my question is: should I deprocess the output of my neuralnet? If I don't use the sigmoid function, the values of my output are < 0 and > 1. In this case, how should I do it? -Thanks","Usually, if you are doing regression you should use a linear' activation in the last layer. A sigmoid function will 'favor' values closer to 0 and 1, so it would be harder for your model to output intermediate values. -If the distribution of your targets is gaussian or uniform I would go with a linear output layer. De-processing shouldn't be necessary unless you have very large targets.",0.0,False,1,5485 -2018-05-01 02:43:55.537,How to calculate the HMAC(hsa256) of a text using a public certificate (.pem) as key,"I'm working on Json Web Tokens and wanted to reproduce it using python, but I'm struggling on how to calculate the HMAC_SHA256 of the texts using a public certificate (pem file) as a key. -Does anyone know how I can accomplish that!? -Tks","In case any one found this question. The answer provided by the host works, but the idea is wrong. You don't use any RSA keys with HMAC method. The RSA key pair (public and private) are used for asymmetric algorithm while HMAC is symmetric algorithm. -In HMAC, the two sides of the communication keep the same secret text(bytes) as the key. It can be a public_cert.pem as long as you keep it secretly. 
But a public.pem is usually shared publicly, which makes it unsafe.",0.3869120172231254,False,1,5486 -2018-05-01 05:40:22.107,How to auto scale in JES,"I'm coding watermarking images in JES and I was wondering how to Watermark a picture by automatically scaling a watermark image? -If anyone can help me that would be great. -Thanks.","Ill start by giving you a quote from the INFT1004 assignment you are asking for help with. -""In particular, you should try not to use code or algorithms from external sources, and not to obtain help from people other than your instructors, as this can prevent you from mastering these concepts"" -It specifically says in this assignment that you should not ask people online or use code you find or request online, and is a breach of the University of Newcastle academic integrity code - you know the thing you did a module on before you started the course. A copy of this post will be sent along to the course instructor.",0.0,False,1,5487 -2018-05-01 13:33:54.250,Multi-label classification methods for large dataset,"I realize there's another question with a similar title, but my dataset is very different. -I have nearly 40 million rows and about 3 thousand labels. Running a simply sklearn train_test_split takes nearly 20 minutes. -I initially was using multi-class classification models as that's all I had experience with, and realized that since I needed to come up with all the possible labels a particular record could be tied to, I should be using a multi-label classification method. -I'm looking for recommendations on how to do this efficiently. I tried binary relevance, which took nearly 4 hours to train. Classifier chains errored out with a memory error after 22 hours. I'm afraid to try a label powerset as I've read they don't work well with a ton of data. Lastly, I've got adapted algorithm, MlkNN and then ensemble approaches (which I'm also worried about performance wise). -Does anyone else have experience with this type of problem and volume of data? In addition to suggested models, I'm also hoping for advice on best training methods, like train_test_split ratios or different/better methods.","20 minutes for this size of a job doesn't seem that long, neither does 4 hours for training. -I would really try vowpal wabbit. It excels at this sort of multilabel problem and will probably give unmatched performance if that's what you're after. It requires significant tuning and will still require quality training data, but it's well worth it. This is essentially just a binary classification problem. An ensemble will of course take longer so consider whether or not it's necessary given your accuracy requirements.",1.2,True,1,5488 -2018-05-01 20:23:34.880,PULP: Check variable setting against constraints,"I'm looking to set up a constraint-check in Python using PULP. Suppose I had variables A1,..,Xn and a constraint (AffineExpression) A1X1 + ... + AnXn <= B, where A1,..,An and B are all constants. -Given an assignment for X (e.g. X1=1, X2=4,...Xn=2), how can I check if the constraints are satisfied? I know how to do this with matrices using Numpy, but wondering if it's possible to do using PULP to let the library handle the work. -My hope here is that I can check specific variable assignments. I do not want to run an optimization algorithm on the problem (e.g. prob.solve()). -Can PULP do this? Is there a different Python library that would be better? 
I've thought about Google's OR-Tools but have found the documentation is a little bit harder to parse through than PULP's.","It looks like this is possible doing the following: - -Define PULP variables and constraints and add them to an LpProblem -Make a dictionary of your assignments in the form {'variable name': value} -Use LpProblem.assignVarsVals(your_assignment_dict) to assign those values -Run LpProblem.valid() to check that your assignment meets all constraints and variable restrictions - -Note that this will almost certainly be slower than using numpy and Ax <= b. Formulating the problem might be easier, but performance will suffer due to how PULP runs these checks.",1.2,True,1,5489 -2018-05-02 15:18:32.357,How to find if there are wrong values in a pandas dataframe?,"I am quite new in Python coding, and I am dealing with a big dataframe for my internship. -I had an issue as sometimes there are wrong values in my dataframe. For example I find string type values (""broken leaf"") instead of integer type values as (""120 cm"") or (NaN). -I know there is the df.replace() function, but therefore you need to know that there are wrong values. So how do I find if there are any wrong values inside my dataframe? -Thank you in advance","""120 cm"" is a string, not an integer, so that's a confusing example. Some ways to find ""unexpected"" values include: -Use ""describe"" to examine the range of numerical values, to see if there are any far outside of your expected range. -Use ""unique"" to see the set of all values for cases where you expect a small number of permitted values, like a gender field. -Look at the datatypes of columns to see whether there are strings creeping in to fields that are supposed to be numerical. -Use regexps if valid values for a particular column follow a predictable pattern.",0.0,False,1,5490 -2018-05-03 09:34:04.367,Read raw ethernet packet using python on Raspberry,"I have a device which is sending packet with its own specific construction (header, data, crc) through its ethernet port. -What I would like to do is to communicate with this device using a Raspberry and Python 3.x. -I am already able to send Raw ethernet packet using the ""socket"" Library, I've checked with wireshark on my computer and everything seems to be transmitted as expected. -But now I would like to read incoming raw packet sent by the device and store it somewhere on my RPI to use it later. -I don't know how to use the ""socket"" Library to read raw packet (I mean layer 2 packet), I only find tutorials to read higher level packet like TCP/IP. -What I would like to do is Something similar to what wireshark does on my computer, that is to say read all raw packet going through the ethernet port. -Thanks, -Alban","Did you try using ettercap package (ettercap-graphical)? -It should be available with apt. -Alternatively you can try using TCPDump (Java tool) or even check ip tables",0.0,False,1,5491 -2018-05-04 02:07:04.457,Host command and ifconfig giving different ips,"I am using server(server_name.corp.com) inside a corporate company. On the server i am running a flask server to listen on 0.0.0.0:5000. -servers are not exposed to outside world but accessible via vpns. -Now when i run host server_name.corp.com in the box i get some ip1(10.*.*.*) -When i run ifconfig in the box it gives me ip2(10.*.*.*). -Also if i run ping server_name.corp.com in same box i get ip2. -Also i can ssh into server with ip1 not ip2 -I am able to access the flask server at ip1:5000 but not on ip2:5000. 
-I am not into networking so fully confused on why there are 2 different ips and why i can access ip1:5000 from browser not ip2:5000. -Also what is equivalent of host command in python ( how to get ip1 from python. I am using socktet.gethostbyname(server_name.corp.com) which gives me ip2)","Not quite clear about the network status by your statements, I can only tell that if you want to get ip1 by python, you could use standard lib subprocess, which usually be used to execute os command. (See subprocess.Popen)",0.0,False,1,5492 -2018-05-05 02:23:02.700,how to use python to check if subdomain exists?,"Does anyone know how to check if a subdomain exists on a website? -I am doing a sign up form and everyone gets there own subdomain, I have some javascript written on the front end but I need to find a way to check on the backend.","Put the assigned subdomain in a database table within unique indexed column. It will be easier to check from python (sqlalchemy, pymysql ect...) if subdomain has already been used + will automatically prevent duplicates to be assigned/inserted.",0.0,False,2,5493 -2018-05-05 02:23:02.700,how to use python to check if subdomain exists?,"Does anyone know how to check if a subdomain exists on a website? -I am doing a sign up form and everyone gets there own subdomain, I have some javascript written on the front end but I need to find a way to check on the backend.","Do a curl or http request on subdomain which you want to verify, if you get 404 that means it doesn't exists, if you get 200 it definitely exists",0.2012947653214861,False,2,5493 -2018-05-05 14:24:30.920,How to use visual studio code >after< installing anaconda,"If you have never installed anaconda, it seems to be rather simple. In the installation process of Anaconda, you choose to install visual studio code and that is it. -But I would like some help in my situation: -My objective: I want to use visual studio code with anaconda - -I have a mac with anaconda 1.5.1 installed. -I installed visual studio code. -I updated anaconda (from the terminal) now it is 1.6.9 - -From there, I don't know how to proceed. -any help please","You need to select the correct python interpreter. When you are in a .py file, there's a blue bar in the bottom of the window (if you have the dark theme), there you can select the anaconda python interpreter. -Else you can open the command window with ctrl+p or command+p and type '>' for running vscode commands and search '> Python Interpreter'. -If you don't see anaconda there google how to add a new python interpreter to vscode",0.3869120172231254,False,1,5494 -2018-05-05 16:40:27.703,Calling Python scripts from Java. Should I use Docker?,"We have a Java application in our project and what we want is to call some Python script and return results from it. What is the best way to do this? -We want to isolate Python execution to avoid affecting Java application at all. Probably, Dockerizing Python is the best solution. I don't know any other way. -Then, a question is how to call it from Java. -As far as I understand there are several ways: - -start some web-server inside Docker which accepts REST calls from Java App and runs Python scripts and returns results to Java via REST too. -handle request and response via Docker CLI somehow. -use Java Docker API to send REST request to Docker which then converted by Docker to Stdin/Stdout of Python script inside Docker. 
- -What is the most effective and correct way to connect Java App with Python, running inside Docker?","You don’t need docker for this. There are a couple of options, you should choose depending on what your Java application is doing. - -If the Java application is a client - based on swing, weblaunch, or providing UI directly - you will want to turn the python functionality to be wrapped in REST/HTTP calls. -If the Java application is a server/webapp - executing within Tomcat, JBoss or other application container - you should simply wrap the python scrip inside a exec call. See the Java Runtime and ProcessBuilder API for this purpose.",1.2,True,1,5495 -2018-05-05 21:56:31.143,Unintuitive solidity contract return values in ethereum python,"I'm playing around with ethereum and python and I'm running into some weird behavior I can't make sense of. I'm having trouble understanding how return values work when calling a contract function with the python w3 client. Here's a minimal example which is confusing me in several different ways: -Contract: - -pragma solidity ^0.4.0; - -contract test { - function test(){ - - } - - function return_true() public returns (bool) { - return true; - } - - function return_address() public returns (address) { - return 0x111111111111111111111111111111111111111; - } -} +Given your experience, one solution is to do what you know. lighttpd can execute your script via FastCGI. Python3 supports FastCGI with Flask (or other frameworks). A python3 app which serially processes requests will have one process issuing commands to the camera. -Python unittest code +I would like to continue using the lighttpd server over rather than switching entirely over to Flask, but I don't know how to interact with the script once it is actually running to execute individual functions. -from web3 import Web3, EthereumTesterProvider -from solc import compile_source -from web3.contract import ConciseContract -import unittest -import os +Configure your Flask app to run as a FastCGI app instead of as a standalone webserver.",1.2,True,2,6801 +2020-06-04 17:52:29.977,How to prevent direct access to cert files when connecting MQTT client with Python,"I am using the pho MQTT client library successfully to connect to AWS. After the mqtt client is created, providing the necessary keys and certificates is done with a call to client.tls_set() This method requires file paths to root certificate, own certificate and private key file. +All is well and life is good except that I now need to provide this code to external contractors whom should not have direct access to these cert and key files. The contractors have a mix of PC and macOS systems. On macOS we have keychain I am familiar with but do not know how to approach this with python - examples/library references would be great. On the PC I have no idea which is the prevalent mechanism to solve this. +To add to this, I have no control over the contractor PCs/Macs - i.e., I have no ability to revoke an item in their keychain. How do I solve this? +Sorry for being such a noob in security aspects. No need to provide complete examples, just references to articles to read, courses to follow and keywords to search would be great - though code examples will be happily accepted also of course.","Short answer: you don't. +Longer answer: +If you want them to be able connect then you have no choice but to give them the cert/private key that identifies that device/user. 
+The control you have is issue each contractor with their own unique key/cert and if you believe key/cert has been miss used, revoke the cert at the CA and have the broker check the revocation list. +You can protect the private key with a password, but again you have to either include this in the code or give it to the contractor. +Even if the contractors were using a device with a hardware keystore (secure element) that you could securely store the private key in, all that would do is stop the user from extracting the key and moving it to a different machine, they would still be able to make use of the private key for what ever they want on that machine. +The best mitigation is to make sure the certificate has a short life and control renewing the certificate, this means if a certificate is leaked then it will stop working quickly even if you don't notice and explicitly revoke it.",0.3869120172231254,False,1,6802 +2020-06-04 20:38:18.423,Importing module to VS code,"im very new in programming and i learn Python. +I'm coding on mac btw. +I'd like to know how can i import some modules in VS code. +For exemple, if i want to use the speedtest module i have to download it (what i did) and then import it to my code. But it never worked and i always have the error no module etc. +I used pip to install each package, i have them on my computer but i really don't know to import them on VS code. Even with the terminal of the IDE. +I know it must be something very common for u guys but i will help me a lot. +Thx","Quick Summary +This might not be an issue with VS Code. +Problem: The folder to which pip3 installs your packages is not on your $PATH. +Fix: Go to /Applications/Python 3.8 in Finder, and run the Update Shell Profile.command script. Also, if you are using pip install , instead of pip3 install that might be your problem. +Details +Your Mac looks for installed packages in several different folders on your Mac. The list of folders it searches is stored in an environment variable called $PATH. Paths like /Library/Frameworks/Python.framework/Versions/3.8/bin should be in the $PATH environment variable, since that's where pip3 installs all packages.",1.2,True,1,6803 +2020-06-05 09:05:11.073,How to install pip and python modules with a single batch file?,"I really don't understand how batch files work. But I made a python script for my father to use in his work. And I thought installing pip and necessary modules with a single batch file would make it a lot easier for him. So how can I do it? +The modules I'm using in script are: xlrd, xlsxwriter and tkinter.","You can create a requirements.txt file then use pip install -r requirements.txt to download all modules, if you are working on a virtual environment and you only have the modules your project uses, you can use pip3 freeze >> requirements.txt This is not a batch file but it will work just fine and it is pretty easy",0.296905446847765,False,1,6804 +2020-06-05 12:25:21.517,Python Contour Plot/HeatMap,"I have x and y coordinates in a df from LoL matches and i want to create a contour plot or heat map to show where the player normally moves in a match. +Does any one know how can I do it?","A contour plot or heat map needs 3 values. You have to provide x, y and z values in order to plot a contour since x and y give the position and z gives the value of the variable you want to show the contour of as a variable of x and y. +If you want to show the movement of the players as a function of time you should look at matplotlib's animations. 
Or if you want to show the ""players density field"" you have to calculate it.",0.0,False,1,6805 +2020-06-06 13:00:36.307,Login required in django,"I am developing ecommerce website in django . +I have view ( addToCart) +I want sure before add to cart if user logged in or not +so that i use @login_required('login') before view +but when click login it show error (can't access to page ). +Note that: normal login is working","Please check the following +1. Add login url on settings +2. Add redirect url on login required decorator +3. If you create a custom login view make sure to check next kwargs",0.0,False,1,6806 +2020-06-06 23:06:30.737,Running all Python scripts with the same name across many directories,"I have a file structure that looks something like this: +Master: +First -def get_contract_source(file_name): - with open(file_name) as f: - return f.read() - - -class TestContract(unittest.TestCase): - CONTRACT_FILE_PATH = ""test.sol"" - DEFAULT_PROPOSAL_ADDRESS = ""0x1111111111111111111111111111111111111111"" - - def setUp(self): - # copied from https://github.com/ethereum/web3.py/tree/1802e0f6c7871d921e6c5f6e43db6bf2ef06d8d1 with MIT licence - # has slight modifications to work with this unittest - contract_source_code = get_contract_source(self.CONTRACT_FILE_PATH) - compiled_sol = compile_source(contract_source_code) # Compiled source code - contract_interface = compiled_sol[':test'] - # web3.py instance - self.w3 = Web3(EthereumTesterProvider()) - # Instantiate and deploy contract - self.contract = self.w3.eth.contract(abi=contract_interface['abi'], bytecode=contract_interface['bin']) - # Get transaction hash from deployed contract - tx_hash = self.contract.constructor().transact({'from': self.w3.eth.accounts[0]}) - # Get tx receipt to get contract address - tx_receipt = self.w3.eth.getTransactionReceipt(tx_hash) - self.contract_address = tx_receipt['contractAddress'] - # Contract instance in concise mode - abi = contract_interface['abi'] - self.contract_instance = self.w3.eth.contract(address=self.contract_address, abi=abi, - ContractFactoryClass=ConciseContract) - - def test_return_true_with_gas(self): - # Fails with HexBytes('0xd302f7841b5d7c1b6dcff6fca0cd039666dbd0cba6e8827e72edb4d06bbab38f') != True - self.assertEqual(True, self.contract_instance.return_true(transact={""from"": self.w3.eth.accounts[0]})) - - def test_return_true_no_gas(self): - # passes - self.assertEqual(True, self.contract_instance.return_true()) - - def test_return_address(self): - # fails with AssertionError: '0x1111111111111111111111111111111111111111' != '0x0111111111111111111111111111111111111111' - self.assertEqual(self.DEFAULT_PROPOSAL_ADDRESS, self.contract_instance.return_address()) - -I have three methods performing tests on the functions in the contract. In one of them, a non-True value is returned and instead HexBytes are returned. In another, the contract functions returns an address constant but python sees a different value from what's expected. In yet another case I call the return_true contract function without gas and the True constant is seen by python. - -Why does calling return_true with transact={""from"": self.w3.eth.accounts[0]} cause the return value of the function to be HexBytes(...)? -Why does the address returned by return_address differ from what I expect? - -I think I have some sort of fundamental misunderstanding of how gas affects function calls.","The returned value is the transaction hash on the blockchain. 
When transacting (i.e., when using ""transact"" rather than ""call"") the blockchain gets modified, and the library you are using returns the transaction hash. During that process you must have paid ether in order to be able to modify the blockchain. However, operating in read-only mode costs no ether at all, so there is no need to specify gas. -Discounting the ""0x"" at the beginning, ethereum addresses have a length of 40, but in your test you are using a 39-character-long address, so there is a missing a ""1"" there. Meaning, tests are correct, you have an error in your input. - -Offtopic, both return_true and return_address should be marked as view in Solidity, since they are not actually modifying the state. I'm pretty sure you get a warning in remix. Once you do that, there is no need to access both methods using ""transact"" and paying ether, and you can do it using ""call"" for free. -EDIT -Forgot to mention: in case you need to access the transaction hash after using transact you can do so calling the .hex() method on the returned HexBytes object. That'll give you the transaction hash as a string, which is usually way more useful than as a HexBytes. -I hope it helps!",0.6730655149877884,False,1,5496 -2018-05-05 22:40:06.727,Colaboratory: How to install and use on local machine?,"Google Colab is awesome to work with, but I wish I can run Colab Notebooks completely locally and offline, just like Jupyter notebooks served from the local? -How do I do this? Is there a Colab package which I can install? - -EDIT: Some previous answers to the question seem to give methods to access Colab hosted by Google. But that's not what I'm looking for. -My question is how do I pip install colab so I can run it locally like jupyter after pip install jupyter. Colab package doesn't seem to exist, so if I want it, what do I do to install it from the source?","Google Colab is a cloud computer,it only runs through Internet,you can design your Python script,and run the Python script through Colab,run Python will use Google Colab hardware,Google will allocate CPU, RAM, GPU and etc for your Python script,your local computer just submit Python code to Google Colab,and run,then Google Colab return the result to your local computer,cloud computation is stronger than local -computation if your local computer hardware is limited,see this question link will inspire you,asked by me,https://stackoverflow.com/questions/48879495/how-to-apply-googlecolab-stronger-cpu-and-more-ram/48922199#48922199",-0.4961739557460144,False,1,5497 -2018-05-06 09:13:56.887,Predicting binary classification,"I have been self-learning machine learning lately, and I am now trying to solve a binary classification problem (i.e: one label which can either be true or false). I was representing this as a single column which can be 1 or 0 (true or false). -Nonetheless, I was researching and read about how categorical variables can reduce the effectiveness of an algorithm, and how one should one-hot encode them or translate into a dummy variable thus ending with 2 labels (variable_true, variable_false). -Which is the correct way to go about this? Should one predict a single variable with two possible values or 2 simultaneous variables with a fixed unique value? 
-As an example, let's say we want to predict whether a person is a male or female: -Should we have a single label Gender and predict 1 or 0 for that variable, or Gender_Male and Gender_Female?","it's basically the same, when talking about binary classification, you can think of a final layer for each model that adapt the output to other model -e.g if the model output 0 or 1 than the final layer will translate it to vector like [1,0] or [0,1] and vise-versa by a threshold criteria, usually is >= 0.5 -a nice byproduct of 2 nodes in the final layer is the confidence level of the model in it's predictions [0.80, 0.20] and [0.55, 0.45] will both yield [1,0] classification but the first prediction has more confidence -this can be also extrapolate from 1 node output by the distance of the output from the fringes 1 and 0 so 0.1 will be considered with more confidence than 0.3 as a 0 prediction",1.2,True,1,5498 -2018-05-06 21:22:33.530,Does gRPC have the ability to add a maximum retry for call?,"I haven't found any examples how to add a retry logic on some rpc call. Does gRPC have the ability to add a maximum retry for call? -If so, is it a built-in function?",Retries are not a feature of gRPC Python at this time.,1.2,True,1,5499 -2018-05-07 02:06:48.980,Tensorflow How can I make a classifier from a CSV file using TensorFlow?,"I need to create a classifier to identify some aphids. -My project has two parts, one with a computer vision (OpenCV), which I already conclude. The second part is with Machine Learning using TensorFlow. But I have no idea how to do it. -I have these data below that have been removed starting from the use of OpenCV, are HuMoments (I believe that is the path I must follow), each line is the HuMoments of an aphid (insect), I have 500 more data lines that I passed to one CSV file. -How can I make a classifier from a CSV file using TensorFlow? - -HuMoments (in CSV file): - 0.27356047,0.04652453,0.00084231,7.79486673,-1.4484489,-1.4727380,-1.3752532 - 0.27455502,0.04913969,3.91102408,1.35705980,3.08570234,2.71530819,-5.0277362 - 0.20708829,0.01563241,3.20141907,9.45211423,1.53559373,1.08038279,-5.8776765 - 0.23454372,0.02820523,5.91665789,6.96682467,1.02919203,7.58756583,-9.7028848","You can start with this tutorial, and try it first without changing anything; I strongly suggest this unless you are already familiar with Tensorflow so that you gain some familiarity with it. -Now you can modify the input layer of this network to match the dimensions of the HuMoments. Next, you can give a numeric label to each type of aphid that you want to recognize, and adjust the size of the output layer to match them. -You can now read the CSV file using python, and remove any text like ""HuMoments"". If your file has names of aphids, remove them and replace them with numerical class labels. Replace the training data of the code in the above link, with these data. -Now you can train the network according to the description under the title ""Train the Model"". -One more note. Unless it is essential to use Tensorflow to match your project requirements, I suggest using Keras. 
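-For instance, the whole network can collapse to a few lines in Keras (the layer sizes and training settings below are arbitrary, untuned choices, and num_classes is however many aphid types you label):
-from keras.models import Sequential
-from keras.layers import Dense
-
-model = Sequential()
-model.add(Dense(32, activation='relu', input_shape=(7,)))  # 7 inputs, one per HuMoment
-model.add(Dense(num_classes, activation='softmax'))        # one output per aphid class
-model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
-model.fit(X, y, epochs=50)  # X: rows from your CSV, y: integer class labels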
Keras is a higher level library that is much easier to learn than Tensorflow, and you have more sample code online.",0.0,False,1,5500 -2018-05-07 23:22:30.577,How can you fill in an open dialog box in headless chrome in Python and Selenium?,"I'm working with Python and Selenium to do some automation in the office, and I need to fill in an ""upload file"" dialog box (a windows ""open"" dialog box), which was invoked from a site using a headless chrome browser. Does anyone have any idea on how this could be done? -If I wasn't using a headless browser, Pywinauto could be used with a line similar to the following, for example, but this doesn't appear to be an option in headless chrome: -app.pane.open.ComboBox.Edit.type_keys(uploadfilename + ""{ENTER}"") -Thank you in advance!","This turned out to not be possible. I ended up running the code on a VM and setting a registry key to allow automation to be run while the VM was minimized, disconnected, or otherwise not being interacted with by users.",0.0,False,1,5501 -2018-05-08 10:55:31.387,"How to ""compile"" a python script to an ""exe"" file in a way it would be run as background process?","I know how to run a python script as a background process, but is there any way to compile a python script into exe file using pyinstaller or other tools so it could have no console or window ?","If you want to run it in background without ""console and ""window"" you have to run it as a service.",0.0,False,1,5502 -2018-05-08 12:08:02.053,(Django) Running asynchronous server task continously in the background,"I want to let a class run on my server, which contains a connected bluetooth socket and continously checks for incoming data, which can then by interpreted. In principle the class structure would look like this: -Interpreter: --> connect (initializes the class and starts the loop) --> loop (runs continously in the background) --> disconnect (stops the loop) -This class should be initiated at some point and then run continously in the background, from time to time a http request would perhaps need data from the attributes of the class, but it should run on its own. -I don't know how to accomplish this and don't want to get a description on how to do it, but would like to know where I should start, like how this kind of process is called.","Django on its own doesn't support any background processes - everything is request-response cycle based. -I don't know if what you're trying to do even has a dedicated name. But most certainly - it's possible. But don't tie yourself to Django with this solution. -The way I would accomplish this is I'd run a separate Python process, that would be responsible for keeping the connection to the device and upon request return the required data in some way. -The only difficulty you'd have is determining how to communicate with that process from Django. Since, like I said, django is request based, that secondary app could expose some data to your Django app - it could do any of the following: - -Expose a dead-simple HTTP Rest API -Expose an UNIX socket that would just return data immediatelly after connection -Continuously dump data to some file/database/mmap/queue that Django could read",1.2,True,1,5503 -2018-05-08 18:49:32.583,Replace character with a absolute value,"When searching my db all special characters work aside from the ""+"" - it thinks its a space. 
Looking at the backend, which is Python, there are no issues with it receiving special chars, so I believe the problem is the frontend, which is JavaScript.
-What I need to do is replace ""+"" with ""%2b"". Is there a way for me to create this so it has this value going forward?","You can use decodeURIComponent('%2b'), or encodeURIComponent('+');
-if you decode the response from the server, you get the + sign.
-If you want to replace all occurrences, just place the whole string inside the method and it decodes/encodes the whole string.",1.2,True,1,5504
-2018-05-08 21:02:22.097,How to deal with working on one project on different machines (paths)?,"This is my first time coding a ""project"" (something more than solving exercises in single files). A number of my .py files have variables imported from a specific path. I also have a main ""Run"" file where I import things I've written in other files and execute the project as a whole.
-Recently I've started working on this project on several different machines (home, work, laptop etc) and have just started learning how to use GitHub.
-My question is, how do I deal with the fact that every time I open up my code on a different machine I need to go around changing all the paths to fit the new machine, and then change them back again when I'm home? I started writing a Run file for each location I work at so that my sys.path commands are ok with that machine, but it doesn't solve the problem of my other modules importing variables from specific paths that vary from machine to machine. Is there a way round this or is the problem in how I'm setting up the project itself?
-In an ideal world it would all just work without me having to change something before I run, depending on the machine I'm working on, but I don't know if that's possible.
-My current thoughts are whether there is some command I'm not aware of that can set variables inside a .py file from my main Run.py file - that way I can just have a run file for each machine.
-Any suggestions are gladly taken! Whatever it is, it must be better than commenting back in the correct file path each time I open it on a different machine!","You should always use relative paths, not static ones, which I assume you currently have.
-Assuming you're in an index file and you need to access the images folder, you probably have something like /users/username/project/images/image.png
-Instead you want something like ../images/image.png; this tells your index file to go back one folder, say to the root of the project, then proceed into the images folder etc.
-Relative paths mean you create a path from where your file exists, and not an entire path from the ground up.",0.0,False,1,5505
-2018-05-10 01:53:40.577,Document similarity in production environment,"We have n number of documents. Upon submission of a new document by a user, our goal is to inform them about possible duplication of an existing document (just like stackoverflow suggests questions that may already have an answer).
-In our system, a new document is uploaded every minute, mostly about the same topic (where there is more chance of duplication).
-Our current implementation includes a gensim doc2vec model trained on documents (tagged with unique document ids). We infer a vector for the new document and find the most_similar docs (ids) with it. The reason behind choosing the doc2vec model is that we wanted to take advantage of semantics to improve results. As far as we know, it does not support online training, so we might have to schedule a cron or something that periodically updates the model. 
But scheduling a cron will be disadvantageous, as documents come in bursts. A user may upload duplicates while the model is not yet trained on the new data. Also, given the huge amount of data, training time will be higher.
-So I would like to know how such cases are handled in big companies. Is there any better alternative, or a better algorithm for such a problem?","You don't have to take the old model down to start training a new model, so despite any training lags, or new-document bursts, you'll always have a live model doing the best it can.
-Depending on how much the document space changes over time, you might find retraining to have a negligible benefit. (One good model, built on a large historical record, might remain fine for inferring new vectors indefinitely.)
-Note that tuning inference to use more steps (especially for short documents), or a lower starting alpha (more like the training default of 0.025) may give better results.
-If word-vectors are available, there is also the ""Word Mover's Distance"" (WMD) calculation of document similarity, which might be even better at identifying close duplicates. Note, though, it can be quite expensive to calculate – you might want to do it only against a subset of likely candidates, or have to add many parallel processors, to do it in bulk. There's another newer distance metric called 'soft cosine similarity' (available in recent gensim) that's somewhere between simple vector-to-vector cosine-similarity and full WMD in its complexity, and that may be worth trying.
-To the extent the vocabulary hasn't expanded, you can load an old Doc2Vec model, and continue to train() it – and starting from an already working model may help you achieve similar results with fewer passes. But note: it currently doesn't support learning any new words, and the safest practice is to re-train with a mix of all known examples interleaved. (If you only train on incremental new examples, the model may lose a balanced understanding of the older documents that aren't re-presented.)
-(If your chief concern is documents that duplicate exact runs-of-words, rather than just similar fuzzy topics, you might look at mixing in other techniques, such as breaking a document into a bag-of-character-ngrams, or 'shingleprinting', as is common in plagiarism-detection applications.)",1.2,True,1,5506
-2018-05-10 02:52:36.463,Apache Airflow: Gunicorn Configuration File Not Being Read?,"I'm trying to run Apache Airflow's webserver from a virtualenv on a Redhat machine, with some configuration options from a Gunicorn config file. Gunicorn and Airflow are both installed in the virtualenv. The command airflow webserver starts Airflow's webserver and the Gunicorn server. The config file has options to make sure Gunicorn uses/accepts TLSv1.2 only, as well as a list of ciphers to use.
-The Gunicorn config file is gunicorn.py. This file is referenced through an environment variable GUNICORN_CMD_ARGS=""--config=/path/to/gunicorn.py ..."" in .bashrc. This variable also sets a couple of other variables in addition to --config. However, when I run the airflow webserver command, the options in GUNICORN_CMD_ARGS are never applied.
-Seeing as how Gunicorn is not called from the command line, but instead by Airflow, I'm assuming this is why the GUNICORN_CMD_ARGS environment variable is not read, but I'm not sure and I'm new to both technologies...
-TL;DR:
-Is there another way to set up Gunicorn to automatically reference a config file, without the GUNICORN_CMD_ARGS environment variable? 
-Here's what I'm using: - -gunicorn 19.8.1 -apache-airflow 1.9.0 -python 2.7.5","When Gunicorn is called by Airflow, it uses ~\airflow\www\gunicorn_config.py as its config file.",1.2,True,1,5507 -2018-05-10 10:48:13.883,How to make a Python Visualization as service | Integrate with website | specially sagemaker,"I am from R background where we can use Plumber kind tool which provide visualization/graph as Image via end points so we can integrate in our Java application. -Now I want to integrate my Python/Juypter visualization graph with my Java application but not sure how to host it and make it as endpoint. Right now I using AWS sagemaker to host Juypter notebook","Amazon SageMaker is a set of different services for data scientists. You are using the notebook service that is used for developing ML models in an interactive way. The hosting service in SageMaker is creating an endpoint based on a trained model. You can call this endpoint with invoke-endpoint API call for real time inference. -It seems that you are looking for a different type of hosting that is more suitable for serving HTML media rich pages, and doesn’t fit into the hosting model of SageMaker. A combination of EC2 instances, with pre-built AMI or installation scripts, Congnito for authentication, S3 and EBS for object and block storage, and similar building blocks should give you a scalable and cost effective solution.",1.2,True,1,5508 -2018-05-11 04:04:54.463,Python - Enable TLS1.2 on OSX,"I have a virtualenv environment running python 3.5 -Today, when I booted up my MacBook, I found myself unable to install python packages for my Django project. I get the following error: - -Could not fetch URL : There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:646) - skipping - -I gather that TLS 1.0 has been discontinued, but from what I understand, newer versions of Python should be using TLS1.2, correct? Even outside of my environment, running pip3 trips the same error. I've updated to the latest version of Sierra and have updated Xcode as well. Does anyone know how to resolve this?","Here is the fix: -curl https://bootstrap.pypa.io/get-pip.py | python -Execute from within the appropriate virtual environment.",1.2,True,1,5509 -2018-05-11 21:19:50.853,python Ubuntu: too many open files [eventpoll],"Basically, it is a multi-threaded crawler program, which uses requests mainly. After running the program for a few hours, I keep getting the error ""Too many open files"". -By running: lsof -p pid, I saw a huge number of entries like below: -python 75452 xxx 396u a_inode 0,11 0 8121 [eventpoll] -I cannot figure out what it is and how to trace back to the problem. -Previously, I tried to have it running in Windows and never seen this error. -Any idea how to continue investigating this issue? thanks.","I have figured out that it is caused by Gevent. After replacing gevent with multi-thread, everything is just OK. -However, I still don't know what's wrong with gevent, which keeps opening new files(eventpoll).",0.0,False,1,5510 -2018-05-11 22:56:26.280,How to prepare Python Selenium project to be used on client's machine?,"I've recently started freelance python programming, and was hired to write a script that scraped certain info online (nothing nefarious, just checking how often keywords appear in search results). -I wrote this script with Selenium, and now that it's done, I'm not quite sure how to prepare it to run on the client's machine. 
-Selenium requires a path to your chromedriver file. Am I just going to have to compile the py file as an exe and accept the path to his chromedriver as an argument, then show him how to download chromedriver and how to write the path?
-EDIT: Just actually had a thought while typing this out. Would it work if I sent the client a folder including a chromedriver.exe inside of said folder, so the path was always consistent?","Option 1) Deliver a Docker image, if the customer doesn't need to watch the browser while it runs and can set up a Docker environment. The Docker image should include the following items:
-
-Python
-Dependencies for running your script, like selenium
-A headless chrome browser and a compatible chrome webdriver binary
-Your script; put it in github and
-fetch it when the docker container starts, so that the customer always gets
-your latest code
-
-This approach's benefits:
-
-You only need to focus on the scripts, like bug fixes and improvements, after delivery
-The customer only needs to execute the same docker command
-
-Option 2) Deliver a shell script to do most of the steps automatically. It should accomplish the following items:
-
-Install Python (or leave it for the customer to complete)
-Install the Selenium library and others needed
-Install the latest chrome webdriver binary (which is backward compatible)
-Fetch your script from a code repo like github, or simply deliver it as a packaged folder
-Run your script.
-
-Option 3) Deliver your script and a user guide; the customer has to do many steps themselves. You can supply a config file along with your script for the customer to specify the chrome driver binary path after they download it. Your script reads the path from this file, which is better than entering it on the cmd line every time.",0.0,False,1,5511
-2018-05-12 09:01:11.140,Using Hydrogen with Python 3,"The default version of python installed on my mac is python 2. I also have python 3 installed but can't uninstall python 2.
-I'd like to configure Hydrogen on Atom to run my script using python 3 instead.
-Does anybody know how to do this?","I used jupyter kernelspec list and I found 2 kernels available, one for python2 and another for python3.
-So I pasted the python3 kernel folder into the same directory where the python2 kernel is installed and removed the python2 kernel using 'rm -rf python2'",0.0,False,1,5512
-2018-05-12 09:48:08.917,Python 3 install location,"I am using Ubuntu 16.04. Where is the python 3 installation directory?
-Running ""whereis python3"" in the terminal gives me:
-
-python3: /usr/bin/python3.5m-config /usr/bin/python3
 /usr/bin/python3.5m /usr/bin/python3.5-config /usr/bin/python3.5
 /usr/lib/python3 /usr/lib/python3.5 /etc/python3 /etc/python3.5
 /usr/local/lib/python3.5 /usr/include/python3.5m
 /usr/include/python3.5 /usr/share/python3
 /usr/share/man/man1/python3.1.gz
-
-Also, where is the interpreter, i.e. the python 3 executable? And how would I add this path to Pycharm?","you can try this:
-which python3",1.2,True,1,5513
-2018-05-12 10:36:33.093,How to continue to train a model with new classes and data?,"I have trained a model successfully and now I want to continue training it with new data. If given data with the same number of classes, it works fine. But given more classes than initially, it will give me the error:
-
-ValueError: Shapes (?, 14) and (?, 21) are not compatible
-
-How can I dynamically increase the number of classes in my trained model, or how can I make the model accept a lesser number of classes? 
Do I need to save the classes in a pickle file?","The best thing to do is to train your network from scratch with the output layer adjusted to the new output class size.
-If retraining is an issue, then keep the trained network as it is and only drop the last layer. Add a new layer with the proper output size, initialized to random weights, and then fine-tune (train) the entire network.",0.0,False,1,5514
-2018-05-13 11:51:03.607,transfer files between local machine and remote server,"I want access from a remote ubuntu server to my local machine because I have multiple files on this machine and I want to transfer them periodically (every minute) to the server. How can I do that using python?","You can easily transfer files between local and remote machines, or between two remote servers. If both servers are Linux-based and you need to transfer multiple files and folders using a single command, you need to follow the steps below:
-
-The user on one remote server should have access on the other remote server to the corresponding directory you want to transfer files to.
-
-You might need to create a policy or group, assign the list of servers you want to access to that group, and assign the user to that group so the 2 different remote servers can talk to each other.
-
-Run the following scp command:
-
 scp [options] username1@source_host:directory1/filename1
 username2@destination_host:directory2/filename2",0.0,False,1,5515
-2018-05-13 19:28:44.063,Need help using Keras' model.predict,"My goal is to make an easy neural network fit by providing 2 vertices of a certain graph, and 1 if there's a link or 0 if there's none.
-I fit my model; it gets a loss of about 0.40 and accuracy of about 83% during fitting. I then evaluate the model by providing a batch of all positive samples and several batches of negative ones (utilising random.sample). My model gets ~0.35 loss and 1.0 accuracy for positive samples and ~0.46 loss, 0.68 accuracy for negative ones.
-My understanding of neural networks is extremely limited, but to my understanding the above means it theoretically is always right when it outputs 0 when there's no link, but can sometimes output 1 even if there is none.
-Now for my actual problem: I try to ""reconstruct"" the original graph with my neural network via model.predict. The problem is I don't understand what the predict output means. At first I assumed values above 0.5 mean 1, else 0. But if that's the case the model doesn't even come close to rebuilding the original.
-I get that it won't be perfect, but it simply returns values above 0.5 for random link candidates.
-Can someone explain to me how exactly model.predict works and how to properly use it to rebuild my graph?","The model that you trained is not directly optimized w.r.t. the graph reconstruction. Without loss of generality, for an N-node graph, you need to predict N choose 2 links. And it may be reasonable to assume that the true values of most of these links are 0.
-When looking into your model accuracy on the 0-class and 1-class, it is clear that your model is prone to predicting the 1-class, assuming your training data is balanced. Therefore, your reconstructed graph contains many false-alarm links. This is the exact reason why the performance of your reconstructed graph is poor.
-If it is possible to retrain the model, I suggest you do it and use more negative samples.
-If not, you need to consider applying some post-processing. 
For example, instead of finding a threshold to decide which two nodes have a link, use the raw predicted link probabilities to form a node-to-node linkage matrix, and apply something like the minimum spanning tree to further decide what are appropriate links.",0.0,False,1,5516 -2018-05-14 05:48:54.863,How to used a tensor in different graphs?,"I build two graphs in my code, graph1 and graph2. -There is a tensor, named embedding, in graph1. I tied to use it in graph2 by using get_variable, while the error is tensor must be from the same graph as Tensor. I found that this error occurs because they are in different graphs. -So how can I use a tensor in graph1 to graph2?","expanding on @jdehesa's comment, -embedding could be trained initially, saved from graph1 and restored to graph2 using tensorflows saver/restore tools. for this to work you should assign embedding to a name/variable scope in graph1 and reuse the scope in graph2",0.0,False,1,5517 -2018-05-14 18:25:36.107,Best practice for rollbacking a multi-purpose python script,"I'm sorry if the title is a little ambiguous. Let me explain what I mean by that : -I have a python script that does a few things : creates a row in a MySQL table, inserts a json document to a MongoDB, Updates stuff in a local file, and some other stuff, mostly related to databases. Thing is, I want the whole operation to be atomic. Means - If anything during the process I mentioned failed, I want to rollback everything I did. I thought of implementing a rollback function for every 'create' function I have. But I'd love to hear your opinion for how to make some sort of a linked list of operations, in which if any of the nodes failed, I want to discard all the changes done in the process. -How would you design such a thing? Is there a library in Python for such things?","You should implement every action to be reversible and the reverse action to be executable even if the original action has failed. Then if you have any failures, you execute every reversal.",0.0,False,1,5518 -2018-05-15 09:13:53.017,Why and how would you not use a python GUI framework and make one yourself like many applications including Blender do?,"I have looked at a few python GUI frameworks like PyQt, wxPython and Kivy, but have noticed there aren’t many popular (used widely) python applications, from what I can find, that use them. -Blender, which is pretty popular, doesn’t seem to use them. How would one go about doing what they did/what did they do and what are the potential benefits over using the previously mentioned frameworks?","I would say that python isn't a popular choice when it comes to making a GUI application, which is why you don't find many examples of using the GUI frameworks. tkinter, which is part of the python development is another option for GUI's. -Blender isn't really a good example as it isn't a GUI framework, it is a 3D application that integrates python as a means for users to manipulate it's data. It was started over 25 years ago when the choice of cross platform frameworks was limited, so making their own was an easier choice to make. Python support was added to blender about 13 years ago. One of the factors in blender's choice was to make each platform look identical. That goes against most frameworks that aim to implement a native look and feel for each target platform. 
-So you make your own framework when the work of starting your own framework seems easier than adjusting an existing framework to your needs, or when the existing frameworks all fail to meet your needs. One of those needs may be licensing, with Qt and wxWidgets both available under the (L)GPL, while Qt also sells non-GPL licensing.
-The benefit of using an existing framework is the amount of work that is already done; you will find there is more in a GUI framework than you first think, especially when you start supporting multiple operating systems.",1.2,True,1,5519
-2018-05-15 19:46:31.853,Installing Kivy to an alternate location,"I have Python version 3.5, which is located here: C:\Program Files(x86)\Microsoft Visual Studio\Shared\Python35_64. If I install kivy and its components and add-ons with this command: python -m pip install kivy, then it does not install in the place that I need. I want to install kivy in this location: C:\Program Files(x86)\Microsoft Visual Studio\Shared\Python35_64\Lib\site-packages. How can I do this?
-I did not understand how to do this from the explanations on the official website.","So it turned out that I again solved my problem myself. I have Python 3.5 and Python 3.6 installed on my PC; kivy was installed in Python 3.6 by default, while my development environment was using Python 3.5. I replaced it with 3.6 and it all worked.",0.3869120172231254,False,1,5520
-2018-05-16 07:28:11.157,Portable application: s3 and Google cloud storage,"I want to write an application which is portable.
-With ""portable"" I mean that it can be used to access these storages:
-
-amazon s3
-google cloud storage
-Eucalyptus Storage
-
-The software should be developed using Python.
-I am unsure how to start, since I could not find a library which supports all three storages.",You can use boto3 for accessing any Amazon service.,0.3869120172231254,False,1,5521
-2018-05-16 14:25:25.257,How to access created nodes in a mininet topology?,"I am new to mininet. I created a custom topology with 2 linear switches and 4 nodes. I need to write a python module accessing each node in that topology and doing something, but I don't know how.
-Any idea please?","try the following:
-s1.cmd('ifconfig s1 192.168.1.0')
-h1.cmd('ifconfig h1 192.168.2.0')",1.2,True,1,5522
-2018-05-16 16:07:12.060,Real width of detected face,"I've been researching like forever, but couldn't find an answer. I'm using OpenCV to detect faces, and now I want to calculate the distance to the face. When I detect a face, I get a matofrect (which I can visualize with a rectangle). Pretty clear so far. But now: how do I get the width of the rectangle in the real world? There has to be some average value that represents the width of the human face. If I have that value (in inch, mm or whatever), I can calculate the distance using real width, pixel width and focal length. Please, can anyone help me?
-Note: I'm comparing the ""simple"" rectangle solution against a Facemark based distance measuring solution, so no landmark based answers. I just need the damn average face / matofrect width :D
-Thank you so much!","The rectangle OpenCV's face detection returns is slightly larger than the face, therefore an average face width may not be helpful. Instead, just take pictures of a face at different distances from the camera and record the distance from the camera along with the pixel width of the face for several distances. 
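-To make that concrete, here is a small sketch of fitting those measurements (the numbers are made-up placeholders; it assumes the usual inverse relationship between pixel width and distance):
-
-import numpy as np
-
-# made-up calibration pairs: (pixel width of detected face, known distance in cm)
-pixel_widths = np.array([220.0, 150.0, 110.0, 80.0])
-distances_cm = np.array([30.0, 45.0, 60.0, 85.0])
-
-# fit distance ~ k / pixel_width + c, i.e. a line in 1/pixel_width
-k, c = np.polyfit(1.0 / pixel_widths, distances_cm, 1)
-
-def estimate_distance(pixel_width):
-    return k / pixel_width + c
-
-print(estimate_distance(130.0))  # rough distance for a new detection
-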
After plotting the two variables on a graph, use a trendline to come up with a predictive model.",0.6730655149877884,False,1,5523 -2018-05-16 17:31:21.103,Split a PDF file into two columns along a certain measurement in Python?,"I have a ton of PDF files that are laid out in two columns. When I use PyPDF2 to extract the text, it reads the entire first column (which are like headers) and the entire second column. This makes splitting on the headers impossible. It's laid out in two columns: -____ __________ -|Col1 Col2 | -|Col1 Col2 | -|Col1 Col2 | -|Col1 Col2 | -____ __________ -I think I need to split the PDF in half along the edge of the column, then read each column left to right. It's 2.26 inches width on an 8x11 PDF. I can also get the coordinates using PyPDF2. -Does anyone have any experience doing this or know how I would do it? -Edit: When I extractText using PyPDF2, the ouput has no spaces: Col1Col1Col1Col1Col2Col2Col2Col2",Using pdfminer.six successfully read from left to right with spaces in between.,0.3869120172231254,False,1,5524 -2018-05-17 16:34:01.880,how to make a copy of an sqlalchemy object (data only),"I get a db record as an sqlalchemy object and I need to consult the original values during some calculation process, so I need the original record till the end. However, the current code modifies the object as it goes and I don't want to refactor it too much at the moment. -How can I make a copy of the original data? The deepcopy seems to create a problem, as expected. I definitely prefer not to copy all the fields manually, as someone will forget to update this code when modifying the db object.","You can have many options here to copy your object.Two of them which I can think of are : - -Using __dict__ it will give the dictionary of the original sqlalchemy object and you can iterate through all the attributes using .keys() function which will give all the attributes. -You can also use inspect module and getmembers() to get all the attributes defined and set the required attributes using setattr() method.",0.0,False,1,5525 -2018-05-18 06:14:11.447,basic serial port contention,"I am using a pi3 which talks to an arduino via serial0 (ttyAMA0) -It all works fine. I can talk to it with minicom, bidirectionally. However, a python based server also wants this port. I notice when minicom is running, the python code can write to serial0 but not read from it. At least minicom reports the python server has sent a message. -Can someone let me know how this serial port handles contention, if at all? I notice running two minicom session to the same serial port wrecks both sessions. Is it possible to have multiple writers and readers if they are coordinated not to act at the same time? Or can there be multiple readers (several terms running cat /dev/serial0) -I have googled around for answers but most hits are about using multiple serial ports or getting a serial port to work at all. -Cheers","Since two minicoms can attempt to use the port and there are collisions minicom must not set an advisory lock on local writes to the serial port. I guess that the first app to read received remote serial message clears it, since serial doesn't buffer. When a local app writes to serial, minicom displays this and it gets sent. I'm going to make this assumed summary - -when a local process puts a message on the serial port everyone can -see it and it gets sent to remote. -when a remote message arrives on -serial, the first local process to get it, gets it. The others -can't see it. 
-for some reason, minicom has privilege over arriving -messages. This is why two minicoms break the message.",0.3869120172231254,False,1,5526 -2018-05-18 14:53:02.983,Effective passing of large data to python 3 functions,"I am coming from a C++ programming background and am wondering if there is a pass by reference equivalent in python. The reason I am asking is that I am passing very large arrays into different functions and want to know how to do it in a way that does not waste time or memory by having copy the array to a new temporary variable each time I pass it. It would also be nice if, like in C++, changes I make to the array would persist outside of the function. -Thanks in advance, -Jared","Python handles function arguments in the same manner as most common languages: Java, JavaScript, C (pointers), C++ (pointers, references). -All objects are allocated on the heap. Variables are always a reference/pointer to the object. The value, which is the pointer, is copied. The object remains on the heap and is not copied.",0.999329299739067,False,1,5527 -2018-05-19 10:36:50.560,How to find symbolic derivative using python without sympy?,"I need to make a program which will differentiate a function, but I have no idea how to do this. I've only made a part which transforms the regular expression(x ^ 2 + 2 for example ) into reverse polish notation. Can anybody help me with creating a program which will a find symbolic derivatives of expression with + * / - ^","Hint: Use a recursive routine. If an operation is unary plus or minus, leave the plus or minus sign alone and continue with the operand. (That means, recursively call the derivative routine on the operand.) If an operation is addition or subtraction, leave the plus or minus sign alone and recursively find the derivative of each operand. If the operation is multiplication, use the product rule. If the operation is division, use the quotient rule. If the operation is exponentiation, use the generalized power rule. (Do you know that rule, for u ^ v? It is not given in most first-year calculus books but is easy to find using logarithmic differentiation.) (Now that you have clarified in a comment that there will be no variable in the exponent, you can use the regular power rule (u^n)' = n * u^(n-1) * u' where n is a constant.) And at the base of the recursion, the derivative of x is 1 and the derivative of a constant is zero. -The result of such an algorithm would be very un-simplified but it would meet your stated requirements. Since this algorithm looks at an operation then looks at the operands, having the expression in Polish notation may be simpler than reverse Polish or ""regular expression."" But you could still do it for the expression in those forms. -If you need more detail, show us more of your work.",1.2,True,1,5528 -2018-05-19 21:46:47.500,how to get the distance of sequence of nodes in pgr_dijkstra pgrouting?,"I have an array of integers(nodes or destinations) i.e array[2,3,4,5,6,8] that need to be visited in the given sequence. -What I want is, to get the shortest distance using pgr_dijkstra. But the pgr_dijkstra finds the shortest path for two points, therefore I need to find the distance of each pair using pgr_dijkstra and adding all distances to get the total distance. -The pairs will be like -2,3 -3,4 -4,5 -5,6 -6,8. -Is there any way to define a function that takes this array and finds the shortest path using pgr_dijkstra. 
-Query is:
-for the 1st pair (2,3)
-SELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',2,3, false);
-for the 2nd pair (3,4)
-SELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',3,4, false);
-for the 3rd pair (4,5)
-SELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',4,5, false);
-NOTE: The array size is not fixed; it can be different.
-Is there any way to automate this in postgres sql, maybe using a loop etc.?
-Please let me know how to do it.
-Thank you.","If you want all-pairs distances then use
-select * from pgr_apspJohnson('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads')",0.0,False,1,5529
-2018-05-21 10:54:03.443,Using aws lambda to render an html page in aws lex chatbot,"I have built a chatbot using AWS Lex and lambda. I have a use case wherein a user enters a question (for example: What is the sale of an item in a particular region?). I want that once this question is asked, an html form/pop-up appears that asks the user to select the value of region and item from dropdown menus, fills the slot of the question with the value selected by the user, and then returns a response. Can someone guide me on how this can be achieved? Thanks.","Lex has something called response cards where you can add all the possible values. These are called prompts. The user can simply select his/her choice and the slot gets filled. Lex response cards work in Facebook and Slack.
-In the case of a custom channel, you will have to custom-develop the UI components.",0.0,False,1,5530
-2018-05-22 07:22:36.627,How to install image library in python 3.6.4 in windows 7?,"I am new to Python and I am using Python 3.6.4. I also use the PyCharm editor to write all my code. Please let me know how I can install the Image library in Windows 7 and whether it would work in PyCharm too.","From pycharm,
-
-go to Settings -> Project Interpreter
-Click on the + button in the top right corner and you will get a pop-up window of
-Available packages. Then search for the pillow, PIL image python packages.
-Then click on Install package to install those packages.",1.2,True,1,5531
-2018-05-23 00:28:20.643,"I have downloaded eclipse and pydev, but I am unsure how to install django","I am attempting to learn how to create a website using python. I have been going off the advice of various websites including stackoverflow. Currently I can run code in eclipse using pydev, but I need to install django. I have no idea how to do this and I don't know who to ask or where to begin. Please help","I would recommend the following:
-
-Install virtualenv
-
-$ pip install virtualenv
-
-Create a new virtual environment
-
-$ virtualenv django-venv
-
-Activate the virtual environment to use it
-
-$ source django-venv/bin/activate
-
-And install django as expected
-
-(django-venv)$ pip install django==1.11.13
-(Replace with the django version as needed)",0.0,False,1,5532
-2018-05-23 14:46:15.693,Proper way of streaming JSON with Django,"I have a webservice which gets user requests and produces (multiple) solution(s) to these requests.
-I want to return a solution as soon as possible, and send the remaining solutions when they are ready.
-In order to do this, I thought about using Django's Http stream response. Unfortunately, I am not sure if this is the most adequate way of doing so, because of the problem I will describe below.
-I have a Django view, which receives a query and answers with a stream response. 
This stream returns the data produced by a generator, which is always a python dictionary.
-The problem is that upon the second return action of the stream, the JSON content breaks.
-If the python dictionary, which serves as a response, is something like {key: val}, after the second yield the returned response is {key: val} {key: val}, which is not valid JSON.
-Any suggestions on how to return multiple JSON objects at different moments in time?","Try encoding each yielded chunk on its own line, for example:
-
-import json
-yield json.dumps({key: val}) + '\n'  # check it: one complete JSON object per line
-
-so the client can parse each line of the stream as its own JSON document.",0.0,False,1,5533
-2018-05-23 15:52:41.077,pycharm won't let me run from desktop,"I have been using pycharm for a while now, and I have to say that I am a real fan of its features. I have one issue though: when I try to run a .py file from either the desktop or command prompt, I am instead prompted to use the run feature in pycharm. I consider this an issue because if I try to create a program for someone who doesn't know how to code, they would probably be scared off by opening pycharm. I don't, however, want to uninstall pycharm because it is so useful when writing code. Does anyone have any ideas for me? By the way, I am using a Dell Inspiron 15 7000 Gaming laptop with the current version of Windows 10 installed.","You can try running the direct path of the file; I'm not sure what you have tried.
-If you wanted to run it as I just described you would do:
-py C:\~AppData\Local\Programs\Python\Python36-32\hello.py
-If you move the file into your current working directory when programming, you should just be able to run py hello.py.",1.2,True,1,5534
-2018-05-23 20:49:52.333,Calling database handler class in a python thread,"I'm programming a bit of server code and the MQTT side of it runs in its own thread using the threading module, which works great with no issues, but now I'm wondering how to proceed.
-I have two MariaDB databases, one of them local and the other remote (there is a good and niche reason for this), and I'm writing a class which handles the databases. This class will start new threads of classes that submit the data to their respective databases. If conditions are true, it tells the data to start a new thread to push data to one database; if they are false, the data will go to the other database. The MQTT thread has an instance of the ""database handler"" class and passes data to it through different calling functions within the class.
-Will this work to allow a thread to concentrate on MQTT tasks while another does the database work? There are other threads as well; I've just never combined databases and threads before, so I'd like an opinion or any information that would help me out from more seasoned programmers.","Writing code that is ""thread safe"" can be tricky. I doubt if the Python connector to MySQL is thread safe; there is very little need for it.
-MySQL is quite happy to have multiple connections to it from clients. But they must be separate connections, not the same connection running in separate threads.
-Very few projects need multi-threaded access to the database. Do you have a particular need? If so, let's hear about it, and we can discuss the 'right' way to do it.
-For now, each of your threads that needs to talk to the database should create its own connection. Generally, such a connection can be created soon after starting the thread (or process) and kept open until close to the end of the thread. 
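-A rough sketch of that pattern (assuming the mysql.connector driver; any DB-API connector works the same way):
-
-import threading
-import mysql.connector
-
-def worker(job):
-    # each thread opens (and later closes) its own connection; never share one across threads
-    conn = mysql.connector.connect(user='app', password='secret', database='mydb')
-    try:
-        cur = conn.cursor()
-        cur.execute('INSERT INTO results (payload) VALUES (%s)', (job,))
-        conn.commit()
-    finally:
-        conn.close()
-
-for job in ('a', 'b', 'c'):
-    threading.Thread(target=worker, args=(job,)).start()
-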
That is, normally you should have only one connection per thread.",0.0,False,1,5535 -2018-05-25 18:54:02.363,python logging multiple calls after each instantiation,"I have multiple modules and they each have their own log. The all write to the log correctly however when a class is instantiated more than once the log will write the same line multiple times depending on the number of times it was created. -If I create the object twice it will log every messages twice, create the object three times it will log every message three times, etc... -I was wondering how I could fix this without having to only create each object only once. -Any help would be appreciated.",I was adding the handler multiple times after each instantiation of a log. I checked if the handler had already been added at the instantiation and that fixed the multiple writes.,0.0,False,1,5536 -2018-05-28 15:00:34.117,using c extension library with gevent,"I use celery for doing snmp requests with easysnmp library which have a C interface. -The problem is lots of time is being wasted on I/O. I know that I should use eventlet or gevent in this kind of situations, but I don't know how to handle patching a third party library when it uses C extensions.","Eventlet and gevent can't monkey-patch C code. -You can offload blocking calls to OS threads with eventlet.tpool.execute(library.io_func)",0.3869120172231254,False,1,5537 -2018-05-29 02:13:44.043,How large data can Python Ray handle?,"Python Ray looks interesting for machine learning applications. However, I wonder how large Python Ray can handle. Is it limited by memory or can it actually handle data that exceeds memory?","It currently works best when the data fits in memory (if you're on a cluster, then that means the aggregate memory of the cluster). If the data exceeds the available memory, then Ray will evict the least recently used objects. If those objects are needed later on, they will be reconstructed by rerunning the tasks that created them.",1.2,True,1,5538 -2018-05-29 18:31:38.537,Discord bot with user specific counter,"I'm trying to make a Discord bot in Python that a user can request a unit every few minutes, and later ask the bot how many units they have. Would creating a google spreadsheet for the bot to write each user's number of units to be a good idea, or is there a better way to do this?","Using a database is the best option. If you're working with a small number of users and requests you could use something even simpler like a text file for ease of use, but I'd recommend a database. -Easy to use database options include sqlite (use the sqlite3 python library) and MongoDB (I use the mongoengine python library for my Slack bot).",0.0,False,1,5539 -2018-05-29 21:28:22.547,How execute python command within virtualenv with Visual Studio Code,"I have created virtual environment named virualenv. I have scrapy project and I am using there some programs installed in my virtualenv. When I run it from terminal in VSC I can see errors even when I set up my virtual environment via Ctrl+Shift+P -> Python: Select Interpreter -> Python 3.5.2(virtualenv). Interpreter works in some way, I can import libs without errors etc, but I am not possible to start my scrapy project from terminal. I have to activate my virtual environment first via /{path_to_virtualenv}/bin/activate. Is there a way, how to automatically activate it? 
Now I am using PyCharm and it is possible there, but VSC looks much better to me.","One way I know how:
-Start cmd
-Start your virtual env
-(helloworld) \path\etc> code .
-It will start Studio Code in this environment. Hope it helps",0.3869120172231254,False,1,5540
-2018-05-30 15:56:33.700,"TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed","I'm new (obviously) to python, but not so new to TensorFlow.
-I've been trying to debug my program using breakpoints, but every time I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show and I get this warning in the console:
-
-WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed.
-
-I'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works?","Probably yes, you may have to wait. In debug mode a deprecated function is being called.
-You can print out the shape explicitly by referencing var.shape in the code as a workaround. I know, not very convenient.",0.0,False,2,5541
-2018-05-30 15:56:33.700,"TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed","I'm new (obviously) to python, but not so new to TensorFlow.
-I've been trying to debug my program using breakpoints, but every time I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show and I get this warning in the console:
-
-WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed.
-
-I'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works?","You can simply stop at the break point, switch to the DEBUG CONSOLE panel, and type var.shape. It's not that convenient, but at least you don't need to write any extra debug code in your code.",0.0,False,2,5541
-2018-05-30 16:38:21.447,Django storages S3 - Store existing file,"I have django 1.11 with the latest django-storages, set up with the S3 backend.
-I am trying to programmatically instantiate an ImageFile, using the AWS image link as a starting point. I cannot figure out how to do this looking at the source / documentation.
-I assume I need to create a file, and give it the path derived from the url without the domain, but I can't find exactly how.
-The final aim of this is to programmatically create wagtail Image objects that point to S3 images (so pass the new ImageFile to the ImageField of the image). I own the S3 bucket the images are stored in.
-Uploading images works correctly, so the system is set up correctly.
-Update
-To clarify, I need to do the reverse of the normal process. Normally a physical image is given to the system, which then creates an ImageFile; the file is then uploaded to S3, and a URL is assigned to the File.url. I have the File.url and need an ImageFile object.","It turns out, in several models that expect files, when using DjangoStorages, all I had to do was, instead of passing a File to the file field, pass the AWS S3 object key (so not a URL, just the object key). 
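-In other words, something like this sketch (the model, title, key and dimensions are illustrative, and the Image import path depends on your Wagtail version):
-
-from wagtail.images.models import Image
-
-img = Image(title='Existing S3 image', width=800, height=600)  # dimensions supplied by hand
-img.file = 'original_images/photo.jpg'  # assign the S3 object key, not a URL
-img.save()
-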
-When model.save() is called, a boto call is made to S3 to verify an object with the provided key is there, and the item is saved.",1.2,True,1,5542
-2018-05-31 22:09:08.750,import sklearn in python,"I installed miniconda for Windows 10 successfully and then I could install numpy, scipy and sklearn successfully, but when I run import sklearn in python IDLE I receive No module named 'sklearn' in the anaconda prompt. It recognized my python version, which was 3.6.5, correctly. I don't know what's wrong; can anyone tell me how I import modules in IDLE?","Why not download the full Anaconda? It will install everything you need to start, which includes the Spyder IDE, RStudio, Jupyter and all the needed modules.
-I have been using Anaconda without any error and I recommend you try it out.",1.2,True,1,5543
-2018-06-01 01:04:30.917,Pycharm Can't install TensorFlow,"I cannot install tensorflow in pycharm on windows 10, though I have tried many different things:
-
-went to settings > project interpreter and tried clicking the green plus button to install it, gave me the error: non-zero exit code (1) and told me to try installing via pip in the command line, which was successful, but I can't figure out how to make Pycharm use it when it's installed there
-tried changing to a Conda environment, which still would not allow me to run tensorflow since when I input into the python command line: pip.main(['install', 'tensorflow']) it gave me another error and told me to update pip
-updated pip then tried step 2 again, but now that I have pip 10.0.1, I get the error 'pip has no attribute main'. I tried reverting pip to 9.0.3 in the command line, but this won't change the version used in pycharm, which makes no sense to me. I reinstalled anaconda, as well as pip, and deleted and made a new project and yet it still says that it is using pip 10.0.1 which makes no sense to me
-
-So in summary, I still can't install tensorflow, and I now have the wrong version of pip being used in Pycharm. I realize that there are many other posts about this issue but I'm pretty sure I've been to all of them and either didn't get an applicable answer or an answer that I understand.","What worked for me is this:
-
-I installed TensorFlow from the command prompt as an administrator using the command pip install tensorflow
-Then I jumped back to my pycharm and clicked the red light bulb pop-up icon; it will have a few options when you click it, just select the one that says install tensorflow. This does not install it from scratch but basically rebuilds and updates your pycharm workspace to note the newly installed tensorflow",0.0,False,1,5544
-2018-06-02 08:27:36.887,How should I move my completed Django Project in a Virtual Environment?,"I started learning django a few days back and started a project; by luck the project turned out well and I'm thinking of deploying it. However, I didn't initiate it in a virtual environment. I have made a virtual environment now and want to move the project to it. I want to know how I can do that. I have created requirements.txt; however, it has included all the irrelevant library names. How can I get rid of them and keep only those that are required for the project?","Django is completely unrelated to the environment you run it on.
-The environment represents which python version you are using (2, 3...) and the libraries installed.
-To answer your question, the only thing you need to do is run your manage.py commands from the python executable in the new virtual environment. 
Of course, install all of the necessary libraries in the new environment if you haven't already done so.
-It might be a problem if you created a python3 environment while the original one was python2, but at that point it's a code portability issue.",1.2,True,1,5545
-2018-06-03 08:14:39.850,Train CNN model with multiple folders and sub-folders,"I am developing a convolutional neural network (CNN) model to predict whether a patient is in category 1, 2, 3 or 4. I use Keras on top of TensorFlow.
-I have data for 64 breast cancer patients, classified into four categories (1=no disease, 2= …., 3=….., 4=progressive disease). In each patient's data, I have 3 sets of MRI scan images taken on different dates, and inside each MRI folder, I have 7 to 8 sub-folders containing MRI images in different planes (such as the coronal plane/sagittal plane etc).
-I learned how to deal with a basic “Cat-Dog-CNN-Classifier”; it was easy, as I put all the cat & dog images into a single folder to train the network. But how do I tackle the problem in my breast cancer patient data? It has multiple folders and sub-folders.
-Please suggest.",Use os.walk to access all the files in sub-directories recursively and append them to the dataset.,-0.1352210990936997,False,1,5546
-2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","I am using script 3.18.1 in Atom 1.32.2
-Navigate to Atom (at top left) > Open Preferences > Open Config folder.
-Now, expand the tree as script > lib > grammars
-Open python.coffee and change 'python' to 'python3' in both places in the command argument",0.9866142981514304,False,4,5547
-2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","I came up with an inelegant solution that may not be universal. Using platformio-ide-terminal, I simply had to call python3.9 instead of python or python3. Not sure if that is exactly what you're looking for.",0.0,False,4,5547
-2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","I would look in the Atom installed plugins in settings; you can get there by pressing command + shift + p, then searching for settings. 
-The only reason I suggest this is because plugins are how I added Swift language support in Atom, through a plugin that manages that.
-Another term for plugins in Atom would be ""community packages"".
-Hope this helps.",0.0,False,4,5547
-2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","Yes, there is. After starting Atom, open the script you wish to run. Then open the command palette and select 'Python: Select interpreter'. A list appears with the available python versions listed. Select the one you want and hit return. Now you can run the script by placing the cursor in the edit window and right-clicking the mouse. A long menu appears and you should choose 'Run python in the terminal window'. This is towards the bottom of the long menu list. The script will run using the interpreter you selected.",0.0,False,4,5547
-2018-06-04 05:13:38.857,Line by line data from Google cloud vision API OCR,"I have scanned PDFs (image based) of bank statements.
-The Google vision API is able to detect the text pretty accurately, but it returns blocks of text and I need line by line text (bank transactions).
-Any idea how to go about it?","In the Google Vision API there is a method fullTextAnnotation which returns the full text as a string with \n marking the end of each line. You can try that.",0.0,False,1,5548
-2018-06-04 20:20:23.930,"XgBoost accuracy results differ on each run, with the same parameters. How can I make them constant?","The 'merror' and 'logloss' results from XGB multiclass classification differ by about 0.01 or 0.02 on each run, with the same parameters. Is this normal?
-I want 'merror' and 'logloss' to be constant when I run XGB with the same parameters so I can evaluate the model precisely (e.g. when I add a new feature).
-Now, if I add a new feature I can't really tell whether it had a positive impact on my model's accuracy or not, because my 'merror' and 'logloss' differ on each run regardless of whether I made any changes to the model or the data fed into it since the last run.
-Should I try to fix this, and if I should, how can I do it?","Managed to solve this. First I set the 'seed' parameter of XgBoost to a fixed value, as Hadus suggested. Then I found out that I had used sklearn's train_test_split function earlier in the notebook without setting the random_state parameter to a fixed value. So I set the random_state parameter to 22 (you can use whichever integer you want) and now I'm getting constant results.",0.0,False,1,5549
-2018-06-04 23:38:16.783,How to keep a python program running constantly,"I made a program that grabs the top three new posts on the r/wallpaper subreddit. It downloads the pictures every 24 hours and adds them to my wallpapers folder. What I'm running into is how to have the program running in the background. The program resumes every time I turn the computer on, but it pauses whenever I close the computer. Is there a way to close the computer without pausing the program? I'm on a mac.","Programs can't run when the computer is powered off. 
However, you can run a computer headlessly (without mouse, keyboard, and monitor) to save resources. Just ensure your program runs over the command line interface.",0.0,False,1,5550 -2018-06-05 04:53:45.747,Pandas - Read/Write to the same csv quickly.. getting permissions error,"I have a script that I am trying to execute every 2 seconds.. to begin it reads a .csv with pd.read_csv. Then executes modifications on the df and finally overwrites the original .csv with to_csv. -I'm running into a PermissionError: [Errno 13] Permission denied: and from my searches I believe it's due to trying to open/write too often to the same file though I could be wrong. - -Any suggestions how to avoid this? -Not sure if relevant but the file is stored in one-drive folder. -It does save on occasion, seemingly randomly. -Increasing the timeout so the script executes slower helps but I want it running fast! - -Thanks","Close the file that you are trying to read and write and then try running your script. -Hope it helps",-0.2012947653214861,False,1,5551 -2018-06-06 11:33:58.087,Optimizing RAM usage when training a learning model,"I have been working on creating and training a Deep Learning model for the first time. I did not have any knowledge about the subject prior to the project and therefor my knowledge is limited even now. -I used to run the model on my own laptop but after implementing a well working OHE and SMOTE I simply couldnt run it on my own device anymore due to MemoryError (8GB of RAM). Therefor I am currently running the model on a 30GB RAM RDP which allows me to do so much more, I thought. -My code seems to have some horribly inefficiencies of which I wonder if they can be solved. One example is that by using pandas.concat my model's RAM usages skyrockets from 3GB to 11GB which seems very extreme, afterwards I drop a few columns making the RAm spike to 19GB but actually returning back to 11GB after the computation is completed (unlike the concat). I also forced myself to stop using the SMOTE for now just because the RAM usage would just go up way too much. -At the end of the code, where the training happens the model breaths its final breath while trying to fit the model. What can I do to optimize this? -I have thought about splitting the code into multiple parts (for exmaple preprocessing and training) but to do so I would need to store massive datasets in a pickle which can only reach 4GB (correct me if I'm wrong). I have also given thought about using pre-trained models but I truely did not understand how this process goes to work and how to use one in Python. -P.S.: I would also like my SMOTE back if possible -Thank you all in advance!","Slightly orthogonal to your actual question, if your high RAM usage is caused by having entire dataset in memory for the training, you could eliminate such memory footprint by reading and storing only one batch at a time: read a batch, train on this batch, read next batch and so on.",0.0,False,1,5552 -2018-06-07 17:31:59.093,ARIMA Forecasting,"I have a time series data which looks something like this -Loan_id Loan_amount Loan_drawn_date - id_001 2000000 2015-7-15 - id_003 100 2014-7-8 - id_009 78650 2012-12-23 - id_990 100 2018-11-12 -I am trying to build a Arima forecasting model on this data which has round about 550 observations. These are the steps i have followed - -Converted the time series data into daily data and replaced NA values with 0. 
the data look something like this -Loan_id Loan_amount Loan_drawn_date -id_001 2000000 2015-7-15 -id_001 0 2015-7-16 -id_001 0 2015-7-17 -id_001 0 2015-7-18 -id_001 0 2015-7-19 -id_001 0 2015-7-20 -.... -id_003 100 2014-7-8 -id_003 0 2014-7-9 -id_003 0 2014-7-10 -id_003 0 2014-7-11 -id_003 0 2014-7-12 -id_003 0 2014-7-13 -.... -id_009 78650 2012-12-23 -id_009 0 2012-12-24 -id_009 0 2012-12-25 -id_009 0 2012-12-26 -id_009 0 2012-12-27 -id_009 0 2012-12-28 -... -id_990 100 2018-11-12 -id_990 0 2018-11-13 -id_990 0 2018-11-14 -id_990 0 2018-11-15 -id_990 0 2018-11-16 -id_990 0 2018-11-17 -id_990 0 2018-11-18 -id_990 0 2018-11-19 -Can Anyone please suggest me how do i proceed ahead with these 0 values now? -Seeing the variance in the loan amount numbers i would take log of the of the loan amount. i am trying to build the ARIMA model for the first time and I have read about all the methods of imputation but there is nothing i can find. Can anyone please tell me how do i proceed ahead in this data","I don't know exactly about your specific domain problem, but these things apply usually in general: - -If the NA values represent 0 values for your domain specific problem, then replace them with 0 and then fit the ARIMA model (this would for example be the case if you are looking at daily sales and on some days you have 0 sales) -If the NA values represent unknown values for your domain specific problem then do not replace them and fit your ARIMA model. (this would be the case, if on a specific day the employee forgot to write down the amount of sales and it could be any number). - -I probably would not use imputation at all. There are methods to fit an ARIMA model on time series that have missing values. Usually these algorithms should probably also implemented somewhere in python. (but I don't know since I am mostly using R)",1.2,True,1,5553 -2018-06-08 11:15:45.900,Randomizing lists with variables in Python 3,"I'm looking for a way to randomize lists in python (which I already know how to do) but to then make sure that two things aren't next to each other. For example, if I were to be seating people and numbering the listing going down by 0, 1, 2, 3, 4, 5 based on tables but 2 people couldn't sit next to each other how would I make the list organized in a way to prohibit the 2 people from sitting next to each other.","As you say that you know how to shuffle a list, the only requirement is that two elements are not next to each other. -A simple way is to: - -shuffle the full list -if the two elements are close, choose a random possible position for the second one -exchange the two elements - -Maximum cost: one shuffle, one random choice, one exchange",1.2,True,1,5554 -2018-06-09 00:49:48.297,how to check the SD card size before mounted and do not require root,"I want to check the SD card size in bash or python. Right now I know df can check it when the SD card is mounted or fdisk -l if root is available. -But I want to know how to check the SD card size without requiring mounting the card to the file system or requiring the root permission? For example, if the SD card is not mounted and I issue df -h /dev/sdc, this will return a wrong size. In python, os.statvfs this function returns the same content as well. I search on stack overflow but did not find a solution yet.","Well, I found the lsblk -l can do the job. 
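-For example, lsblk -b -l /dev/sdc prints the same listing with the sizes in bytes, and adding -d restricts the output to the whole disk instead of the individual partitions (the device name here is illustrative). lsblk reads this information from sysfs, so it needs neither root privileges nor a mounted filesystem.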
It tells the total size of the partitions.",0.0,False,1,5555 -2018-06-09 15:59:07.447,How to write a python program that 'scrapes' the results from a website for all possible combinations chosen from the given drop down menus?,"There is a website that claims to predict the approximate salary of an individual on the basis of the following criteria presented in the form of individual drop-downs - -Age : 5 options -Education : 3 Options -Sex : 3 Options -Work Experience : 4 Options -Nationality: 12 Options - -On clicking the Submit button, the website gives a bunch of text as output on a new page with an estimate of the salary in numerals. -So, there are technically 5*3*3*4*12 = 2160 data points. I want to get that and arrange it in an excel sheet. Then I would run a regression algorithm to guess the function this website has used. This is what I am looking to achieve through this exercise. This is entirely for learning purposes since I'm keen on learning these tools. -But I don't know how to go about it? Any relevant tutorial, documentation, or guide would help! I am programming in python and I'd love to use it to achieve this task! -Thanks!","If you are uncomfortable asking them for the database as roganjosh suggested :) use Selenium. Write a Python script that controls the WebDriver and repeatedly sends requests for all possible combinations. The script is pretty simple, just a nested loop for each type of parameter/drop down. -If you are sure that the values of each type do not depend on each other, check what request is sent to the server. If it is simply URL encoded, like age=...&sex=...&..., then Selenium is not needed. Just generate such URLs for all possible combinations and call the server.",1.2,True,1,5556 -2018-06-09 16:51:02.213,"Rasa-core, dealing with dates","I have a problem with rasa core. Let's suppose that I have a rasa-nlu able to detect time, -e.g. ""let's start tomorrow"" would get the entity time: 2018-06-10:T18:39:155Z -Ok, now I want the next branches, or decisions, to be conditioned by: - -time is in the past -time before one month from now -time is beyond 1 -month - -I do not know how to do that. I do not know how to convert it to a slot able to influence the dialog. My only idea would be to have an action that converts the date to a categorical slot right after detecting time, but I see two problems with that approach: - -one, it would already be too late, meaning that if I do it with a -posterior action it means rasa-core has already decided what -decision to take without using the date -and secondly, I do not know how to save it, because if I have a -stories.md that compares a detected date like in the example with -the current time, maybe at the time of the example it was beyond one -month but now it is in the past, so the reset of that story would be -wrong. - -I am pretty lost and I do not know how to deal with this, thanks a lot!!!","I think you could have a validation in the custom form. -It would perform validation on the time and perform the next action based on the decision about the time. -Your stories will have to be trained to handle the different action paths.",0.0,False,1,5557 -2018-06-10 13:57:31.837,Multi criteria alternative ranking based on mixed data types,"I am building a recommender system which does Multi Criteria based ranking of car alternatives. I just need to do ranking of the alternatives in a meaningful way. I have ways of asking user questions via a form. -Each car will be judged on the following criteria: price, size, electric/non electric, distance etc. 
As you can see it's a mix of various data types, including ordinal, cardinal (count) and quantitative data. -My questions are as follows: - -Which technique should I use for incorporating all the models into a single score which I can rank? I looked at the normalized weighted sum model, but I have a hard time assigning weights to ordinal (ranked) data. I tried using the SMARTER approach for assigning numerical weights to ordinal data but I'm not sure if it is appropriate. Please help! -Once I figure out the best ranking method, what if the best ranked alternative isn't good enough on an absolute scale? How do I check that, so that I can enlarge the alternative set further? - -3. Since the criteria mentioned above (price, etc.) are all in different units, is there a good method to normalize mixed data types belonging to different scales? Does it even make sense to do so, given that the data belongs to many different types? -Any help on these problems will be greatly appreciated! Thank you!","I am happy to see that you are willing to use a multiple criteria decision making tool. You can use the Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), TOPSIS, VIKOR etc. Please refer to the relevant papers. You can also refer to my papers. -Krishnendu Mukherjee",-0.3869120172231254,False,1,5558 -2018-06-11 22:00:14.173,Security of SFTP packages in Python,"There is plenty of info on how to use what seem to be third-party packages that allow you to access your sFTP by inputting your credentials into these packages. -My dilemma is this: How do I know that these third-party packages are not sharing my credentials with developers/etc.? -Thank you in advance for your input.","Thanks everyone for the comments. -To distill it: unless you do a code review yourself or you get the SFTP package from a verified vendor (i.e. packages made by Amazon for AWS), you cannot assume that these packages are ""safe"" and won't post your info to a third-party site.",1.2,True,1,5559 -2018-06-11 22:56:02.750,How to sync 2 streams from separate sources,"Can someone point me in the right direction to where I can sync up a live video and audio stream? -I know it sounds simple but here is my issue: - -We have 2 computers streaming to a single computer across multiple networks (which can be up to hundreds of miles away). -All three computers have their system clocks synchronized using NTP -The Video computer gathers video and streams UDP to the Display computer -The Audio computer gathers audio and also streams to the Display computer - -There is an application which accepts the audio stream. This application does two things (plays the audio over the speakers and sends network delay information to my application). I am not privileged to the method by which they stream the audio. -My application displays the video and performs two other tasks (which I haven't been able to figure out how to do yet). -- I need to be able to determine the network delay on the video stream (ideally, it would be great to have a timestamp on the video stream from the Video computer which is related to that system clock so I can compare that timestamp to my own system clock). -- I also need to delay the video display to allow it to be synced up with the audio. -Everything I have found assumes that either the audio and video are being streamed from the same computer, or that the audio stream is being done by gstreamer so I could use some sync function. I am not privileged to the actual audio stream. 
I am only given the amount of time the audio was delayed getting there (network delay). -So intermittently, I am given a number as the network delay for the audio (example: 250 ms). I need to be able to determine my own network delay for the video (which I don't know how to do yet). Then I need to compare to see if the audio delay is more than the video network delay. Say the video is 100ms ... then I would need to delay the video display by 150ms (which I also don't know how to do). -ANY HELP is appreciated. I am trying to pick up where someone else has left off in this design so it hasn't been easy for me to figure this out and move forward. It is also being done in Python ... which further limits the information I have been able to find. Thanks. -Scott","A typical way to synch audio and video tracks or streams is to have a timestamp for each frame or packet, which is relative to the start of the streams. -This way you know that no matter how long it took to get to you, the correct audio to match with the video frame which is 20001999 (for example) milliseconds from the start is the audio which is also timestamped as 20001999 milliseconds from the start. -Trying to synch audio and video based on an estimate of the network delay will be extremely hard as the delay is very unlikely to be constant, especially on any kind of IP network. -If you really have no timestamp information available, then you may have to investigate more complex approaches such as 'markers' in the stream metadata or even some intelligent analysis of the audio and video streams to synch on an event in the streams themselves.",0.0,False,1,5560 -2018-06-12 08:22:14.127,Python script as service has no access to asoundrc configuration file,"I have a python script that records audio from an I2S MEMS microphone connected to a Raspberry PI 3. -This script runs as expected when started from the terminal. The problem appears when I run it as a service in the background. -From what I have seen, the problem is that the script, as a service, has no access to a software_volume I have configured in asoundrc. The strange thing is that I can see this ""device"" in the list of devices using the get_device_info_by_index() function. -For audio capturing I use the pyaudio library and for making the script a service I have utilized the supervisor utility. -Any ideas what the problem might be and how I can make my script have access to asoundrc when it runs as a service?","The ~/.asoundrc file is looked for in the home directory of the current user (this is what ~ means). -Put it into the home directory of the user as which the service runs, or put the definitions into the global ALSA configuration file /etc/asound.conf.",1.2,True,1,5561 -2018-06-12 14:34:32.823,Odoo 10 mass mailing configure bounces,"I'm using the Odoo 10 mass mailing module to send newsletters. I have configured it but I don't know how to configure bounced emails. It is registering correctly sent emails, received (except that it is registering bounced as received), opened and clicks. -Can anyone please help me? -Regards","I managed to solve this problem. Just configure the 'bounce' system parameter to an email with the same name. -Example: -I created an email bounce-register@example.com. Also remember to configure the alias domain in your general settings to 'example.com' -After configuring your email to register bounces you need to configure an incoming mail server for this email (I configured it as an IMAP so I think that should do, although you can also configure it as a POP). 
That would be it. -Hope this info serves you well",1.2,True,1,5562 -2018-06-14 15:07:58.413,How to predict word using trained skipgram model?,"I'm using Google's Word2vec and I'm wondering how to get the top words that are predicted by a skipgram model that is trained using hierarchical softmax, given an input word? -For instance, when using negative sampling, one can simply multiply an input word's embedding (from the input matrix) with each of the vectors in the output matrix and take the one with the top value. However, in hierarchical softmax, there are multiple output vectors that correspond to each input word, due to the use of the Huffman tree. -How do we compute the likelihood value/probability of an output word given an input word in this case?","I haven't seen any way to do this, and given the way hierarchical-softmax (HS) outputs work, there's no obviously correct way to turn the output nodes' activation levels into a precise per-word likelihood estimation. Note that: - -the predict_output_word() method that (sort-of) simulates a negative-sampling prediction doesn't even try to handle HS mode -during training, neither HS nor negative-sampling modes make exact predictions – they just nudge the outputs to be more like the current training example would require - -To the extent you could calculate all output node activations for a given context, then check each word's unique HS code-point node values for how close they are to ""being predicted"", you could potentially synthesize relative scores for each word – some measure of how far the values are from a ""certain"" output of that word. But whether and how each node's deviation should contribute to that score, and how that score might be indicative of an interpretable likelihood, is unclear. -There could also be issues because of the way HS codes are assigned strictly by word-frequency – so 'neighbor' words sharing mostly-the-same-encoding may be very different semantically. (There were some hints in the original word2vec.c code that it could potentially be beneficial to assign HS-encodings by clustering related words to have similar codings, rather than by strict frequency, but I've seen little practice of that since.) -I would suggest sticking to negative-sampling if interpretable predictions are important. (But also remember, word2vec isn't mainly used for predictions; it just uses the training-attempts-at-prediction to bootstrap a vector-arrangement that turns out to be useful for other tasks.)",0.0,False,1,5563 -2018-06-15 06:29:25.043,ImportError: cannot import name _remove_dead_weakref python 2.7,"I use windows 7 and python 2.7 -When I used py2exe to make an .exe file I got the error: -Traceback (most recent call last): -File ""mainpy"", line 17, in -File ""main.py"", line 17, in -File ""zipextimporter.pyc"", line 82, in load_module -File ""zipextimporter.pyc"", line 82, in load_module -File ""logging_init_.pyc"", line 26, in -File ""zipextimporter.pyc"", line 82, in load_module -File ""weakref.pyc"", line 14, in -ImportError: cannot import name _remove_dead_weakref -The same code could be used to make an .exe file on another computer, so there is nothing wrong with the code in main.py. A minor environmental difference may cause this problem. I used pycharm, python 2.7.10 and py2exe 0.6.9. On the other computer all other configs are the same except it uses sublimetext instead of pycharm. -Could anyone please tell me how to fix that? 
-Another tricky thing is that",It is possible that the library does not exist on the other computer. Please check whether the library exists or not.,0.0,False,1,5564 -2018-06-15 08:06:21.200,finding length of linked list in constant time python,"I'm trying to write a function which finds the length of a linked list in O(1). -I know how to implement it in O(n) but I can't figure out how to do it in constant time... is that even possible?","It's not possible by traversal alone, because you have to pass through the entire linked list at least once, and that takes O(n). -Alternatively, you have to keep a counter variable that is updated whenever elements are inserted into the linked list",0.0,False,1,5565 -2018-06-15 21:13:27.137,"Accessing Hidden Tabs, Web Scraping With Python 3.6","I'm using bs4 and urllib.request in python 3.6 to webscrape. I have to open tabs / be able to toggle an ""aria-expanded"" in button tabs in order to access the div tabs I need. -The button tab when the tab is closed is as follows with <> instead of --: -button id=""0-accordion-tab-0"" type=""button"" class=""accordion-panel-title u-padding-ver-s u-text-left text-l js-accordion-panel-title"" aria-controls=""0-accordion-panel-0"" aria-expanded=""false"" -When opened, the aria-expanded=""true"" and the div tab appears underneath. -Any idea on how to do this? -Help would be super appreciated.","BeautifulSoup is used to parse HTML/XML content. You can't click around on a webpage with it. -I recommend you look through the document to make sure it isn't just moving the content from one place to the other. If the content is loaded through AJAX when the button is clicked then you will have to use something like selenium to trigger the click. -An easier option could be to check what url the content is fetched from when you click the button and make a similar call in your script if possible.",0.0,False,1,5566 -2018-06-16 19:30:32.583,How do I close down a python server built using flask,"When I run this simple code: -from flask import Flask,render_template -app = Flask(__name__) -@app.route('/') -def index(): - return 'this is the homepage' -if __name__ == ""__main__"": - app.run(debug=True, host=""0.0.0.0"",port=8080) -It works fine, but when I close it using ctrl+z in the terminal and try to run it again I get OSError: [Errno 98] Address already in use -So I tried changing the port address and re-running it, which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by the previous program so that it is free for the current one. -Also, what is the apt way to shut down a server and free the port address? -Kindly tell a simple way to do so OR explain the method used fully, because I read solutions to similar problems but didn't understand any of them. -When I run -netstat -tulpn -The output is : -(Not all processes could be identified, non-owned process info - will not be shown, you would have to be root to see it all.) 
-Active Internet connections (only servers) -Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name -tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN - -tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN - -tcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361/rhythmbox -tcp6 0 0 ::1:631 :::* LISTEN - -tcp6 0 0 :::3689 :::* LISTEN 4361/rhythmbox -udp 0 0 0.0.0.0:5353 0.0.0.0:* 3891/chrome -udp 0 0 0.0.0.0:5353 0.0.0.0:* - -udp 0 0 0.0.0.0:39223 0.0.0.0:* - -udp 0 0 127.0.1.1:53 0.0.0.0:* - -udp 0 0 0.0.0.0:68 0.0.0.0:* - -udp 0 0 0.0.0.0:631 0.0.0.0:* - -udp 0 0 0.0.0.0:58140 0.0.0.0:* - -udp6 0 0 :::5353 :::* 3891/chrome -udp6 0 0 :::5353 :::* - -udp6 0 0 :::41938 :::* - -I'm not sure how to interpret it. -the output of ps aux | grep 8080 -is : -shreyash 22402 0.0 0.0 14224 928 pts/2 S+ 01:20 0:00 grep --color=auto 8080 -I don't know how to interpret it. -Which one is the the process name and what is it's id?","It stays alive because you're not closing it. With Ctrl+Z you're removing the execution from current terminal without killing a process. -To stop the execution use Ctrl+C",0.2012947653214861,False,2,5567 -2018-06-16 19:30:32.583,How to I close down a python server built using flask,"When I run this simple code: -from flask import Flask,render_template -app = Flask(__name__) -@app.route('/') -def index(): - return 'this is the homepage' -if __name__ == ""__main__"": - app.run(debug=True, host=""0.0.0.0"",port=8080) -It works fine but when I close it using ctrl+z in the terminal and try to run it again I get OSError: [Errno 98] Address already in use -So I tried changing the port address and re-running it which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by previous program so that it is free for the current one. -Also is what is the apt way to shutdown a server and free the port address. -Kindly tell a simple way to do so OR explain the method used fully because I read solutions to similar problems but didn't understand any of it. -When I run -netstat -tulpn -The output is : -(Not all processes could be identified, non-owned process info - will not be shown, you would have to be root to see it all.) -Active Internet connections (only servers) -Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name -tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN - -tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN - -tcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361/rhythmbox -tcp6 0 0 ::1:631 :::* LISTEN - -tcp6 0 0 :::3689 :::* LISTEN 4361/rhythmbox -udp 0 0 0.0.0.0:5353 0.0.0.0:* 3891/chrome -udp 0 0 0.0.0.0:5353 0.0.0.0:* - -udp 0 0 0.0.0.0:39223 0.0.0.0:* - -udp 0 0 127.0.1.1:53 0.0.0.0:* - -udp 0 0 0.0.0.0:68 0.0.0.0:* - -udp 0 0 0.0.0.0:631 0.0.0.0:* - -udp 0 0 0.0.0.0:58140 0.0.0.0:* - -udp6 0 0 :::5353 :::* 3891/chrome -udp6 0 0 :::5353 :::* - -udp6 0 0 :::41938 :::* - -I'm not sure how to interpret it. -the output of ps aux | grep 8080 -is : -shreyash 22402 0.0 0.0 14224 928 pts/2 S+ 01:20 0:00 grep --color=auto 8080 -I don't know how to interpret it. -Which one is the the process name and what is it's id?","You will have another process listening on port 8080. You can check to see what that is and kill it. You can find processes listening on ports with netstat -tulpn. 
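-For example, to free port 8080 you could run something along these lines, replacing 12345 with whatever PID the listing reports:
-lsof -i :8080
-kill 12345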
Before you do that, check to make sure you don't have another terminal window open with the running instance.",-0.1016881243684853,False,2,5567 -2018-06-18 05:46:38.073,How to print all received POST requests including headers in python,"I am a python newbie and I have a controller that gets POST requests. -I am trying to print the requests it receives to a log file. I am able to print the body, but how can I extract the whole request, including the headers? -I am using request.POST.get() to get the body/data from the request. -Thanks","request.POST should give you the POST body. If it is a GET request, use request.GET; -if the request body is JSON, use request.data",-0.2012947653214861,False,1,5568 -2018-06-18 09:08:56.737,Add conda to my environment variables or path?,"I am having trouble adding conda to my environment variables on windows. I installed anaconda 3, though I didn't install python separately, so neither pip nor pip3 is working in my prompt. I viewed a few posts online but I didn't find anything regarding how to add conda to my environment variables. -I tried to create a PYTHONPATH variable which contained every single folder in Anaconda 3, though it didn't work. -My anaconda prompt isn't working either. :( -so...How do I add conda and pip to my environment variables or path ?","Thanks guys for helping me out. I solved the problem by reinstalling anaconda (several times :[ ), cleaning every log and resetting the path variables via set path= in the windows power shell (since I got some problems with the anaconda reinstall adding the folder to PATH [specifically ""unable to load menus"" or something like that])",0.0,False,1,5569 -2018-06-18 16:13:18.567,"getting ""invalid environment marker"" when trying to install my python project","I'm trying to set up a beta environment on Heroku for my Django-based project, but when I install I am getting: - -error in cryptography setup command: Invalid environment marker: - python_version < '3' - -I've done some googling, and it is suggested that I upgrade setuptools, but I can't figure out how to do that. (Putting setuptools in requirements.txt gives me a different error message.) -Sadly, I'm still on Python 2.7, if that matters.","The problem ended up being the Heroku ""buildpack"" that I was using. I had been using the one from ""thenovices"" for a long time so that I could use numpy, scipy, etc. -Sadly, that buildpack specifies an old version of setuptools and python, and those versions were not understanding some of the new instructions (python_version) in the newer setup files for cryptography. -If you're facing this problem, Heroku's advice is to move to Docker-based Heroku, rather than ""traditional"" Heroku.",1.2,True,1,5570 -2018-06-19 10:47:23.783,how to use the Werkzeug debugger in postman?,"I am building a Flask REST API and I am using Postman to make HTTP POST requests to my API. I want to use the Werkzeug debugger, but Postman won't allow me to put in the debugging PIN and debug the code from Postman. What can I do?","I have never needed any debugger for Postman; it is not the tool for stepping through the long blanket of code behind one endpoint. -It does give you a good option - the console. I have never run into trouble that this simple element couldn't help me with so far.",0.0,False,1,5571 -2018-06-19 13:14:35.270,Importing Numpy into Sublime Text 3,"I'm new to coding and I have been learning it on Jupyter. I have anaconda, Sublime Text 3, and the numpy package installed on my Mac. 
-On Jupyter, we would import numpy by simply typing - import numpy as np -However, this doesn't seem to work in Sublime, as I get the error ModuleNotFoundError: No module named 'numpy' -I would appreciate it if someone could guide me on how to get this working. Thanks!","If you have Anaconda, install Spyder. -If you continue to have this problem, you could check all the libraries installed from Anaconda. -I suggest you install numpy from Anaconda.",0.3869120172231254,False,1,5572 -2018-06-19 18:36:14.277,dataframe from underlying script not updating,"I have a script called ""RiskTemplate.py"" which generates a pandas dataframe consisting of 156 columns. I created two additional columns, which gives me a total count of 158 columns. However, when I run this ""RiskTemplate.py"" script in another script using the below code, the dataframe only pulls the original 156 columns I had before the two additional columns were added. -exec(open(""RiskTemplate.py"").read()) -how can I get the reference script to pull in the revised dataframe from the underlying script ""RiskTemplate.py""? -here are the lines creating the two additional dataframe columns; they work as intended when I run them directly in the ""RiskTemplate.py"" script. The original dataframe is pulling from SQL via df = pd.read_sql(query,connection) -df['LMV % of NAV'] = df['longmv']/df['End of Month NAV']*100 -df['SMV % of NAV'] = df['shortmv']/df['End of Month NAV']*100","I figured it out, sorry for the confusion. I did not save the RiskTemplate that I updated the dataframe in to the same folder that the other reference script was looking at! Newbie!",0.3869120172231254,False,1,5573 -2018-06-20 01:59:58.440,Python regex to match words not having dot,"I want to accept only those strings having the pattern 'wild.flower', 'pink.flower', ...i.e. any word preceding '.flower', but the word should not contain a dot. For example, ""pink.blue.flower"" is unacceptable. Can anyone help how to do this in python using regex?","You are looking for ""^\w+\.flower$"".",0.1618299653758019,False,2,5574 -2018-06-20 01:59:58.440,Python regex to match words not having dot,"I want to accept only those strings having the pattern 'wild.flower', 'pink.flower', ...i.e. any word preceding '.flower', but the word should not contain a dot. For example, ""pink.blue.flower"" is unacceptable. Can anyone help how to do this in python using regex?","Your case of pink.blue.flower is unclear. There are 2 possibilities: - -Match only blue (cut off the preceding dot and what was before). -Reject this case altogether (you want to match a word preceding .flower -only if it is not preceded with a dot). - -In the first case accept other answers. -But if you want the second solution, use: \b(? Settings > Project > Project Interpreter.",1.2,True,1,5580 -2018-06-22 18:41:42.363,Getting IDs from t-SNE plot?,"Quite simple, -If I perform t-SNE in Python for high-dimensional data then I get 2 or 3 coordinates that reflect each new point. -But how do I map these to the original IDs? -One way that I can think of is if the indices are kept fixed the entire time, then I can do: - -Pick a point in t-SNE -See what row it was in t-SNE (e.g. index 7) -Go to original data and pick out row/index 7. - -However, I don't know how to check if this actually works. My data is super high-dimensional and it is very hard to make sense of it with a normal ""sanity check"". -Thanks a lot! -Best,","If you are using sklearn's t-SNE, then your assumption is correct. The ordering of the inputs matches the ordering of the outputs. 
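-A tiny self-contained sketch of that index correspondence (random data, purely illustrative):
-import numpy as np
-from sklearn.manifold import TSNE
-X = np.random.rand(100, 50) # 100 samples in 50 dimensions
-Y = TSNE(n_components=2).fit_transform(X) # Y has shape (100, 2)
-print(Y[7]) # the 2-D embedding of the sample X[7]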
So if you do y=TSNE(n_components=n).fit_transform(x) then y and x will be in the same order, so y[7] will be the embedding of x[7]. You can trust scikit-learn that this will be the case.",0.3869120172231254,False,1,5581 -2018-06-22 19:56:07.460,how to print the first lines of a large XML?,"I have this large XML file on my drive. The file is too large to be opened with sublimetext or other text editors. -It is also too large to be loaded in memory by the regular XML parsers. -Therefore, I don't even know what's inside of it! -Is it just possible to ""print"" a few rows of the XML file (as if it were some sort of text document) so that I have an idea of the nodes/content? -I am surprised not to find an easy solution to that issue. -Thanks!","This is one of the few things I ever do on the command line: the ""more"" command is your friend. Just type - -more big.xml",0.1352210990936997,False,1,5582 -2018-06-25 05:26:40.297,Two python3 interpreters on win10 cause misunderstanding,"I use win10. When I installed Visual Studio 2017, I configured the Python3 environment. Then, after half a year, I installed Anaconda (Python3) in another directory. Now I have two interpreters in different directories. - -Now, no matter which IDE I write the code in, after I save it and double click it in the directory, the Python file is run by the interpreter configured by VS2017. - -How do I know that? I used sys.path to find out. But when I use VS2017 to run the code, it shows no mistake. A concrete example is that I pip install requests in cmd, then I import it in a Python file. Only when I double click it, the Traceback says I don't have this module. In other cases it works well. - -So, how do I change the default python interpreter of cmd.exe?","Just changing the order of the Python interpreters in the PATH is enough. -If you want to use python a lot more, I suggest you use virtual environment tools like pipenv to control your python interpreters and modules.",0.0,False,1,5583 -2018-06-25 07:13:06.080,How can I update Python version when working on JGRASP on mac os?,"When I installed the new version of python, 3.6.5, JGRASP was still using the previous version. How can I use the new version in JGRASP?","By default, jGRASP will use the first ""python"" on the system path. -The new version probably only exists as ""python3"". If that is the case, install jGRASP 2.0.5 Beta if you are using 2.0.4 or a 2.0.5 Alpha. Then, go to ""Settings"" > ""Compiler Settings"" > ""Workspace"", select language ""Python"" if not already selected, select environment ""Python 3 (python 3) - generic"", hit the ""Use"" button, and ""OK"" the dialog.",0.0,False,1,5584 -2018-06-25 13:30:47.293,Passing command line parameters to python script from html page,"I have an HTML page with a text box and a submit button. When somebody enters data in the text box and clicks submit, I have to pass that value to a python script which does some operation and prints output. Can someone let me know how to achieve this? I did some research on stackoverflow/google but nothing conclusive. I have python 2.7, Windows 10 and Apache tomcat. Any help would be greatly appreciated. -Thanks, -Jagadeesh.K","Short answer: You can't just run a python script in the client's browser. It doesn't work that way. 
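-The usual pattern is to post the form to a small server-side app and run the Python there. A minimal sketch of that idea (assuming Flask; the route and form-field names are purely illustrative):
-from flask import Flask, request
-app = Flask(__name__)
-@app.route('/submit', methods=['POST'])
-def submit():
-    value = request.form['textbox'] # the value typed into the HTML text box
-    return 'You sent: ' + value # run your own Python logic here instead
-app.run()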
-If you want to execute some python when the user does something, you will have to run a web app like the other answer suggested.",0.0,False,1,5585 -2018-06-26 09:53:17.980,How to uninstall (mini)conda entirely on Windows,"I was surprised to be unable to find any information anywhere on the web on how to do this properly, but I suppose my surprise ought to be mitigated by the fact that normally this can be done via Microsoft's 'Add or Remove Programs' via the Control Panel. -This option is not available to me at this time, since I had installed Python again elsewhere (without having uninstalled it), then uninstalled that installation the standard way. Now, despite no option for uninstalling conda via the Control Panel, conda persists in my command line. -Now, the goal is to remove every trace of it, to end up in a state as though conda never existed on my machine in the first place before I reinstall it to the necessary location. -I have a bad feeling that if I simply delete the files and then reinstall, this will cause problems. Does anyone have any guidance in how to achieve the above?","Open the folder where you installed miniconda, and then search for uninstall.exe. Run it and it will erase miniconda for you.",0.9950547536867304,False,1,5586 -2018-06-27 02:35:38.367,"protobuf, and tensorflow installation, which version to choose","I already installed python3.5.2 and tensorflow (with python3.5.2). -I want to install protobuf now. However, protobuf supports python3.5.0, 3.5.1, and 3.6.0, and -I wonder which version I should install. -My question is whether I should upgrade python3.5.2 to python3.6, or downgrade it to python3.5.1. -I see some people are trying to downgrade python3.6 to python3.5. -I googled how to change python3.5.2 to python3.5.1, but found no valuable information. I guess this is not a usual option.","So it is a version problem. -One Google post says to change the python version to a more general version. -I am not sure how to change python3.5.2 to python3.5.1, so -I just installed protobuf 3.6. -I hope it works",0.0,False,1,5587 -2018-06-27 06:09:44.330,How to Resume Python Script After System Reboot?,"I'm still new to writing scripts with Python and would really appreciate some guidance. -I'm wondering how to continue executing my Python script from where it left off after a system restart. -The script essentially alternates between restarting and executing a task, for example: restart the system, open an application and execute a task, restart the system, open another application and execute another task, etc... -But the issue is that once the system restarts and logs back in, all applications shut down, including the terminal, so the script stops running and never executes the following task. The program shuts down early without an error, so the logs are not really of much use. Is there any way to reopen the script and continue from where it left off or prevent applications from being closed during a reboot? Any guidance on the issue would be appreciated. -Thanks! -Also, I'm using a Mac running High Sierra for reference.","You could write your current progress to a file just before you reboot and read said file on program start. -About the automatic restart of the script after reboot: you could have the script put itself in the Autostart of your system and, after everything is done, remove itself from it.",0.0,False,1,5588 -2018-06-29 09:49:04.483,Incorrect UTC date in MongoDB Compass,"I package my python (flask) application with docker. 
Within my app I'm generating UTC dates with the datetime library using datetime.utcnow(). -Unfortunately, when I inspect the saved data with MongoDB Compass, the UTC date is offset two hours (to my local time zone). All my docker containers have the time zone set to Etc/UTC. Moreover, the mongoengine connection to MongoDB uses tz_aware=False and tzinfo=None, which prevents on-the-fly date conversions. -Where does the offset come from and how to fix it?","Finally, after trying to prove myself wrong, and a hairless head, I found the cause of and solution to my problem. -We are living in the world of illusion and what you see is not what you get!!! I decided to inspect my data over the mongo shell client -rather than the MongoDB Compass GUI. I figured out that the data that arrived in the database contained the correct UTC date. This narrowed down all my previous -assumptions that there had to be something wrong with my python application and the environment that the application is living in. What was left was MongoDB Compass itself. -After changing the time zone on my machine to a random time zone, and refreshing the collection within MongoDB Compass, the displayed UTC date changed to a date that fits the random time zone. -Be aware that MongoDB Compass displays whatever is saved in the database Date field, offset by your machine's time zone. For example, if you saved a UTC time equivalent to 8:00 am, -and your machine's time zone is Europe/Warsaw, then MongoDB Compass will display 10:00 am.",1.2,True,1,5589 -2018-07-01 07:10:49.220,How to replace a string in all columns using pandas?,"In pandas, how do I replace &amp; with '&' from all columns where &amp; could be in any position in a string? -For example, in column Title if there is a value 'Good &amp; bad', how do I replace it with 'Good & bad'?","Try this, using the .str accessor so the replacement also works when &amp; is only part of the string -df['Title'] = df['Title'].str.replace(""&amp;"", ""&"")",0.0,False,1,5590 -2018-07-01 23:33:29.923,Binance API: how to get the USD as the quote asset,"I'm wondering what the symbol is or if I am even able to get historical price data on BTC, ETH, etc. denominated in United States Dollars. -right now when I'm making a call to the client such as: -Client.get_symbol_info('BTCUSD') -it returns nothing -Does anyone have any idea how to get this info? Thanks!","You can not make trades in Binance with dollars, but instead with Tether (USDT), which is a cryptocurrency that is backed 1-to-1 with the dollar. -To solve that, use BTCUSDT -Change BTCUSD to BTCUSDT",0.9950547536867304,False,1,5591 -2018-07-02 10:22:40.247,How can i scale a thickness of a character in image using python OpenCV?,"I created one task, where I have a white background and black digits. -I need to take the largest-by-thickness digit. I have made my picture bw and recognized all symbols, but I don't understand how to scale thickness. I have tried arcLength(contours), but it gave me the largest by size. I have tried morphological operations, but as I understood, they help to remove noise and other mistakes in the picture, right? And I had a thought to check the distance between neighbour points of contours, but then I thought that it would be hard because of the inexact and unclear form of the symbols (I drew them in Paint). So, those are all the ideas I had. Can you help me in this question by telling me names of themes in Comp. vision and OpenCV that could help me to solve this task? I don't need the exact algorithm of the solution, only themes. And if that's not an OpenCV task, then which is? What library? 
Should I learn some pack of themes and basics before attempting a solution to my task?","One possible solution that I can think of is to alternate erosion and finding contours till you have only one contour left (that should be the thickest). This could work if the difference in thickness is large enough, but I can also foresee many particular cases that can prevent a correct identification, so it depends very much on what your original image looks like.",0.2012947653214861,False,1,5592 -2018-07-02 13:55:01.080,"django inspectdb, how to write multiple table names during inspection","When I first execute this command it creates a model in my models.py, but when I call it a second time for another table with the same models.py file, the second table replaces the model of the first. Can anyone tell me the reason behind that? I am not able to find a perfect solution for it. -$ python manage.py inspectdb tablename > v1/projectname/models.py -When executing this command a second time for another table, it replaces the first table's model. -$ python manage.py inspectdb tablename2 > v1/projectname/models.py","python manage.py inspectdb table1 table2 table3... > app_name/models.py -Apply this command for the inspection of multiple tables of one database in django.",0.0,False,1,5593 -2018-07-02 17:04:29.297,Count Specific Values in Dataframe,"If I had a column in a dataframe, and that column contained two possible categorical variables, how do I count how many times each variable appeared? -So e.g., how do I count how many of the participants in the study were male or female? -I've tried value_counts, groupby, len etc., but seem to be getting it wrong. -Thanks","You could use len([x for x in df[""Sex""] if x == ""Male""]). This iterates through the Sex column of your dataframe and determines whether an element is ""Male"" or not. If it is, it is appended to a list via the list comprehension. The length of that list is the number of Males in your dataframe.",0.0,False,1,5594 -2018-07-03 17:27:42.043,Which newline character is in my CSV?,"We receive a .tar.gz file from a client every day and I am rewriting our import process using SSIS. One of the first steps in my process is to unzip the .tar.gz file, which I achieve via a Python script. -After unzipping we are left with a number of CSV files which I then import into SQL Server. As an aside, I am loading using the CozyRoc DataFlow Task Plus. -Most of my CSV files load without issue but I have five files which fail. By reading the log I can see that the process is reading the Header and First line as though there is no HeaderRow Delimiter (i.e. it is trying to import the column header as ColumnHeader1ColumnValue1). -I took one of these CSVs, copied the top 5 rows into Excel, used Text-To-Columns to delimit the data then saved that as a new CSV file. -This version imported successfully. -That makes me think that somehow the original CSV isn't using {CR}{LF} as the row delimiter, but I don't know how to check. Any suggestions?","Seeing that you have EmEditor, you can use EmEditor to find the eol character in two ways: - -Use View > Character Code Value... at the end of a line to display a dialog box showing information about the character at the current position. -Go to View > Marks and turn on Newline Characters and CR and LF with Different Marks to show the eol while editing. LF is displayed with a down arrow while CRLF is a right angle. 
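-If you prefer to check programmatically, a short Python sketch that counts the raw line endings in one of the failing files (the filename is illustrative):
-with open('suspect.csv', 'rb') as f:
-    raw = f.read()
-crlf = raw.count(b'\r\n') # Windows-style {CR}{LF} endings
-bare_lf = raw.count(b'\n') - crlf # endings with no {CR}
-print('CRLF:', crlf, 'bare LF:', bare_lf)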
Some other things you could try checking for are: file encoding, the wrong type of data for a field, and an inconsistent number of columns.",0.0,False,1,5595 -2018-07-03 18:21:44.653,Calling custom C subroutines in a Python application,"I have two custom-written C routines that I would like to use as a part of a large Python application. I would prefer not to rewrite the C code in pure Python (or Cython, etc.), especially to maintain speed. -What is the cleanest, easiest way that I can use my C code from my Python code? Or, what is the cleanest, easiest way for me to wrap my C code for use in my Python source? -I know ""cleanest"" and ""easiest"" will attract opinions, but I really just need some good options for using custom pre-written code, versus many of the other answers/tutorials which describe how to use full-on C libraries as CPython extensions. -EDIT: -Cython and ctypes have both been suggested. Which is a better choice in my case? Each of the two routines I mentioned originally is very computationally intensive. They are used for image calculations and reconstructions, so my plan is to build a Python application around their use (with other functionality in mind that I already have in Python) with the C code run as needed for processing.","Use cython to wrap your C code. In other words, create a CPython extension using Cython that calls your C code.",1.2,True,1,5596 -2018-07-04 00:03:40.780,kubernetes architecture for microservices application - suggestions,"I have been asked to create a system which has different functionalities. Assume service 1, service 2 and service 3. I need to run these services per hour to do something. -To make the system of those services I need: a database, a web interface for seeing the results of the process, caching, etc. -This is what I have thought about so far: -I need kubernetes to orchestrate my services, which are packaged as docker containers. I will deploy mySql to save my data and I can use Redis for caching. -My services are written as python scripts and Java and need to interact with each other through APIs. -I think I can use AWS EKS for my kubernetes cluster -this is what I need to know: -how to deploy python or Java applications and connect them to each other and also connect them to a database service -I also need to know how to schedule the application to run per hour so I can see the results in the web interface. -Please shoot any ideas or questions you have. -Any help would be appreciated.","For the python/java applications, create docker images for both applications. If these applications run forever to serve traffic, then deploy them as Deployments. If you only need cron-like functionality, deploy them as a Job (or a scheduled CronJob) in kubernetes. -To make the services accessible, create Services with selectors for the applications, so these services can route traffic to the specific applications. 
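-Since the services need to run every hour, the CronJob route fits well; with a reasonably recent kubectl this can be created in one line (the image name is illustrative):
-kubectl create cronjob service1 --image=myrepo/service1:latest --schedule='0 * * * *'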
-Database or cache should be exposed as service endpoints so your applications are environment independent.",0.3869120172231254,False,1,5597 -2018-07-04 12:45:42.993,search_s search_ext_s search_s methods of python-ldap library doesn't return any Success response code,"I am using search_ext_s() method of python-ldap to search results on the basis of filter_query, upon completion of search I get msg_id which I passed in result function like this ldap_object.result(msg_id) this returns tuple like this (100, attributes values) which is correct(I also tried result2, result3, result4 method of LDAP object), But how can I get response code for ldap search request, also if there are no result for given filter_criteria I get empty list whereas in case of exception I get proper message like this -ldap.SERVER_DOWN: {u'info': 'Transport endpoint is not connected', 'errno': 107, 'desc': u""Can't contact LDAP server""} -Can somebody please help me if there exists any attribute which can give result code for successful LDAP search operation. -Thanks, -Radhika","An LDAP server simply may not return any results, even if there was nothing wrong with the search operation sent by the client. With python-ldap you get an empty result list. Most times this is due to access control hiding directory content. In general the LDAP server won't tell you why it did not return results. -(There are some special cases where ldap.INSUFFICIENT_ACCESS is raised but you should expect the behaviour to be different when using different LDAP servers.) -In python-ldap if the search operation did not raise an exception the LDAP result code was ok(0). So your application has to deal with an empty search result in some application-specific way, e.g. by also raising a custom exception handled by upper layers.",1.2,True,1,5598 -2018-07-06 07:29:16.617,How to find dot product of two very large matrices to avoid memory error?,"I am trying to learn ML using Kaggle datasets. In one of the problems (using Logistic regression) inputs and parameters matrices are of size (1110001, 8) & (2122640, 8) respectively. -I am getting memory error while doing it in python. This would be same for any language I guess since it's too big. My question is how do they multiply matrices in real life ML implementations (since it would usually be this big)? -Things bugging me : +Third -Some ppl in SO have suggested to calculate dot product in parts and then combine. But even then matrix would be still too big for RAM (9.42TB? in this case) - -And If I write it to a file wouldn't it be too slow for optimization algorithms to read from file and minimize function? - -Even if I do write it to file how would fmin_bfgs(or any opt. function) read from file? - -Also Kaggle notebook shows only 1GB of storage available. I don't think anyone would allow TBs of storage space. - -In my input matrix many rows have similar values for some columns. Can I use it my advantage to save space? (like sparse matrix for zeros in matrix) -Can anyone point me to any real life sample implementation of such cases. Thanks!","I have tried many things. I will be mentioning these here, if anyone needs them in future: - -I had already cleaned up data like removing duplicates and -irrelevant records depending on given problem etc. -I have stored large matrices which hold mostly 0s as sparse matrix. -I implemented the gradient descent using mini-batch method instead of plain old Batch method (theta.T dot X). 
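-A condensed sketch of those two pieces together: a scipy sparse matrix plus one mini-batch logistic-regression pass (the shapes, density and batch size are illustrative):
-import numpy as np
-from scipy import sparse
-X = sparse.random(1000000, 8, density=0.1, format='csr') # mostly-zero inputs stored compactly
-y = np.random.randint(0, 2, X.shape[0])
-theta = np.zeros(X.shape[1])
-lr, batch = 0.01, 10000
-for start in range(0, X.shape[0], batch):
-    Xb = X[start:start + batch] # CSR rows slice cheaply
-    yb = y[start:start + batch]
-    preds = 1 / (1 + np.exp(-Xb.dot(theta))) # sigmoid on one batch only
-    theta -= lr * Xb.T.dot(preds - yb) / Xb.shape[0] # gradient step on the batch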
- -Now everything is working fine.",1.2,True,1,5599 -2018-07-06 17:58:05.770,Python Unit test debugging in VS code,"I use VS code for my Python projects and we have unit tests written using Python's unittest module. I am facing a weird issue with debugging unit tests. -VSCode Version: May 2018 (1.24) -OS Version: Windows 10 -Let's say I have 20 unit tests in a particular project. -I run the tests by right clicking on a unit test file and click 'Run all unit tests' -After the run is complete, the results bar displays how many tests are passed and how many are failed. (e.g. 15 passed, 5 failed). -And I can run/debug individual test because there is a small link on every unit test function for that. -If I re-run the tests from same file, then the results bar displays the twice number of tests. (e.g. 30 passed, 10 failed) -Also the links against individual test functions disappear. So I cannot run individual tests. -The only way to be able to run/debug individual tests after this is by re-launching the VS code. -Any suggestions on how to fix this?",This was a bug in Python extension for VS code and it is fixed now.,1.2,True,1,5600 -2018-07-08 23:33:21.993,Wondering how I can delete all of my python related files on Mac,"So I was trying to install kivy, which lead me to install pip, and I went down a rabbit hole of altering directories. I am using PyCharm for the record. -I would like to remove everything python related (including all libraries like pip) from my computer, and start fresh with empty directories, so when I download pycharm again, there will be no issues. -I am using a Mac, so if any of you could let me know how to do that on a Mac, it would be greatly appreciated. -Could I just open finder, search python, and delete all of the files (there are tons) or would that be too destructive? -I hope I am making my situation clear enough, please comment any questions to clarify things. -Thanks!","If you are familiar with the Terminal app, you can use command lines to uninstall Python from your Mac. For this, follow these steps: - - -Move Python to Trash. -Open the Terminal app and type the following command line in the window: ~ alexa$ sudo rm -rf /Applications/Python\ 3.6/ -It will require you to enter your administrator password to confirm the deletion. - - -And for the PyCharm: - -Just remove the ~/Library/Caches/PyCharm20 and - ~/Library/Preferences/PyCharm20 directories. - -Or if that won't be enough: - - -Go to Applications > right click PyCharm > move to trash -open a terminal and run the following: find ~/Library/ -iname ""pycharm"" -verify that all of the results are in fact related to PyCharm and not something else important you need to keep. Then, remove them all - using the command: find ~/Library -iname ""pycharm"" -exec rm -r ""{}"" - \;",0.3869120172231254,False,1,5601 -2018-07-10 09:58:49.683,Lost artwork while converting .m4a to .mp3 (Python),"I'm trying to convert m4a audio file with artwork (cover) to mp3. I'm using ffmpeg to convert the audio. -Once it copies, the artwork is lost. I'm quite not sure, how to retain the cover. I found some reference about mutagen library but not sure again how to use to copy the artwork. -Any help would be great. - -ffmpeg -i source/file -acodec libmp3lame -ab 128k destination.mp3 - -Update: -I'm reading the artwork and m4a to be able to attache it back. -I can get the artwork by using - -artwork = audio.tags['covr'] - -Now my problem is how do I save the artwork as image in a new file? 
-I tried the following: - -with open(path/to/write, 'wb') as img: - img.write(artwork) - -This gives me an error - -'list' does not support the buffer interface line - -Any suggestion on how I can save the extracted covr artwork data?","If anyone is having the same issue: -I ended up reading the artwork from the original file and attaching it back to the mp3 - -from mutagen.mp4 import MP4 # mutagen, which the question already references -audio = MP4(m4aFileName) # load the original .m4a -if 'covr' in audio.tags: # checks if it has a cover -cover = audio.tags['covr'][0] # gets the first cover; the [0] is what fixes the 'list' buffer error",0.0,False,1,5602 -2018-07-10 15:26:20.883,Use proxy sentences from cleaned data,"Gensim's Word2Vec model takes as an input a list of lists, with the inner list containing the individual tokens/words of a sentence. As I understand, Word2Vec is used to ""quantify"" the context of words within a text using vectors. -I am currently dealing with a corpus of text that has already been split into individual tokens and no longer contains an obvious sentence format (punctuation has been removed). I was wondering how I should input this into the Word2Vec model? -Say if I simply split the corpus into ""sentences"" of uniform length (10 tokens per sentence for example), would this be a good way of inputting the data into the model? -Essentially, I am wondering how the format of the input sentences (list of lists) affects the output of Word2Vec?","That sounds like a reasonable solution. If you have access to data that is similar to your cleaned data, you could get the average sentence length from that data set. Otherwise, you could find other data in the language you are working with (from wikipedia or another source) and get the average sentence length from there. -Of course your output vectors will not be as reliable as if you had the correct sentence boundaries, but it sounds like word order was preserved so there shouldn't be too much noise from incorrect sentence boundaries.",0.2012947653214861,False,1,5603 -2018-07-10 19:19:58.840,"Python: ContextualVersionConflict: pandas 0.22.0; Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'})","I have this issue: - -ContextualVersionConflict: (pandas 0.22.0 (...), - Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'}) - -I have even tried to uninstall pandas and install scikit-survival + dependencies via anaconda. But it still does not work.... -Anyone with a suggestion on how to fix? -Thanks!",Restarting jupyter notebook fixed it. But I am unsure why this would fix it?,0.9999092042625952,False,1,5604 -2018-07-11 15:01:09.260,How do I calculate the percentage of difference between two images using Python and OpenCV?,"I am trying to write a program in Python (with OpenCV) that compares 2 images, shows the difference between them, and then informs the user of the percentage of difference between the images. I have already made it so it generates a .jpg showing the difference, but I can't figure out how to make it calculate a percentage. Does anyone know how to do this? -Thanks in advance.",You will need to calculate this on your own. You will need the count of different pixels and the size of your original image; then it is simple math: (differentPixelsCount / (mainImage.width * mainImage.height))*100,0.0,False,1,5605 -2018-07-11 21:22:40.900,How to import 'cluster' and 'pylab' into Pycharm,I would like to use Pycharm to write some data science code. Currently I am using Visual Studio Code and running it from the terminal. But I would like to know if I could do it in Pycharm? I could not find some modules such as cluster and pylab in Pycharm? 
",0.0,False,1,5605
-2018-07-11 21:22:40.900,How to import 'cluster' and 'pylab' into Pycharm,"I would like to use PyCharm to write some data science code; I am currently using Visual Studio Code and running it from the terminal. But I would like to know if I could do it in PyCharm? I could not find modules such as cluster and pylab in PyCharm. Does anyone know how I could import these modules into PyCharm?","Go to the Preferences tab -> Project Interpreter; there's a + symbol that allows you to view and download packages. From there you should be able to find cluster and pylab and install them into PyCharm's interpreter. After that you can import them and run them in your scripts. -Alternatively, you may switch the project's interpreter to an interpreter that has the packages installed already. This can be done from that same menu.",0.1352210990936997,False,1,5606
-2018-07-14 17:06:41.383,"Multiple Inputs for CNN: images and parameters, how to merge","I use Keras for a CNN and have two types of inputs: images of objects, and one or two more parameters describing the object (e.g. weight). How can I train my network with both data sources? Concatenation doesn't seem to work because the inputs have different dimensions. My idea was to concatenate the output of the image analysis and the parameters somehow before sending them into the dense layers, but I'm not sure how. Or is it possible to merge two classifications in Keras, i.e. classifying the image and the parameter and then merging the classifications somehow?","You can use a Concatenate layer to merge the two inputs. Make sure you're converting the multiple inputs into the same shape; you can do this by adding an additional Dense layer to either of your inputs, so that you get equal-length end layers. Use those same-shape outputs in the Concatenate layer.
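-For illustration, a minimal functional-API sketch (input shapes and layer sizes are placeholder choices):
-from keras.layers import Input, Conv2D, Flatten, Dense, Concatenate
-from keras.models import Model
-img_in = Input(shape=(64, 64, 3))
-x = Flatten()(Conv2D(16, (3, 3), activation='relu')(img_in))
-x = Dense(8, activation='relu')(x)
-param_in = Input(shape=(2,))  # e.g. weight plus one more scalar
-p = Dense(8, activation='relu')(param_in)
-merged = Concatenate()([x, p])  # both branches end in length-8 vectors
-out = Dense(1, activation='sigmoid')(merged)
-model = Model(inputs=[img_in, param_in], outputs=out)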
",1.2,True,1,5607
-2018-07-14 20:27:44.470,How to analyse the integrity of clustering with no ground truth labels?,"I'm clustering data (trying out multiple algorithms) and trying to evaluate the coherence/integrity of the resulting clusters from each algorithm. I do not have any ground truth labels, which rules out quite a few metrics for analysing the performance. -So far, I've been using the silhouette score as well as the Calinski-Harabasz score (from sklearn). With these scores, however, I can only compare the integrity of the clustering if the labels produced by an algorithm propose there to be at minimum 2 clusters - but some of my algorithms propose that one cluster is the most reliable. -Thus, if you don't have any ground truth labels, how do you assess whether the clustering proposed by an algorithm is better than if all of the data were assigned to just one cluster?","Don't just rely on some heuristic that someone proposed for a very different problem. -Key to clustering is to carefully consider the problem that you are working on. What is the proper way of preparing the data? How to scale (or not scale)? How to measure the similarity of two records in a way that quantifies something meaningful for your domain? -It is not about choosing the right algorithm; your task is to do the math that relates your domain problem to what the algorithm does. Don't treat it as a black box. Choosing the approach based on the evaluation step does not work: it is already too late; you probably made some bad decisions already in the preprocessing, used the wrong distance, scaling, and other parameters.",0.0,False,1,5608
-2018-07-15 06:08:43.183,how to run python code in atom in a terminal?,"I'm a beginner in programming and Atom, so when I try to run my Python code written in Atom in a terminal I don't know how... I tried installing packages like run-in-terminal and platformio-ide-terminal, but I don't know how to use them.","Save your script as a .py file in a directory. -Open the terminal and navigate to the directory containing your script using the cd command. -Run python filename.py if you are using Python 2. -Run python3 filename.py if you are using Python 3.",0.1352210990936997,False,3,5609
-2018-07-15 06:08:43.183,how to run python code in atom in a terminal?,"I'm a beginner in programming and Atom, so when I try to run my Python code written in Atom in a terminal I don't know how... I tried installing packages like run-in-terminal and platformio-ide-terminal, but I don't know how to use them.","""python filename.py"" should run your python code. If you wish to specifically run the program using Python 3.6, then it would be ""python3.6 filename.py"".",0.0,False,3,5609
-2018-07-15 06:08:43.183,how to run python code in atom in a terminal?,"I'm a beginner in programming and Atom, so when I try to run my Python code written in Atom in a terminal I don't know how... I tried installing packages like run-in-terminal and platformio-ide-terminal, but I don't know how to use them.","I would not try to do it using extensions. I would use the platformio-ide-terminal and just do it from the command line. -Just type: python script_name.py and it should run fine. Be sure you are in the same directory as your Python script.",0.1352210990936997,False,3,5609
-2018-07-16 08:18:12.017,How to measure latency in paho-mqtt network,"I'm trying to measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() function to measure how long this trip takes, but it's not clear to me whether this callback comes after the broker receives the message or after the subscriber receives it. -Also, does anyone else have any other suggestion on how to measure latency across the network?","on_message() is called on the subscriber when the message reaches the subscriber. -One way to measure latency is to do a loopback publish in the same client, e.g. - -Set up a client -Subscribe to a given topic -Publish a message to the topic and record the current (high-resolution) timestamp. -When on_message() is called, record the time again - -It is worth pointing out that this sort of test assumes that both publisher/subscriber will be on similar networks (e.g. not cellular vs gigabit fibre). -Also latency will be influenced by the load on the broker and the number of subscribers to a given topic. -The other option is to measure latency passively by monitoring the network, assuming you can see all the traffic from one location, as synchronising clocks across monitoring points is very difficult.
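-For illustration, a minimal loopback sketch with paho-mqtt (broker host and topic are placeholders):
-import time
-import paho.mqtt.client as mqtt
-def on_message(client, userdata, msg):
-    print('round trip: %.6f s' % (time.time() - float(msg.payload)))
-client = mqtt.Client()
-client.on_message = on_message
-client.connect('localhost', 1883)
-client.subscribe('latency/test')
-client.loop_start()
-client.publish('latency/test', str(time.time()))  # payload carries the send timestamp
-time.sleep(2)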
",0.3869120172231254,False,2,5610
-2018-07-16 08:18:12.017,How to measure latency in paho-mqtt network,"I'm trying to measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() function to measure how long this trip takes, but it's not clear to me whether this callback comes after the broker receives the message or after the subscriber receives it. -Also, does anyone else have any other suggestion on how to measure latency across the network?","I was involved in a similar kind of work where I was supposed to measure the latency in wireless sensor networks. There are different ways to measure latencies. -If the subscriber and client are synchronized: - -Fill the payload with the timestamp value at the client and transmit this packet to the subscriber. At the subscriber, take the timestamp again and take the difference between the timestamp at the subscriber and the timestamp value in the packet. -This gives the time taken for the packet to reach the subscriber from the client. - -If the subscriber and client are not synchronized: -In this case measurement of latency is a little tricky. Assuming the network is symmetrical: - -Start the timer at the client before sending the packet to the subscriber. -Configure the subscriber to echo the message back to the client. Stop the timer at the client and take the difference in clock ticks. This time represents the round-trip time; you divide it by two to get the one-direction latency.",0.5457054096481145,False,2,5610
-2018-07-16 13:07:02.643,Brief explanation on tensorflow object detection working mechanism,"I've searched for the working mechanism of tensorflow object detection on Google. I've searched how tensorflow trains models with datasets, but it gives me suggestions about how to implement it rather than how it works. -Can anyone explain how datasets are used to train and fit models?","You can't ""simply"" understand how Tensorflow works without a good background in Artificial Intelligence and Machine Learning. -I suggest you start working on those topics. Tensorflow will get much easier to understand and to handle after that.",0.0,False,1,5611
-2018-07-16 16:38:23.357,fetch data from 3rd party API - Single Responsibility Principle in Django,"What's the most elegant way to fetch data from an external API if I want to be faithful to the Single Responsibility Principle? Where/when exactly should it be done? -Assuming I've got a POST /foo endpoint which, after being called, should somehow trigger a call to the external API and fetch/save some data from it in my local DB. -Should I add the call in the view? Or the model?","I usually add any external API calls into a dedicated services.py module (at the same level as the models.py you're planning to save results into, or in a common app if none of the existing ones are logically related). -Inside that module you can use a class called something like MyExternalService and add all the needed methods for fetching, posting, removing etc., just like you would do with a DRF API view. -Also remember to handle exceptions properly (timeouts, connection errors, error response codes) by defining custom error exception classes.",0.0,False,1,5612
-2018-07-16 18:35:21.250,What is the window length of moving average trend in seasonal.seasonal_decompose package?,"I am using seasonal.seasonal_decompose in Python. -What is the window length of the moving-average trend in the seasonal.seasonal_decompose package? -Based on my results, I think it is 25. But how can I be sure? How can I change this window length?","I found the answer. The ""freq"" argument defines the window of the moving average. I am still not sure how the program chooses the window when we do not declare it.",0.0,False,1,5613
-2018-07-17 10:48:39.477,How to retrain model in graph (.pb)?,"I have a model saved in a graph (.pb file). But now the model is inaccurate and I would like to develop it further. I have pictures of additional data to learn from, but I don't know if that's possible or how to do it. The result must be a modified .pb graph including the new data.","It's a good question. Actually it would be nice if someone could explain how to do this. But in addition I can tell you that it would lead to ""catastrophic forgetting"", so it wouldn't work out. You'd have to train on all your data again. -But anyway, I also would like to know that, especially for SSD, just for test reasons.",0.5457054096481145,False,1,5614
-2018-07-17 10:52:00.203,Django - how to send mail 5 days before event?,"I'm a junior Django dev. Got my first project. Doing quite well, but the senior dev that teaches me went on vacation...
-I have a task in my company to create a function that will remind all people in a specific group 5 days before an event by sending mail. -There is a TournamentModel that contains a tournament_start_date, for instance '10.08.2018'. -A player can join a tournament; when he does, he joins the Django group ""Registered"". -I have to create a function (job?) that will check tournament_start_date, and if the tournament begins in 5 days, this function will send emails to all people in the ""Registered"" group... automatically. -How can I do this? What should I use? How do I run it so that it checks automatically? I have been learning Python/Django for a few months... but I am meeting jobs for the first time ;/ -I will appreciate any help.","You can set this mail-send function up as a cron job. You can schedule it with crontab, or with Celery if your team has used it.
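-For illustration, a rough sketch of the function such a job could call (model and field names follow the question; the sender address is a placeholder):
-from datetime import timedelta
-from django.utils import timezone
-from django.core.mail import send_mail
-from django.contrib.auth.models import Group
-def send_reminders():
-    target = timezone.now().date() + timedelta(days=5)  # tournaments starting in 5 days
-    recipients = [u.email for u in Group.objects.get(name='Registered').user_set.all()]
-    for tournament in TournamentModel.objects.filter(tournament_start_date=target):
-        send_mail('Tournament reminder', 'Your tournament starts in 5 days!', 'noreply@example.com', recipients)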
",0.2012947653214861,False,1,5615
-2018-07-19 12:11:04.380,how to change vs code python extension's language?,"My computer's system language is zh_cn, so the VS Code Python extension set the default language to Chinese. But I want to change the language to English. -I can't find a reference in the docs or on the internet. Does anyone know how to do it? Thanks for the help. -PS: VS Code's locale is already set to English.",When VS Code is open go to the View menu and select Command Palette. Once the command palette is open type display in the box. This should show the option to configure the display language. Open that and you should be in a locale.json file. The variable locale should be set to en for English.,0.0,False,2,5616
-2018-07-19 12:11:04.380,how to change vs code python extension's language?,"My computer's system language is zh_cn, so the VS Code Python extension set the default language to Chinese. But I want to change the language to English. -I can't find a reference in the docs or on the internet. Does anyone know how to do it? Thanks for the help. -PS: VS Code's locale is already set to English.","You probably installed other Python extensions for VS Code. The official Microsoft Python extension will follow the locale setting in the user/workspace settings. -Try uninstalling the other Python extensions; you may see it change to English.",0.0,False,2,5616
-2018-07-19 19:12:26.090,Python3 remove multiple hyphenations from a german string,"I'm currently working on a neural network that evaluates students' answers to exam questions. Therefore, preprocessing the corpora for a Word2Vec network is needed. Hyphenation in German texts is quite common. There are mainly two different types of hyphenation: -1) End of line: -The text reaches the end of the line so the last word is sepa- -rated. -2) Short form of enumeration: -in the case of two ""elements"": -Geistes- und Sozialwissenschaften -more ""elements"": -Wirtschafts-, Geistes- und Sozialwissenschaften -The de-hyphenated form of these enumerations should be: -Geisteswissenschaften und Sozialwissenschaften -Wirtschaftswissenschaften, Geisteswissenschaften und Sozialwissenschaften -I need to remove all hyphenations and put the words back together. I have already found several solutions for the first problem. -But I have absolutely no clue how to get the second part (in the example above, ""wissenschaften"") of the words in the enumeration problem. I don't even know if it is possible at all. -I hope that I have pointed out my problem properly. -So, has anyone an idea how to solve this problem? -Thank you very much in advance!","It's surely possible, as the pattern seems fairly regular. (Something vaguely analogous is sometimes seen in English. For example: The new requirements applied to under-, over-, and average-performing employees.) -The rule seems to be roughly, ""when you see word-fragments with a trailing hyphen, and then an und, look for known words that begin with the word-fragments and end the same as the terminal-word-after-und – and replace the word-fragments with the longer words"". -Not being a German speaker and without language-specific knowledge, it wouldn't be possible to know exactly where breaks are appropriate. That is, in your Geistes- und Sozialwissenschaften example, without language-specific knowledge, it's unclear whether the first fragment should become Geisteszialwissenschaften or Geisteswissenschaften or Geistesenschaften or Geiestesaften or any other shared-suffix with Sozialwissenschaften. But if you've got a dictionary of word-fragments, or word-frequency info from other text that uses the same full-length word(s) without this particular enumeration-hyphenation, that could help choose. -(If there's more than one plausible suffix based on known words, this might even be a possible application of word2vec: the best suffix to choose might well be the one that creates a known word that is closest to the terminal word in word-vector space.) -Since this seems a very German-specific issue, I'd try asking in forums specific to German natural-language processing, or look at libraries with specific German support. (Maybe NLTK or spaCy?) -But also, knowing word2vec, this sort of patch-up may not actually be that important to your end goals. Training without this logical reassembly of the intended full words may still let the fragments achieve useful vectors, and the corresponding full words may achieve useful vectors from other usages. The fragments may wind up close enough to the full compound words that they're ""good enough"" for whatever your next regression/classifier step does. So if this seems a blocker, don't be afraid to just try ignoring it as a non-problem. (Then if you later find an adequate de-hyphenation approach, you can test whether it really helped or not.)",0.3869120172231254,False,1,5617
-2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? -I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) -No matching distribution found for tensorflow-gpu"" error. -I tried installing using pip and Anaconda; neither works for me.
- -Found a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","Not enabling long paths can be the potential problem. To solve that, the steps include: - -Go to the Registry Editor on the Windows laptop. -Find the key ""HKEY_LOCAL_MACHINE""->""SYSTEM""->""CurrentControlSet""-> ""File System""->""LongPathsEnabled"", then double click on that option and change the value from 0 to 1. -Now try to install tensorflow; it will work.",0.0,False,5,5618
-2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? -I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) -No matching distribution found for tensorflow-gpu"" error. -I tried installing using pip and Anaconda; neither works for me. - -Found a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","Actually the easiest way to install tensorflow is: -install Python 3.5 (not 3.6 or 3.7); you can check which version you have by typing ""python"" in the cmd. -When you install it, check in the options that you install pip with it and that you add it to the environment variables. -When it's done, just go into the cmd and type ""pip install tensorflow"". -It will download tensorflow automatically. -If you want to check that it's been installed, type ""python"" in the cmd; then a "">>>"" prompt will appear; then you write ""import tensorflow"" and if there's no error, you've done it!",0.0,False,5,5618
-2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? -I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) -No matching distribution found for tensorflow-gpu"" error. -I tried installing using pip and Anaconda; neither works for me. - -Found a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","As of July 2019, I have installed it on Python 3.7.3 using py -3 -m pip install tensorflow-gpu -(py -3 in my installation selects version 3.7.3). -The installation can also fail if the Python installation is not 64 bit. Install a 64 bit version first.",0.0,False,5,5618
-2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? -I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) -No matching distribution found for tensorflow-gpu"" error. -I tried installing using pip and Anaconda; neither works for me. - -Found a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","You mentioned Anaconda. Do you run your Python through there? -If so, check in Anaconda Navigator --> Environments whether your current environment has tensorflow installed. -If not, install tensorflow and run from that environment. -It should work.",0.0,False,5,5618
-2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? -I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) -No matching distribution found for tensorflow-gpu"" error. -I tried installing using pip and Anaconda; neither works for me. - -Found a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.",Tensorflow or Tensorflow-gpu is supported only for 3.5.X versions of Python. Try installing with any Python 3.5.X version. This should fix your problem.,1.2,True,5,5618
-2018-07-21 10:12:24.710,Chatterbot dynamic training,"I'm using ChatterBot to implement a chat bot. I want ChatterBot to train on the data set dynamically. -Whenever I run my code it should train itself from the beginning, because I require new data for every person who'll chat with my bot.
-So how can I achieve this in Python 3 and on the Windows platform? -What I want to achieve and the problem I'm facing: -I have a Python program which will create a text file student_record.txt; this will be generated from a database and is almost new whenever a different student signs up or logs in. In ChatterBot, I trained the bot by giving it this file name, but it still replies from the previously trained data.","I got the solution for that: I just delete the database at the beginning of the program, so a new database is created during the execution of the program. I used the following command to delete the database: -import os -os.remove(""database_name"") -in my case: -os.remove(""db.sqlite3"") -thank you",0.0,False,1,5619
-2018-07-21 11:51:55.627,How do I use Google Cloud API's via Anaconda Spyder?,"I am pretty new to Python in general and recently started messing with the Google Cloud environment, specifically with the Natural Language API. -One thing that I just can't grasp is how to make use of this environment, running scripts that use this API (or any API) from my local PC, in this case my Anaconda Spyder environment. -I have my project set up, but from there I am not exactly sure which steps are necessary. Do I have to include the authentication somehow in the script inside Spyder? -Some insights would be really helpful.",First install the API by pip install or conda install in the scripts directory of Anaconda and then simply import it into your code and start coding.,-0.2012947653214861,False,1,5620
-2018-07-21 16:20:50.893,How to open/create images in Python without using external modules,"I have a Python script which opens an image file (.png or .ppm) using OpenCV, then loads all the RGB values into a multidimensional Python array (or list), performs some pixel-by-pixel calculations solely on the Python array (OpenCV is not used at all for this stage), then uses the newly created array (containing new RGB values) to write a new image file (.png here) using OpenCV again. Numpy is not used at all in this script. The program works fine. -The question is how to do this without using any external libraries, regardless of whether they are for image processing or not (e.g. OpenCV, Numpy, Scipy, Pillow etc.). To summarize, I need to use bare-bones Python's internal modules to: 1. open an image and read the RGB values, and 2. write a new image from pre-calculated RGB values. I will use PyPy instead of CPython for this purpose, to speed things up. -Note: I use Windows 10, if that matters.","Working with bare-bones .ppm files is trivial: you have three lines of text (P6, ""width height"", 255), and then you have the 3*width*height bytes of RGB. As long as you don't need more complicated variants of the .ppm format, you can write a loader and a saver in 5 lines of code each.
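-For illustration, a minimal sketch of both directions (no validation; a well-formed binary P6 file with a 255 maxval and no header comments is assumed):
-def read_ppm(path):
-    with open(path, 'rb') as f:
-        assert f.readline().strip() == b'P6'
-        w, h = map(int, f.readline().split())
-        f.readline()  # maxval line, assumed to be 255
-        return w, h, f.read(3 * w * h)  # flat RGB bytes
-def write_ppm(path, w, h, data):
-    with open(path, 'wb') as f:
-        f.write(b'P6\n%d %d\n255\n' % (w, h))
-        f.write(data)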
-There is a console in the right side of spyder which looks like Ipython and i can do stuff in there, but i cannot run the code that i run in terminal there. -In iphython or jupyther i used to usee ! at the begining of the command but here when i do it (e.g. !CUDA_VISIBLE_DEVICES=0 python trainval_net.py --dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda) it does not even know the modules and throw errors (e.g. ImportError: No module named numpy`) -Can anyone tell me how should i run my code here in Spyder -Thank you in advance! :)","Okay I figured it out. -I need to go to run->configure per file and in the command line options put the configuration (--dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda)",0.0,False,1,5622 -2018-07-22 04:44:09.413,How to use Midiutil to add multiple notes in one timespot (or how to add chords),I am using Midiutil to recreate a modified Bach contrapuntist melody and I am having difficulty finding a method for creating chords using Midiutil in python. Does anyone know a way to create chords using Midiuitl or if there is a way to create chords.,"A chord consists of multiple notes. -Just add multiple notes with the same timestamp.",1.2,True,1,5623 -2018-07-22 16:11:22.640,"PyCharm, stop the console from clearing every time you run the program","So I have just switched over from Spyder to PyCharm. In Spyder, each time you run the program, the console just gets added to, not cleared. This was very useful because I could look through the console to see how my changes to the code were changing the outputs of the program (obviously the console had a maximum length so stuff would get cleared eventually) -However in PyCharm each time I run the program the console is cleared. Surely there must be a way to change this, but I can't find the setting. Thanks.","In Spyder the output is there because you are running iPython. -In PyCharm you can get the same by pressing on View -> Scientific Mode. -Then every time you run you see a the new output and the history there.",0.3869120172231254,False,1,5624 -2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions: - -Canopy version 2.1.3.3542 (64 bit) -jupyter version 1.0.0-25 -pandas version 0.23.1-1 -python_dateutil version 2.6.0-1 - -I'm not getting this complaint when I run with the Canopy Editor so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","Installed Canopy version 2.1.9. The downloaded version worked without updating any of the packages called out by the Canopy Package Manager. Updated all the packages, but then the ""import pandas as pd"" failed when using the jupyter notebook. Downgraded the notebook package from 4.4.1-5 to 4.4.1-4 which cascaded to 35 additional package downgrades. Retested the import of pandas and the issue seems to have disappeared.",0.0,False,3,5625 -2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). 
",1.2,True,1,5623
-2018-07-22 16:11:22.640,"PyCharm, stop the console from clearing every time you run the program","So I have just switched over from Spyder to PyCharm. In Spyder, each time you run the program, the console just gets added to, not cleared. This was very useful because I could look through the console to see how my changes to the code were changing the outputs of the program (obviously the console had a maximum length, so stuff would get cleared eventually). -However, in PyCharm each time I run the program the console is cleared. Surely there must be a way to change this, but I can't find the setting. Thanks.","In Spyder the output is there because you are running IPython. -In PyCharm you can get the same by pressing View -> Scientific Mode. -Then every time you run, you see the new output and the history there.",0.3869120172231254,False,1,5624
-2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy Python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions: - -Canopy version 2.1.3.3542 (64 bit) -jupyter version 1.0.0-25 -pandas version 0.23.1-1 -python_dateutil version 2.6.0-1 - -I'm not getting this complaint when I run with the Canopy Editor, so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","Installed Canopy version 2.1.9. The downloaded version worked without updating any of the packages called out by the Canopy Package Manager. Updated all the packages, but then the ""import pandas as pd"" failed when using the jupyter notebook. Downgraded the notebook package from 4.4.1-5 to 4.4.1-4, which cascaded to 35 additional package downgrades. Retested the import of pandas and the issue seems to have disappeared.",0.0,False,3,5625
-2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy Python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions: - -Canopy version 2.1.3.3542 (64 bit) -jupyter version 1.0.0-25 -pandas version 0.23.1-1 -python_dateutil version 2.6.0-1 - -I'm not getting this complaint when I run with the Canopy Editor, so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","I had this same issue using the newest pandas version - downgrading to pandas 0.22.0 fixes the problem. -pip install pandas==0.22.0",0.2401167094949473,False,3,5625
-2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy Python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions: - -Canopy version 2.1.3.3542 (64 bit) -jupyter version 1.0.0-25 -pandas version 0.23.1-1 -python_dateutil version 2.6.0-1 - -I'm not getting this complaint when I run with the Canopy Editor, so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","The issue is with the pandas lib; downgrade it using the command below: -pip install pandas==0.22.0",0.0,False,3,5625
-2018-07-23 17:57:30.150,CNN image extraction to predict a continuous value,"I have images of vehicles. I need to predict the price of the vehicle based on image extraction. -What I have learnt is that I can use a CNN to extract the image features, but what I am not able to get is how to predict the prices of vehicles. -I know that I need to train my CNN model before it predicts the price. -I don't know how to train the model with images along with prices. -In the end what I expect is: I will input a vehicle image and I need to get the price of the vehicle. -Can anyone provide the approach for this?","I would use the CNN to predict the model of the car and then, using a list of all the car prices, it's easy enough to get the price; or if you don't care about the car model, just use the prices as labels.",0.0,False,1,5626
-2018-07-24 11:59:30.057,How can I handle Pepper robot shutdown event?,"I need to handle the event when the shutdown process is started (for example with a long press of the robot's chest button or when the battery is critically low). The problem is that I didn't find a way to handle the shutdown/poweroff event. Do you have any idea how this can be done in some convenient way?","Unfortunately this won't be possible, as when you trigger a shutdown naoqi will exit as well and destroy your service. -If you are coding in C++ you could use a destructor, but there is no proper equivalent for Python... -An alternative would be to execute some code when your script exits for whatever reason. For this you can start your script as a service and wait for ""the end"" using qiApplication.run(). This method will simply block until naoqi asks your service to exit.
-Note: in case of shutdown, all services are killed, so you cannot run any command from the robot API (as they are probably not available anymore!)",1.2,True,1,5627
-2018-07-24 16:25:19.637,Python - pandas / openpyxl: Tips on Automating Reports (Moving Away from VBA).,"I currently have macros set up to automate all my reports. However, some of my macros can take up to 5-10 minutes due to the size of my data. -I have been moving away from Excel/VBA to Python/pandas for data analysis and manipulation. I still use Excel for data visualization (i.e., pivot tables). -I would like to know how other people use Python to automate their reports. What do you do? Any tips on how I can start the process? -The majority of my macros do the following actions: - -Import text file(s) -Paste the raw data into a table that's linked to pivot tables / charts. -Refresh the workbook -Save as new","When using Python to automate reports, I fully converted the report from Excel to pandas. I use pd.read_csv or pd.read_excel to read in the data, and export the fully formatted pivot tables into Excel for viewing. Doing the 'paste into a table and refresh' step is not handled well by Python in my experience, and will likely still need macros to handle properly, i.e., export a CSV with the formatted data from Python, then run a short macro to copy and paste. -If you have any more specific questions please ask; I have done a decent bit of this.",0.0,False,1,5628
-2018-07-24 19:41:53.300,How to make RNN time-forecast multiple days using Keras?,"I am currently working on a program that would take the previous 4000 days of stock data about a particular stock and predict the next 90 days of performance. -The way I've elected to do this is with an RNN that makes use of LSTM layers to use the previous 90 days to predict the next day's performance (when training, the previous 90 days are the x-values and the next day is used as the y-value). What I would like to do, however, is use the previous 90-180 days to predict all the values for the next 90 days. However, I am unsure of how to implement this in Keras, as all the examples I have seen only predict the next day and then loop that prediction into the next day's 90-day x-values. -Is there any way to just use the previous 180 days to predict the next 90? Or is the LSTM restricted to only predicting the next day?","I don't have the rep to comment, but I'll say here that I've toyed with a similar task. One could use a sliding-window approach for 90 days (I used 30, since 90 is pushing LSTM limits), then predict the price appreciation for the next month (so your prediction is a single value). @Digital-Thinking is generally right though; you shouldn't expect great performance.",0.0,False,1,5629
-2018-07-24 21:28:16.190,How do you setup script RELOAD/RESTART upon file changes using bash?,"I have a Python Kafka worker run by a bash script in a Docker image inside a docker-compose setup that I need to reload and restart whenever a file in its directory changes, as I edit the code. Does anyone know how to accomplish this for a bash script? -Please don't merge this with the several answers about running a script whenever a file in a directory changes. I've seen other answers regarding this, but I can't find a way to run a script once, and then stop, reload and re-run it if any files change. -Thanks!","My suggestion is to let docker start a wrapper script that simply starts the real script in the background.
-Then in an infinite loop: - -using inotifywait the wrapper waits for the appropriate change -then it kills/stops/reloads... the child process -and starts a new one in the background again.",1.2,True,1,5630
-2018-07-25 09:28:59.487,Creating an exe file for windows using mac for my Kivy app,"I've created a Kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on Mac. I know I can use PyInstaller, but how do I create an exe from Mac? -Please help!","For pyinstaller, they have stated that packaging Windows binaries while running under OS X is NOT supported, and recommended to use Wine for this. - - -Can I package Windows binaries while running under Linux? - -No, this is not supported. Please use Wine for this, PyInstaller runs - fine in Wine. You may also want to have a look at this thread in the - mailing list. In version 1.4 we had built in some support for this, but - it turned out to only half work. It would require some Windows system on - another partition and would only work for pure Python programs. As - soon as you want a decent GUI (gtk, qt, wx), you would need to install - Windows libraries anyhow. So it's much easier to just use Wine. - -Can I package Windows binaries while running under OS X? - -No, this is not supported. Please try Wine for this. - -Can I package OS X binaries while running under Linux? - -This is currently not possible at all. Sorry! If you want to help out, - you are very welcome.",0.2012947653214861,False,2,5631
-2018-07-25 09:28:59.487,Creating an exe file for windows using mac for my Kivy app,"I've created a Kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on Mac. I know I can use PyInstaller, but how do I create an exe from Mac? -Please help!","This is easy with PyInstaller. I've used it recently. -Install pyinstaller: -pip install pyinstaller -Hit the following command in the terminal, where file.py is the path to your main file: - -pyinstaller -w -F file.py - -Your exe will be created inside a folder named dist. -NOTE: verified on Windows, not on Mac.",-0.3869120172231254,False,2,5631
-2018-07-25 12:50:07.533,Python Redis on Heroku reached max clients,"I am writing a server with multiple gunicorn workers and want to let them all have access to a specific variable. I'm using Redis to do this (it's in RAM, so it's fast, right?), but every GET or SET request adds another client. I'm performing maybe ~150 requests per second, so it quickly reaches the 25-connection limit that Heroku has. To access the database, I'm using db = redis.from_url(os.environ.get(""REDIS_URL"")) and then db.set() and db.get(). Is there a way to lower that number? For instance, by using the same connection over and over again for each worker? But how would I do that? The 3 gunicorn workers I have are performing around 50 queries each per second. -If using Redis is a bad idea (which it probably is), it would be great if you could suggest alternatives, but also please include a way to fix my current problem, as most of my code is based on it and I don't have enough time to rewrite the whole thing yet. -Note: The three pieces of code are the only places redis and db are called. I didn't do any configuration or anything. Maybe that info will help.","Most likely, your script creates a new connection for each request. -But each worker should create it once and use it forever.
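-For illustration, a rough sketch (a module-level client created once per worker; redis-py keeps a connection pool behind it):
-import os
-import redis
-db = redis.from_url(os.environ.get('REDIS_URL'))  # created once, at import time
-def handle_request(key, value):
-    db.set(key, value)  # reuses pooled connections instead of opening new ones
-    return db.get(key)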
-Which framework are you using? -It should have some documentation about how to configure Redis for your webapp. -P.S. Redis is a good choice to handle that :)",0.0,False,1,5632
-2018-07-25 18:37:23.550,Async HTTP server with scrapy and mongodb in python,"I am basically trying to start an HTTP server which will respond with content from a website which I can crawl using Scrapy. In order to start crawling the website I need to log in to it, and to do so I need to access a DB with credentials and such. The main issue here is that I need everything to be fully asynchronous, and so far I am struggling to find a combination that will make everything work properly without many sloppy implementations. -I already got Klein + Scrapy working, but when I get to implementing DB accesses I get all messed up in my head. Is there any way to make PyMongo asynchronous with twisted or something (yes, I have seen TxMongo, but the documentation is quite bad and I would like to avoid it. I have also found an implementation with adbapi, but I would like something more similar to PyMongo). -Trying to think things through the other way around, I'm sure aiohttp has many more options to implement async DB accesses and stuff, but then I find myself at an impasse with Scrapy integration. -I have seen things like scrapa, scrapyd and ScrapyRT, but those don't really work for me. Are there any other options? -Finally, if nothing works, I'll just use aiohttp, and instead of Scrapy I'll do the requests to the website to scrape manually and use beautifulsoup or something like that to get the info I need from the response. Any advice on how to proceed down that road? -Thanks for your attention; I'm quite a noob in this area, so I don't know if I'm making complete sense. Regardless, any help will be appreciated :)","Is there any way to make pymongo asynchronous with twisted - -No. pymongo is designed as a synchronous library, and there is no way you can make it asynchronous without basically rewriting it (you could use threads or processes, but that is not what you asked; also, you can run into issues with thread-safeness of the code). - -Trying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff - -It doesn't. aiohttp is an HTTP library - it can do HTTP asynchronously and that is all; it has nothing to help you access databases. You'd have to basically rewrite pymongo on top of it. - -Finally, if nothing works, I'll just use aiohttp and instead of scrapy I'll do the requests to the website to scrape manually and use beautifulsoup or something like that to get the info I need from the response. - -That means lots of work for not using scrapy, and it won't help you with the pymongo issue - you still have to rewrite pymongo! -My suggestion is - learn txmongo! If you can't and want to rewrite it, use twisted.web to write it instead of aiohttp, since then you can continue using scrapy!",1.2,True,1,5633
-2018-07-25 21:15:26.713,Python: How to plot an array of y values for one x value in python,"I am trying to plot an array of temperatures for different locations during one day in Python, and I want it to be graphed in the format (time, temperature_array). I am using matplotlib and currently only know how to graph 1 y value for an x value. -The temperature code looks like this: -Temperatures = [[Temp_array0] [Temp_array1] [Temp_array2]...], where each numbered array corresponds to that time, and the temperature values in the array are at different latitudes and longitudes.","You can simply repeat the x value that is common to all the y values. -Suppose: -[x, x, x, x], [y1, y2, y3, y4]
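-For illustration, a minimal sketch (the times and temperature arrays are made-up values):
-import matplotlib.pyplot as plt
-times = [0, 1, 2]
-Temperatures = [[10, 12, 11], [14, 15, 13], [16, 18, 17]]
-for t, temps in zip(times, Temperatures):
-    plt.plot([t] * len(temps), temps, 'o')  # one x value repeated for all y values
-plt.show()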
",0.0,False,1,5634
-2018-07-26 21:21:24.690,Triggering email out of Spotfire based on conditions,"Does anyone have experience with triggering an email from Spotfire based on a condition? Say a sales figure falls below a certain threshold and an email gets sent to the appropriate distribution list. I want to know how involved it would be to do this. I know that it can be done using an IronPython script, but I'm curious if it can be done based on conditions rather than me hitting ""run""?","we actually have a product that does exactly this called the Spotfire Alerting Tool. it functions off of Automation Services and allows you to configure various thresholds for any metrics in the analysis, and then it can notify users via email or even SMS. -of course there is the possibility of coding this yourself (the tool is simply an extension developed using the Spotfire SDK) but I can't comment on how to code it. -the best way to get this tool is probably to check with your TIBCO sales rep. if you'd like I can try to reach him on your behalf, but I'll need a bit more info from you. please contact me at nmaresco@tibco.com. -I hope this kind of answer is okay on SO. I don't have a way to reach you privately and this is the best answer I know how to give :)",0.3869120172231254,False,1,5635
-2018-07-27 00:49:39.630,"Scipy interp2d function produces z = f(x,y), I would like to solve for x","I am using the 2d interpolation function in scipy to smooth a 2d image. As I understand it, interpolate will return z = f(x,y). What I want to do is find x with known values of y and z. I tried something like this: -f = interp2d(x,y,z) -index = (np.abs(f(:,y) - z)).argmin() -However, the interp2d object does not work that way. Any ideas on how to do this?","I was able to figure this out. yvalue, zvalue, xmin, and xmax are known values. By creating a linspace out of the possible values x can take on, a list can be created with all of the corresponding function values. Then, using argmin(), we can find the closest value in the list to the known z value. -f = interp2d(x, y, z) -xnew = numpy.linspace(xmin, xmax) -fnew = f(xnew, yvalue) -xindex = (numpy.abs(fnew - zvalue)).argmin() -xvalue = xnew[xindex]  # index with brackets, not a call",0.0,False,1,5636
-2018-07-27 04:42:13.823,"How to set an start solution in Gurobi, when only objective function is known?","I have a minimization problem that is modeled to be solved in Gurobi, via Python. -Besides, I can calculate a ""good"" initial solution for the problem separately, which can be used as an upper bound for the problem. -What I want to do is to make Gurobi use this upper bound to enhance its efficiency; I mean, if this upper bound can help Gurobi in its search. The point is that I just have the objective value, but not a complete solution. -Can anybody help me with how to set this upper bound in Gurobi? -Thanks.","I think that if you can calculate a good solution, you can also know some bound for your variables even if you don't have the exact solution.",0.0,False,1,5637
-2018-07-28 15:56:50.503,Many to many relationship SQLite (studio or sql),"Hello. It seems to me that I just don't understand something quite obvious about databases.
-So, we have an author that writes books, and we have the books themselves. One author can write many books, and one book can be written by many authors. -Thus, we have two tables, 'Books' and 'Authors'. -In 'Authors' I have an 'ID' (primary key) and 'Name', for example: -1 - L.Carrol -2 - D.Brown -In 'Books' - 'ID' (pr. key), 'Name' and 'Authors' (and this column is a foreign key to the 'Authors' table ID): -1 - Some_name - 2 (D.Brown) -2 - Another_name - 2,1 (D.Brown, L.Carrol) -And here is my stumbling block, because I don't understand how to provide the possibility to choose several values from the 'Authors' table in one column of the 'Books' table. But this must be so simple, mustn't it? -I've read about many-to-many relationships, and saw many examples with an added extra table to implement that, but I still don't understand how to store multiple values from one table in the other table's column. Please explain the logic; how should I do something like that? I use SQLiteStudio, but plain SQL is appropriate too. Help ^(","You should have a third, intermediate table which will have the following columns: - -id (primary) -author id (from the Authors table) -book id (from the Books table) - -This way you will be able to create a record which maps 1 author to 1 book. So you can have the following records: - -1 ... Author1ID ... Book1ID -2 ... Author1ID ... Book2ID -3 ... Author2ID ... Book2ID - -AuthorXID and BookXID are foreign keys from the corresponding tables. -So Book2 has 2 authors, and Author1 has 2 books. -Also, the separate tables for Books and Authors don't need to contain any info about anything except themselves. -Authors .. 1---Many .. BOOKSFORAUTHORS .. Many---1 .. Books
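-For illustration, a minimal sketch with Python's sqlite3 following the same scheme (the file, table and column names are placeholders):
-import sqlite3
-con = sqlite3.connect('library.db')
-con.execute('CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)')
-con.execute('CREATE TABLE books (id INTEGER PRIMARY KEY, name TEXT)')
-con.execute('CREATE TABLE booksforauthors (id INTEGER PRIMARY KEY, author_id INTEGER REFERENCES authors(id), book_id INTEGER REFERENCES books(id))')
-# all authors of book 2:
-rows = con.execute('SELECT a.name FROM authors a JOIN booksforauthors ba ON ba.author_id = a.id WHERE ba.book_id = 2').fetchall()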
",1.2,True,1,5638
-2018-07-28 23:43:19.713,Screen up time in desktop,"I might sound like a noob while asking this question, but I really want to know how I can get the time from when my screen is on. Not the system uptime, but the screen uptime. I want to use this time in a Python app. So please tell me if there is any way to get that. Thanks in advance. -Edit: I want to get the time from when the display goes black due to no activity, and we move the mouse or press a key and the screen comes up - the display is up, the user is able to read and/or edit a document or play games. -The OS is Windows.","In Mac OS, ioreg might have the information you're looking for: -ioreg -n IODisplayWrangler -r IODisplayWrangler -w 0 | grep IOPowerManagement",0.0,False,1,5639
-2018-07-29 11:14:44.810,Django Queryset find data between date,"I don't know what the title should be; I just got stuck and need to ask. -I have a model called shift, and imagine the db_table like this: - -#table shift -+---------------+---------------+---------------+---------------+------------+------------+ -| start | end | off_start | off_end | time | user_id | -+---------------+---------------+---------------+---------------+------------+------------+ -| 2018-01-01 | 2018-01-05 | 2018-01-06 | 2018-01-07 | 07:00 | 1 | -| 2018-01-08 | 2018-01-14 | 2018-01-15 | Null | 12:00 | 1 | -| 2018-01-16 | 2018-01-20 | 2018-01-21 | 2018-01-22 | 18:00 | 1 | -| 2018-01-23 | 2018-01-27 | 2018-01-28 | 2018-01-31 | 24:00 | 1 | -| .... | .... | .... | .... | .... | .... | -+---------------+---------------+---------------+---------------+------------+------------+ - -If I use a queryset with a filter like start=2018-01-01, the result will be 07:00, but how do I get the result 12:00 if I input 2018-01-10? -thank you!","The question isn't too clear, but maybe you're after something like -start__lte=2018-01-10, end__gte=2018-01-10?",1.2,True,1,5640
-2018-07-31 16:24:41.370,cannot run jupyter notebook from anaconda but able to run it from python,"After installing Anaconda to C:\ I cannot open jupyter notebook, both in the Anaconda Prompt with jupyter notebook and inside the navigator. I just can't make it work. No line appears when I type jupyter notebook inside the prompt, and neither does the navigator work. After that I reinstalled Anaconda; that didn't work either. -But then I tried to reinstall jupyter notebook independently using python -m pip install jupyter and then ran python -m jupyter. It works and connects to localhost:8888. So my question is: how can I make Jupyter work from Anaconda? -Also note that my Anaconda is not in the environment variables (or %PATH%), and I have tried reinstalling pyzmq and it didn't solve the problem. I'm using Python 3.7 and 3.6.5 in Anaconda. -Moreover, Spyder works perfectly.","You need to activate the anaconda environment first. -In the terminal: source activate environment_name (or activate environment_name on Windows), -then jupyter notebook. -If you don't know the env name, do conda env list. -To restore the default python environment: source deactivate",1.2,True,1,5641
-2018-07-31 16:30:46.247,Handling Error for Continuous Features in a Content-Based Filtering Recommender System,"I've got a content-based recommender that works... fine. I was fairly certain it was the right approach to take for this problem (matching established ""users"" with ""items"" that are virtually always new, but contain known features similar to existing items). -As I was researching, I found that virtually all examples of content-based filtering use articles/movies as an example and look exclusively at using encoded tf-idf features from blocks of text. That wasn't exactly what I was dealing with, but most of my features were boolean features, so making a similar vector and looking at cosine distance was not particularly difficult. I also had one continuous feature, which I scaled and included in the vector. As I said, it seemed to work, but was pretty iffy, and I think I know part of the reason why... -The continuous feature that I'm using is a rating (let's call this ""deliciousness""), where, in virtually all cases, a better score would indicate an item more favorable for the user. It's continuous, but it also has a clear ""direction"" (not sure if this is the correct terminology). Error in one direction is not the same as error in another. -I have cases where some users have given high ratings to items with mediocre ""deliciousness"" scores, but logically they would still prefer something that was more delicious. That user's vector might have an average deliciousness of 2.3. My understanding of cosine distance is that in my model, if that user encountered two new items that were exactly the same except that one had a deliciousness of 1.0 and the other had a deliciousness of 4.5, it would actually favor the former because it's a shorter distance between vectors. -How do I modify or incorporate some other kind of distance measure here that takes into account that deliciousness error/distance in one direction is not the same as error/distance in the other direction? -(As a secondary question, how do I decide how best to scale this continuous feature next to my boolean features?)","There are two basic approaches to solve this: -(1) Write your own distance function. The obvious approach is to remove the deliciousness element from each vector, evaluating that difference independently. Use cosine similarity on the rest of the vector. Combine that figure with the taste differential as desired. -(2) Transform your deliciousness data such that the resulting metric is linear. This will allow a ""normal"" distance metric to do its job as expected.
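-For illustration, a rough sketch of option (1) (the 0.5/0.5 weighting and the one-sided penalty are arbitrary choices to tune; the last vector element is assumed to be deliciousness):
-import numpy as np
-def distance(user, item):
-    # cosine distance on everything except the deliciousness element
-    u, v = user[:-1], item[:-1]
-    cos = 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
-    delta = item[-1] - user[-1]  # positive: item is more delicious than the profile
-    taste = max(0.0, -delta)  # only penalize being less delicious
-    return 0.5 * cos + 0.5 * taste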
",1.2,True,1,5642
-2018-07-31 22:16:11.853,How do i get Mac 10.13 to install modules into a 3.x install instead of 2.7,"I'm trying to learn Python practically. -I installed pip via easy_install, and then I wanted to play with some mp3 files, so I installed eyed3 via pip while in the project directory. The issue is that it installed the module into Python 2.7, which comes standard with Mac. I found this out as it keeps telling me so when a script does not run due to missing libraries like libmagic; no matter what I do, it keeps putting any libraries I install into 2.7, so they are not found when running python3. -My question is how do I get my system to pretty much ignore the 2.7 install and use the 3.7 install which I have? -I keep thinking I am doing something wrong, as heaps of tutorials breeze over it and only one has so far mentioned that you get clashes between the versions. I really want to learn Python and would appreciate some help getting past this blockage.","Have you tried pip3 install [module-name]? -Then you should be able to check which modules you've installed using pip3 freeze.",0.0,False,1,5643
-2018-08-01 06:16:42.720,Any way to save format when importing an excel file in Python?,"I'm doing some work on the data in an Excel sheet using Python pandas. When I write and save the data, it seems that pandas only saves and cares about the raw data on the import. Meaning a lot of stuff I really want to keep, such as cell colouring, font size, borders, etc., gets lost. Does anyone know of a way to make pandas save such things? -From what I've read so far it doesn't appear to be possible. The best solution I've found so far is to use xlsxwriter to format the file in my code before exporting. This seems like a very tedious task that will involve a lot of testing to figure out how to achieve the various formats and aesthetic changes I need. I haven't found anything, but would said writer happen to be able to save the sheet format upon import in any way? -Alternatively, what would you suggest I do to solve the problem that I have described?",Separate data from formatting. Have a sheet that contains only the data – that's the one you will be reading/writing to – and another that has formatting and reads the data from the first sheet.,0.0,False,1,5644
-2018-08-01 10:39:07.337,How backing file works in qcow2?,"qcow2 is an image format for qemu and it's good for emulation. -I know how to write data in the qcow2 format, but I don't know how backing files in qcow2 work. -I found no tutorial that explains this. -Can anyone give me tips?","A backing file is an external snapshot for qcow2, and qemu will write COW (copy-on-write) data into the new image. -For example: -You have images A and B, and A is the backing file of B. -When you mount B on /dev/nbd and check its data, you'll find you can see the data of A. -That's because if there's no data in a given range of B, qemu will read the same range of A.
-An important note: if qemu doesn't find A, you won't be able to mount B on /dev/nbd.",0.3869120172231254,False,1,5645
-2018-08-02 13:30:37.763,how to download many pdf files from google at once using python?,I want to download approximately 50 pdf files from the Internet using a Python script. Can Google APIs help me anyhow?,"I am going to assume that you are downloading from Google Drive. You can only download one file at a time; you can't batch-download the actual files themselves. -You could look into some kind of multithreading system and download the files at the same time that way, but you may run into quota issues.",0.0,False,1,5646
-2018-08-03 12:50:35.807,how to use coverage run --source = {dir_name},"I have certain files in a directory named benchmarks and I want to get code coverage by running these source files. -I have tried using the source flag in the following ways, but it doesn't work: -coverage3 run --source=benchmarks -coverage3 run --source=benchmarks/ -On running, I always get 'Nothing to do.' -Thanks","coverage run is like python. If you would run a file with python myprog.py, then you can use coverage run myprog.py.",1.2,True,1,5647
-2018-08-04 18:15:06.493,Discord.py get message embed,"How can I get the embed of a message into a variable with the ID of the message in discord.py? -I get the message with uzenet = await client.get_message(channel, id), but I don't know how to get its embed.","To get the first Embed of your message, as you said, that would be a dict(): -embedFromMessage = uzenet.embeds[0] -To transfer the dict() into a discord.Embed object: -embed = discord.Embed.from_data(embedFromMessage)",1.2,True,1,5648
-2018-08-04 22:59:50.310,How to use Windows credentials to connect remote desktop,"In my Python script I want to connect to a remote server every time. So how can I use my Windows credentials to connect to the server without typing the user ID and password? -By default it should read the user ID/password from the local system and connect to the remote server. -I tried with getuser() and getpass(), but I have to enter the password every time. I don't want to enter the password; it should be taken automatically from the local system password. -Any suggestions?",I am sorry this is not exactly an answer but I have looked on the web and I do not think you can write code to automatically open Remote Desktop without you having to enter the credentials. Can you please edit the question so that I can see the code?,0.0,False,1,5649
-2018-08-07 05:02:59.393,On project task created do not send email,"By default, subscribers get email messages once a new task in a project is created. How can it be tailored so that unless the project has the checkbox ""Send e-mail on new task"" checked, it will not send e-mails on a new task? -I know how to add a custom field to the project.project model. But I don't know the next step. -What action should be overridden so the email is not sent when a new task is created and ""Send e-mail on new task"" is not checked for the project?","I found that if the project has the notifications option ""Visible by following customers"" enabled, then one can configure the subscription for each follower. -To not receive e-mails when a new task is added to the project: unmark the checkbox ""Task opened"" in the ""Edit subscription of User"" form.",1.2,True,1,5650
-2018-08-08 05:01:22.287,How can I pack python into my project?,"I am making a program that will call Python.
I would like to add python to my project so users don't have to download python in order to use it; it will also be better to use the python that my program ships so users don't have to download any dependencies. -My program is going to be written in C++ (but it could be any language) and I guess I have to call the python that is in the same path as my project? -Let's say the user's system already has python and he/she calls 'pip': I want the program to call the pip provided by the python that comes with my program and install packages into the program directory instead of the system's python. -Is that possible? If it is, how can I do it? -Real examples: -There are programs that offer a terminal where you can execute python to do things in the program, like: - -Maya by Autodesk -Nuke by The Foundry -Houdini by Side Effects - -Note: It has to be a cross-platform solution","In order to run python code, the runtime is sufficient. Under Windows, you can use py2exe to pack your program code together with the python runtime and all necessary dependencies. But pip cannot be used, and it makes no sense, as you don't want to develop, but only use the python part. -To distribute the complete python installation, like Panda3D does, you'll have to include it in the chosen installer software.",0.1352210990936997,False,1,5651 -2018-08-08 06:15:54.700,Python app to organise ideas by tags,"Please give me a hint about how best to code a Python application which helps to organise ideas by tags. -Add a new idea: -Input 1: the idea -Input 2: corresponding tags -Search for an idea: -Input 1: one or multiple tags -As far as I understood, it's necessary to create an array with ideas and an array with tags. But how to connect them? For example, idea number 3 corresponds to tags number 1 and 2. So the question is: how to link these two arrays in the simplest and most elegant way?","Have two dictionaries: - -Idea -> Set of Tags -Tag -> Set of Ideas - -When you add a new idea, add it to the first dictionary, and then update all the sets of the tags it uses in the second dictionary. This way you get easy lookup by both tag and idea.",0.0,False,1,5652 -2018-08-08 13:54:35.003,Does ImageDataGenerator add more images to my dataset?,"I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training? Is there a way to know how many images were created and now fed into the model?","Let me try to tell you in the easiest way possible, with the help of an example. -For example: -you have a set of 500 images -you applied the ImageDataGenerator to the dataset with batch_size = 25 -now you run your model for, let's say, 5 epochs with -steps_per_epoch=total_samples/batch_size -so, steps_per_epoch will be equal to 20 -now your model will run on all 500 images (randomly transformed according to the instructions provided to ImageDataGenerator) in each epoch",0.0,False,2,5653 -2018-08-08 13:54:35.003,Does ImageDataGenerator add more images to my dataset?,"I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training?
Is there a way to know how many images were created and now fed into the model?","Also note that these augmented images are not stored in memory; they are generated on the fly while training and lost after training. You can't read those augmented images back again. -Not storing those images is a good idea, because we'd run out of memory very soon storing a huge number of images",0.1160922760327606,False,2,5653 -2018-08-09 09:03:04.903,Can I use JetBrains MPS in a web application?,"I am developing a small web application with Flask. This application needs a DSL, which can express the content of .pdf files. -I have developed a DSL with JetBrains MPS but now I'm not sure how to use it in my web application. Is it possible? Or should I consider switching to another DSL or making my DSL directly in Python?","If you want to use MPS in the web frontend, the simple answer is: no. -Since MPS is a projectional editor, it needs a projection engine so that the user can interact with the program/model. The projection engine of MPS is built in Java for desktop applications. There have been some efforts to put MPS on the web and build a JavaScript/HTML projection engine, but none of the work is complete. So unless you would build something like that, there is no way to use MPS in the frontend. -If your DSL is textual anyway and doesn't leverage the projectional nature of MPS, I would go down the text DSL road with specialised tooling for that, e.g. python as you suggested, or Xtext.",1.2,True,1,5654 -2018-08-09 10:03:54.250,"How to solve error Expected singleton: purchase.order.line (57, 58, 59, 60, 61, 62, 63, 64)","I'm using odoo version 9 and I've created a module to customize the reports of purchase orders. Among the fields that I want displayed in the reports is the supplier reference for the article, but it displays an error when I want to start printing the report: -QWebException: ""Expected singleton: purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64)"" while evaluating -""', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])"" -PS: I didn't change anything in the purchase module. -I don't know how to fix this problem, any ideas? Thanks!","It is because your purchase order has several order lines and you are expecting the order to have only one order line. -o.order_line.product_id.product_tmpl_id.seller_ids -will work only if there is one order line; otherwise you have to loop through each order line. Here o.order_line will have multiple order lines, and you can't read product_id from multiple order lines at once. If you try o.order_line[0].product_id.product_tmpl_id.seller_ids it will work, but you will get only the first order line's details. In order to get all the order line details you need to loop through them.",1.2,True,1,5655 -2018-08-10 09:07:10.620,how to convert tensorflow .meta .data .index to .ckpt file?,"As we know, when using tensorflow to save a checkpoint, we get 3 files, e.g.: -model.ckpt.data-00000-of-00001 -model.ckpt.index -model.ckpt.meta -I checked the faster rcnn repo and found that they have an evaluation.py script which helps evaluate the pre-trained model, but the script only accepts a .ckpt file (as with some pre-trained models they provided). -I have run some finetuning from their pre-trained model. -And then I wonder if there's a way to convert all the .data-00000-of-00001, .index and .meta into one single .ckpt file to run the evaluate.py script on the checkpoint?
-(I also notice that the pre-trained models they provided in the repo have only 1 .ckpt file; how can they do that when the save-checkpoint function generates 3 files?)","These -{ -model.ckpt.data-00000-of-00001 -model.ckpt.index -model.ckpt.meta -} -are the more recent checkpoint format, -while -{model.ckpt} -is a previous checkpoint format. -Converting between them would be conceptually like converting a Nintendo Switch to an NES... or a 3-piece CD bundle to a single ROM cartridge...",0.0,False,1,5656 -2018-08-10 17:54:31.013,How do I write a script that configures an application's settings for me?,"I need help on how to write a script that configures an application's (VLC) settings to my needs without having to do it manually myself. The reason for this is that I will eventually need to start this application on boot with the correct settings already configured. -Steps I need done in the script: -1) I need to open the application. -2) Open the “Open Network Stream…” tab (can be done with Ctrl+N). -3) Type a string of characters “String of characters” -4) Push “Enter” twice on the keyboard. -I’ve checked various websites across the internet and could not find any information regarding this. I am sure it’s possible, but I am new to writing scripts and not too experienced. Are commands like the steps above possible to complete in a script? -Note: Using a Linux based OS (Raspbian). -Thank you.","Make whichever changes you want manually once on an arbitrary system, then make a copy of the application's configuration files (in this case ~/.config/vlc) -When you want to replicate the settings on a different machine, simply copy the settings to the same location.",1.2,True,1,5657 -2018-08-10 22:27:20.097,Python/Tkinter - Making The Background of a Textbox an Image?,"Since Text(Tk(), image=""somepicture.png"") is not an option on text boxes, I was wondering how I could make bg= a .png image. Or any other method of allowing a text box to stay a text box, with an image in the background so it can blend into its surroundings.","You cannot use an image as a background in a text widget. -The best you can do is to create a canvas, place an image on the canvas, and then create a text item on top of that. Text items are editable, but you would have to write a lot of bindings, and you wouldn't have nearly as many features as the text widget. In short, it would be a lot of work.",1.2,True,1,5658 -2018-08-11 06:44:26.587,how to uninstall pyenv(installed by homebrew) on Mac,"I used to install pyenv via homebrew to manage versions of python, but now I want to use anaconda. But I don't know how to uninstall pyenv. Please tell me.","Try removing it using the following command: -brew remove pyenv",0.3869120172231254,False,2,5659 -2018-08-11 06:44:26.587,how to uninstall pyenv(installed by homebrew) on Mac,"I used to install pyenv via homebrew to manage versions of python, but now I want to use anaconda. But I don't know how to uninstall pyenv. Please tell me.","None of these worked for me (installed via brew) under macOS Catalina; there was a warning about a missing file under .pyenv. -After I removed the bash_profile lines and also ran rm -rf ~/.pyenv, -I just installed the macOS version of python from python.org and it seems OK. -That seems to have got my IDLE working and ...",0.3869120172231254,False,2,5659 -2018-08-11 08:48:32.293,How to install pandas for sublimetext?,"I cannot find the way to install pandas for sublimetext. Would you happen to know how?
-There is something called a pandas theme in Package Control, but that was not the one I needed; I need pandas for python for sublimetext.","You can install this awesome theme through Package Control. - -Press cmd/ctrl + shift + p to open the command palette. -Type “install package” and press enter. Then search for “Panda Syntax Sublime” - -Manual installation - -Download the latest release, extract and rename the directory to “Panda Syntax”. -Move the directory inside your sublime Packages directory. (Preferences > Browse packages…) - -Activate the theme -Open your preferences (Preferences > Settings - User) and add this line: -""color_scheme"": ""Packages/Panda Syntax Sublime/Panda/panda-syntax.tmTheme"" -NOTE: Restart Sublime Text after activating the theme.",-0.2012947653214861,False,2,5660 -2018-08-11 08:48:32.293,How to install pandas for sublimetext?,"I cannot find the way to install pandas for sublimetext. Would you happen to know how? -There is something called a pandas theme in Package Control, but that was not the one I needed; I need pandas for python for sublimetext.","For me, ""pip install pandas"" was not working, so I used pip3 install pandas, which worked nicely. -I would advise using either pip install pandas or pip3 install pandas for sublime text.",0.0,False,2,5660 -2018-08-11 14:25:40.620,Can I get a list of all urls on my site from the Google Analytics API?,"I have a site www.domain.com and wanted to get all of the urls from my entire website and how many times they have been clicked on, from the Google Analytics API. -I am especially interested in some of my external links (the ones that don't have www.mydomain.com). I will then match this against all of the links on my site (I somehow need to get these from somewhere, so I may scrape my own site). -I am using Python and wanted to do this programmatically. Does anyone know how to do this?","I have a site www.domain.com and wanted to get all of the urls from my - entire website and how many times they have been clicked on - -I guess you need the dimension Page and the metric Pageviews. - -I am especially interested in some of my external links - -You can get a list of external links if you track them as events. -Try using a crawler, for example Screaming Frog. It can extract internal and external links, and free use covers up to 500 pages.",1.2,True,1,5661 -2018-08-12 10:05:41.443,Data extraction from wrf output file,"I have a wrf output netcdf file. The file has the variables temp and prec. The dimension keys are time, south-north and west-east. So how do I select different lat/long values in a region? The problem is that south-north and west-east are not variables. I have to find the index values of four lat/long values.","1) Change your Registry files (I think it is Registry.EM_COMMON) so that you print latitude and longitude in your wrfout_d01_time.nc files. -2) Go to your WRFV3 map. -3) Clean, configure and recompile. -4) Run your model again the way you are used to.",0.0,False,1,5662 -2018-08-12 19:39:13.970,Cosmic ray removal in spectra,"Python developers, -I am working on spectroscopy at a university. My experimental 1-D data sometimes shows ""cosmic rays"": 3-pixel, ultra-high-intensity spikes, which are not what I want to analyze. So I want to remove this kind of weird peak. -Does anybody know how to fix this issue in Python 3?
-Thanks in advance!!","The answer depends a on what your data looks like: If you have access to two-dimensional CCD readouts that the one-dimensional spectra were created from, then you can use the lacosmic module to get rid of the cosmic rays there. If you have only one-dimensional spectra, but multiple spectra from the same source, then a quick ad-hoc fix is to make a rough normalisation of the spectra and remove those pixels that are several times brighter than the corresponding pixels in the other spectra. If you have only one one-dimensional spectrum from each source, then a less reliable option is to remove all pixels that are much brighter than their neighbours. (Depending on the shape of your cosmics, you may even want to remove the nearest 5 pixels or something, to catch the wings of the cosmic ray peak as well).",0.0,False,1,5663 -2018-08-13 21:59:31.640,PyCharm running Python file always opens a new console,"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. -I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. -I would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","One console is one instance of Python being run on your system. If you want to run different variations of code within the same Python kernel, you can highlight the code you want to run and then choose the run option (Alt+Shift+F10 default).",0.0,False,3,5664 -2018-08-13 21:59:31.640,PyCharm running Python file always opens a new console,"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. -I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. -I would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","You have an option to Rerun the program. -Simply open and navigate to currently running app with: - -Alt+4 (Windows) -⌘+4 (Mac) - -And then rerun it with: - -Ctrl+R (Windows) -⌘+R (Mac) - -Another option: -Show actions popup: - -Ctrl+Shift+A (Windows) -⇧+⌘+A (Mac) - -And type Rerun ..., IDE then hint you with desired action, and call it.",0.0,False,3,5664 -2018-08-13 21:59:31.640,PyCharm running Python file always opens a new console,"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. -I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. -I would prefer to just have one single Python console and run an entire file within that single console. 
Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","To allow only one instance to run, go to ""Run"" in the top bar, then ""Edit Configurations..."". Finally, check ""Single instance only"" at the right side. This will run only one instance and restart every time you run.",0.0679224682270276,False,3,5664 -2018-08-14 03:28:58.627,What is Killed:9 and how to fix in macOS Terminal?,"I have a simple Python code for a machine learning project. I have a relatively big database of spontaneous speech. I started to train my speech model. Since it's a huge database I let it work overnight. In the morning I woke up and saw a mysterious -Killed: 9 -line in my Terminal. Nothing else. There is no other error message or something to work with. The code run well for about 6 hours which is 75% of the whole process so I really don't understand whats went wrong. -What is Killed:9 and how to fix it? It's very frustrating to lose hours of computing time... -I'm on macOS Mojave beta if it's matter. Thank you in advance!","Try to change the node version. -In my case, that helps.",-0.2012947653214861,False,1,5665 -2018-08-15 17:19:50.610,Identifying parameters in HTTP request,"I am fairly proficient in Python and have started exploring the requests library to formulate simple HTTP requests. I have also taken a look at Sessions objects that allow me to login to a website and -using the session key- continue to interact with the website through my account. -Here comes my problem: I am trying to build a simple API in Python to perform certain actions that I would be able to do via the website. However, I do not know how certain HTTP requests need to look like in order to implement them via the requests library. -In general, when I know how to perform a task via the website, how can I identify: - -the type of HTTP request (GET or POST will suffice in my case) -the URL, i.e where the resource is located on the server -the body parameters that I need to specify for the request to be successful","This has nothing to do with python, but you can use a network proxy to examine your requests. - -Download a network proxy like Burpsuite -Setup your browser to route all traffic through Burpsuite (default is localhost:8080) -Deactivate packet interception (in the Proxy tab) -Browse to your target website normally -Examine the request history in Burpsuite. You will find every information you need",1.2,True,1,5666 -2018-08-16 03:16:46.443,Why there is binary type after writing to hive table,"I read the data from oracle database to panda dataframe, then, there are some columns with type 'object', then I write the dataframe to hive table, these 'object' types are converted to 'binary' type, does any one know how to solve the problem?","When you read data from oracle to dataframe it's created columns with object datatypes. -You can ask pandas dataframe try to infer better datatypes (before saving to Hive) if it can: -dataframe.infer_objects()",0.0,False,1,5667 -2018-08-16 04:22:51.340,What is the use of Jupyter Notebook cluster,"Can you tell me what is the use of jupyter cluster. I created jupyter cluster,and established its connection.But still I'm confused,how to use this cluster effectively? -Thank you","With Jupyter Notebook cluster, you can run notebook on the local machine and connect to the notebook on the cluster by setting the appropriate port number. Example code: - -Go to Server using ssh username@ip_address to server. -Set up the port number for running notebook. 
On the remote terminal run jupyter notebook --no-browser --port=7800 -On your local terminal run ssh -N -f -L localhost:8001:localhost:7800 username@ip_address_of_server -Open a web browser on the local machine and go to http://localhost:8001/",1.2,True,1,5668 -2018-08-16 12:03:34.353,How to decompose affine matrix?,"I have a series of points in two 3D systems. With them, I use np.linalg.lstsq to calculate the affine transformation matrix (4x4) between both. However, due to my project, I have to ""disable"" the shear in the transform. Is there a way to decompose the matrix into the base transformations? I have found out how to do so for Translation and Scaling but I don't know how to separate Rotation and Shear. -If not, is there a way to calculate a transformation matrix from the points that doesn't include shear? -I can only use numpy or tensorflow to solve this problem btw.","I'm not sure I understand what you're asking. -Anyway, if you have two sets of 3D points P and Q, you can use the Kabsch algorithm to find a rotation matrix R and a translation vector T such that the sum of square distances between (RP+T) and Q is minimized. -You can of course combine R and T into a 4x4 matrix (of rotation and translation only, without shear or scale).",1.2,True,1,5669 -2018-08-16 13:00:32.667,Jupyter notebook kernel does not want to interrupt,"I was running a cell in a Jupyter Notebook for a while and decided to interrupt. However, it still continues to run and I don't know how to proceed to have the thing interrupted... -Thanks for help","Sometimes this happens when you are on a GPU-accelerated machine, where the kernel is waiting for some GPU operation to be finished. I noticed this even on AWS instances. -The best thing you can do is just wait. In most cases it will recover and finish at some point. If it does not, at least it will tell you the kernel died after some minutes and you don't have to copy-paste your notebook to back up your work. In rare cases, you have to kill your python process manually.",1.2,True,1,5670 -2018-08-17 02:02:19.417,find token between two delimiters - discord emotes,"I am trying to recognise discord emotes. -They are always between two : and don't contain spaces, e.g. -:smile: -I know how to split strings at delimiters, but how do I only split tokens that are within exactly two : and contain no space? -Thanks in advance!","Thanks to @G_M I found the following solution: - - regex = re.compile(r':[A-Za-z0-9]+:') - result = regex.findall(message.content) - -This will give me a list of all the emotes within a message, independent of where they are within the message.",1.2,True,1,5671 -2018-08-17 14:49:24.567,Post file from one server to another,"I have an Apache server A set up that currently hosts a webpage with a bar chart (using Chart.js). This data is currently pulled from a local SQLite database every couple of seconds, and the web chart is updated. -I now want to use a separate server B on a Raspberry Pi to send data to server A to be used for the chart, rather than using the database on server A. -So one server sends a file to another server, which somehow realises this, accepts it and processes it. -The data can either be sent and placed into the current SQLite database, or bypass the database and have the chart update directly from the Pi's sent information. -I have come across HTTP POST requests, but I'm not sure if that's what I need or quite how to implement it.
-I have managed to get the Pi to simply host a json file (viewable from the external ip address) and pull the data from that with a simple requests.get('ip_address/json_file') in Python, but this doesn't seem like the most robust or secure solution. -Any help with what I should be using would be much appreciated, thanks!","Maybe I didn't quite understand your request but this is the solution I imagined: - -You create a Frontend with WebSocket support that connects to Server A -Server B (the one running on the raspberry) sends a POST request with the JSON to Server A -Server A accepts the JSON and sends it to all clients connected with the WebSocket protocol - -Server B ----> Server A <----> Frontend -This way you do not expose your Raspberry directly and every request made by the Frontend goes only to Server A. -To provide a better user experience you could also create a GET endpoint on Server A to retrieve the latest received JSON, so that when the user loads the Frontend for the first time it calls that endpoint, and even if the Raspberry has yet to update the data, at least the user can get an insight into the latest available data.",0.0,False,1,5672 -2018-08-17 15:42:47.703,How to display a pandas Series in Python?,"I have a variable target_test (for machine learning) and I'd like to display just one element of target_test. -type(target_test) prints the following on the terminal: -class 'pandas.core.series.Series' -If I do print(target_test) then the entire 2 vectors are displayed. -But I'd like to print just the second element of the first column, for example. -So do you have an idea how I could do that? -I converted target_test to a frame or to an xarray, but it didn't change the error I get. -When I write something like print(targets_test[0][0]) -I get the following output: -TypeError: 'instancemethod' object has no attribute '__getitem__'","For the first column, you can use targets_test.keys()[i], for the second one targets_test.values[i] where i is the row starting from 0.",1.2,True,1,5673 -2018-08-18 22:38:40.803,django-storages boto3 accessing file url of a private file,"I'm trying to get the generated URL of a file in a test model I've created, -and I'm trying to get the correct url of the file by: modelobject.file.url which does give me the correct url if the file is public, however if the file is private it does not automatically generate a signed url for me, how is this normally done with django-storages? -Is the API supposed to automatically generate a signed url for private files? I am getting the expected Access Denied page for non-signed urls currently, and need to get the signed 'volatile' link to the file. -Thanks in advance","I've figured out what I needed to do: -in the Private Storage class, I needed to put custom_domain = False. I had originally left this line off because I did not think I needed it, however you absolutely do in order to generate signed urls automatically.",0.9999877116507956,False,1,5674 -2018-08-19 22:55:22.463,Django - DRF (django-rest-framework-social-oauth2) and React creating a user,"I'm using the DRF and ReactJS and I am trying to login with Patreon using -django-rest-framework-social-oauth2. -In React, I send a request to the back-end auth/login/patreon/ and I reach the Patreon OAuth screen where I say I want to login with Patreon. Patreon then returns with a request to the back-end at accounts/profile. At this point a python-social-oauth user has also been created. -At this point I'm confused.
How do I make a request to Patreon to login, create a user in the back-end, and return the session information to the react front-end so that I can include the session information in all following requests from the front-end? I don't want the returned request to be at the backend/accounts/profile, do I? -Update -I now realize I can set the redirect url with LOGIN_REDIRECT_URL but still, how do I now retrieve the session id, pass it to the front-end, and include it with all requests?","Once you receive the user profile email, unique id, and other details from Patreon then create a user at the Database level. -Now after creating a user at the Database level you have to log in the user using the Django login function or any other login mechanism before redirecting the user to the frontend with a session. The redirect URL for the home/ landing page is provided by the Frontend side where they want to land the user after being successfully logged with session-id being set in cookies. Onward Frontend side can use session id in cookies for other requests. -Here is the flow: - -React JS -> auth/login/patreon/ -> redirected to Patreon -> Redirected back to the Backend with user information -> Create User (DB level) -> Login user -> Redirect back to Frontend (React JS on a specific URL provided by Front end)",0.0,False,1,5675 -2018-08-20 15:34:25.450,"In Visual Studio Code, how do I load my python code to a read-print-eval loop?","I am teaching a class that uses VScode. -I am used to teaching using IDLE, and it is very nice for the students to be able to call their defined functions and run snippets of code in a python terminal, for debugging purposes. -In VScode, they I have been unable to do the same in a satisfactory way. -Option1: I can select all code, right click and run selection/line on terminal. This works for small snippets, but I cannot do it for the whole file (even after selecting the whole file with ctrl-A). On linux, this works, but on windows, it does not, unfortunately (and my students use windows) -Option2: I can use the debug console. This requires adding a breakpoint in one of the last lines of the file, and does not offer tab completion. It works, but is less convenient than IDLE. -Option 3: I can also add the commands to run to the bottom of the file (which is a least preferred alternative, given that is forgoes the interativity of the read-print-eval loop). -Is there any better solution? Installing a VScode extension would not be a problem.","Visual Code is just a text editor like your traditional notepad. to run and debug any kind program you need to install the particular extension for the programming language. -In your case you are using python so you need to install the extension of it. the best one is the ""Python"" which is developed by microsoft itself. go to your extensions manager and install this extension. right click and click ""run python file in terminal"" and you are all set. -this will run exactly as they run from the idle(which is default IDE provided by python itself) you can enter the arguments from the console itself. according to me this is the best way to run and debug python programs in VScode. -another way is that VScode shows which python version is installed on your computer on the left bottom side, click on it and the programs will use this interpreter. -out of all the ways listed here and many others, the best method is to run the program in the terminal which is the recommend by python itself and many other programmers. -this method is very simple. 
what you have to do is open up your command prompt, type the path where python.exe is installed, then type the path of your program as the argument, and press enter. You are done! -ex : C:\Python27\python.exe C:\Users\Username\Desktop\my_python_script.py -You can also pass your program's arguments in the command prompt itself. -If you do not want to type all this, then just use the solution mentioned above. -Hope that your query is solved. -Regards",0.9950547536867304,False,1,5676 -2018-08-20 22:16:15.047,Maximum file size for Pyspark RDD,"I’m practicing Pyspark (standalone) in the Pyspark shell at work and it’s pretty new to me. Is there a rule of thumb regarding max file size and the RAM (or any other spec) on my machine? What about when using a cluster? -The file I’m practicing with is about 1200 lines. But I’m curious to know how large a file can be read into an RDD with regard to machine specifications or cluster specifications.","There is no hard limit on the data size you can process; however, when your RDD (Resilient Distributed Dataset) size exceeds the size of your RAM, the data will be moved to disk. Even after the data is moved to disk, spark will be equally capable of processing it. For example, if your data is 12GB and available memory is 8GB, spark will distribute the leftover data to disk and take care of all transformations / actions seamlessly. Having said that, you can process data roughly up to the size of your disk. -There is of course a size limitation on a single block, which is 2GB. In other words, the maximum size of a block will not exceed 2GB.",1.2,True,1,5677 -2018-08-22 12:17:01.487,Abaqus: parametric geometry/assembly in Inputfile or Python script?,"I want to do something like a parametric study in Abaqus, where the parameter I am changing is a part of the assembly/geometry. -Imagine the following: -A cube is hanging on 8 ropes. Each two of the 8 ropes line up in one corner of a room. The other ends of the ropes merge with the room diagonal of the cube. It's something like a cable-driven parallel robot/rope robot. -Now, I want to calculate the forces in the ropes in different positions of the cube, while only 7 of the 8 ropes are actually used. That means I have 8 simulations for each position of my cube. -I wrote a matlab script to generate the nodes and wires of the cube in different positions and angles of rotation so I can copy them into an input file for Abaqus. -Since I'm new to Abaqus scripting etc., I wonder what is the best way to make this work. -Would you guys generate 8 input files for one position of the cube and calculate them manually, or is there a way to let abaqus somehow iterate over different assemblies? -I guess I should write a python script, but I don't know how to make the ropes the parameter that is changing. -Any help is appreciated! -Thanks, Tobi","In case someone is interested, I was able to do it the following way: -I created a model in abaqus up to the point where I could have started the job. Then I took the .jnl file (which is created automatically by abaqus) and saved it as a .py file. Then I modified this script by defining every single point as a variable and every wire for the parts as tuples consisting of those variables. Then I made for loops and, for each of the 9 cases, unique wire definitions, which I called during the loop. During the loop the constraints were also changed and the jobs were started.
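-A rough sketch of the kind of loop I mean (model, part and job names here are placeholders, not my actual script):
-for i, case_points in enumerate(wire_cases):
-    model = mdb.models['Model-1']
-    # create the rope wires for this case from the precomputed point tuples
-    model.parts['Ropes'].WirePolyLine(points=case_points)
-    job = mdb.Job(name='cube_case_%d' % i, model='Model-1')
-    job.submit()
-    job.waitForCompletion()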
I also made a field output request for the end nodes of the ropes (representing motors) for their coordinates and reaction forces (the same nodes carry the pinned boundary condition). -Then I saved the field output in a simple txt file which I was able to analyse via matlab. -Then I wrote a matlab script which created the points, attached them to the python script, copied it to a unique directory and even started the job. -This way, I was able to do geometric parametric studies in abaqus using matlab and python. -Code will be uploaded soon",1.2,True,1,5678 -2018-08-22 12:57:46.077,Pandas DataFrame Display in Jupyter Notebook,"I want to make my display tables bigger so users can see the tables better when they are used in conjunction with Jupyter RISE (slide shows). -How do I do that? -I don't need to show more columns, but rather I want the table to fill up the whole width of the Jupyter RISE slide. -Any idea on how to do that? -Thanks","If df is a pandas.DataFrame object, you can do: -df.style.set_properties(**{'max-width': '200px', 'font-size': '15pt'})",0.0,False,1,5679 -2018-08-22 13:38:01.097,Will making a Django website public on github let others get the data in its database ? If so how to prevent it?,"I have a locally made Django website and I hosted it on Heroku; at the same time I push changes to another github repo. I am using the built-in database to store data. Will other users be able to get the data that has been entered in the database from my repo (like user details)? -If so, how do I prevent it from happening? Solutions like adding files to .gitignore will also prevent pushing to Heroku.","The code itself wouldn't be enough to get access to the database. For that you need the db name and password, which shouldn't be in your git repo at all. -On Heroku you use environment variables - which are set automatically by the postgres add-on - along with the dj_database_url library, which turns that into the relevant values in the Django DATABASES setting.",0.0,False,1,5680 -2018-08-22 15:24:11.663,Uploading an image to S3 and manipulating with Python in Lambda - best practice,"I'm building my first web application and I've got a question around process and best practice; I'm hoping the expertise on this website might give me a bit of direction. -Essentially, all the MVP is doing is writing an overlay onto an image and presenting this back to the user, as follows: - -User uploads picture via web form (into AWS S3) - to do -Python script executes (in lambda) and creates image overlay, saves new image back into S3 - complete -User is presented back with new image to download - to do - -I've been running this locally as sort of a proof of concept and was planning on linking it up with S3 today, but then suddenly realised: what happens when there are two concurrent users and two images being uploaded with different filenames with two separate lambda functions working? -The only solution I could think of is having the image renamed upon upload with a record inserted into an RDS, then having the lambda function run upon record insertion against the new image, which would resolve half of it, but then how would I get the correct image relayed back to the user? -I'll be clear, I have next to no experience in web development; I want the front end to be as dumb as possible and run everything in Python (I'm a data scientist, I can write Python for data analysis but have no experience as a software dev!)","You don't really need an RDS, just invoke your lambda synchronously from the browser.
-So - -Upload file to S3, using a randomized file name -Invoke your lambda synchronously, passing it the file name -Have your lambda read the file, convert it, and respond with either the file itself (binary responses aren't trivial), or a path to the converted file in S3.",0.0,False,1,5681 -2018-08-23 12:03:16.460,How to install twilio via pip,"how to install twilio via pip? -I tried to install twilio python module -but i can't install it -i get following error -no Module named twilio -When trying to install twilio -pip install twilio -I get the following error. -pyopenssl 18.0.0 has requirement six>=1.5.2, but you'll have six 1.4.1 which is incompatible. -Cannot uninstall 'pyOpenSSL'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. -i got the answer and installed -pip install --ignore-installed twilio -but i get following error - -Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pytz-2018.5.dist-info' -Consider using the `--user` option or check the permissions. - -i have anaconda installed -is this a problem?","step1:download python-2.7.15.msi -step 2:install and If your system does not have Python added to your PATH while installing -""add python exe to path"" -step 3:go C:\Python27\Scripts of your system -step4:in command prompt C:\Python27\Scripts>pip install twilio -step 5:after installation is done >python command line - import twilio -print(twilio.version) -step 6:if u get the version ...you are done",-0.2012947653214861,False,1,5682 -2018-08-23 14:53:44.523,How to retrieve objects from the sotlayer saved quote using Python API,"I'm trying to retrieve the objects/items (server name, host name, domain name, location, etc...) that are stored under the saved quote for a particular Softlayer account. Can someone help how to retrieve the objects within a quote? I could find a REST API (Python) to retrieve quote details (quote ID, status, etc..) but couldn't find a way to fetch objects within a quote. -Thanks! -Best regards, -Khelan Patel",Thanks Albert getRecalculatedOrderContainer is the thing I was looking for.,0.0,False,1,5683 -2018-08-23 23:45:21.277,Can I debug Flask applications in IntelliJ?,"I know how to debug a flask application in Pycharm. The question is whether this is also possible in IntelliJ. -I have my flask application debugging in Pycharm but one thing I could do in IntelliJ was evaluate expressions inline by pressing the alt + left mouse click. This isn't available in Pycharm so I wanted to run my Flask application in IntelliJ but there isn't a Flask template. -Is it possible to add a Flask template to the Run/Debug configuration? I tried looking for a plugin but couldn't find that either.","Yes, you can. Just setup the proper parameters for Run script into PyCharm IDE. After that you can debug it as usual py script. In PyCharm you can evaluate any line in debug mode too.",0.0,False,1,5684 -2018-08-24 14:36:02.090,"how to add the overall ""precision"" and ""recall"" metrics to ""tensorboard"" log file, after training is finished?","After the training is finished and I did the prediction on my network, I want to calculate ""precision"" and ""recall"" of my model, and then send it to log file of ""tensorboard"" to show the plot. -while training, I send ""tensorboard"" function as a callback to keras. but after training is finished, I dont know how to add some more data to tensorboard to be plotted. 
-I use keras for coding and tensorflow as its backend.",I believe that you've already done that work: it's the same process as the validation (prediction and check) step you do after training. You simply tally the results of the four categories (true/false pos/neg) and plug those counts into the equations (ratios) for precision and recall.,0.0,False,1,5685 -2018-08-27 21:20:09.317,Convolutional neural network architectures with an arbitrary number of input channels (more than RGB),"I am very new to image recognition with CNNs and currently using several standard (pre-trained) architectures available within Keras (VGG and ResNet) for image classification tasks. I am wondering how one can generalise the number of input channels to more than 3 (instead of standard RGB). For example, I have an image which was taken through 5 different (optic) filters and I am thinking about passing these 5 images to the network. -So, conceptually, I need to pass as an input (Height, Width, Depth) = (28, 28, 5), where 28x28 is the image size and 5 - the number of channels. -Any easy way to do it with ResNet or VGG please?","If you retrain the models, that's not a problem. Only if you want to use a trained model, you have to keep the input the same.",1.2,True,1,5686 -2018-08-28 02:21:26.060,How to use Docker AND Conda in PyCharm,"I want to run python in PyCharm by using a Docker image, but also with a Conda environment that is set up in the Docker image. I've been able to set up Docker and (locally) set up Conda in PyCharm independently, but I'm stumped as to how to make all three work together. -The problem comes when I try to create a new project interpreter for the Conda environment inside the Docker image. When I try to enter the python interpreter path, it throws an error saying that the directory/path doesn't exist. -In short, the question is the same as the title: how can I set up PyCharm to run on a Conda environment inside a Docker image?","I'm not sure if this is the most eloquent solution, but I do have a solution to this now! - -Start up a container from the your base image and attach to it -Install the Conda env yaml file inside the docker container -From outside the Docker container stream (i.e. a new terminal window), commit the existing container (and its changes) to a new image: docker commit SOURCE_CONTAINER NEW_IMAGE - -Note: see docker commit --help for more options here - -Run the new image and start a container for it -From PyCharm, in preferences, go to Project > Project Interpreter -Add a new Docker project interpreter, choosing your new image as the image name, and set the path to wherever you installed your Conda environment on the Docker image (ex: /usr/local/conda3/envs/my_env/bin/python) - -And just like that, you're good to go!",1.2,True,1,5687 -2018-08-28 13:52:18.727,how to detect upside down face?,"I would like to detect upright and upside-down faces, however faces weren't recognized in upside-down images. -I used the dlib library in Python with shape_predictor_68_face_landmarks.dat. -Is there a library that can recognize upright and upside-down faces?","You could use the same library to detect upside down faces. If the library is unable to detect the face initially, transform it 180° and check again. If it is recognized in this condition, you know it was an upside down face.",1.2,True,1,5688 -2018-08-29 10:28:25.480,How to have cfiles in python code,"I'm using the Geany IDE and I've wrote a python code that makes a GUI. Im new to python and i'm better with C. 
I've done research on the web and its too complicated because theres so much jargon involved. Behind each button I want C to be the backbone of it (So c to execute when clicked). So, how can i make a c file and link it to my code?","I too had a question like this and I found a website that described how to do it step by step but I can’t seem to find it. If you think about it, all these ‘import’ files are just code thats been made separately and thats why you import them. So, in order to import your ‘C File’ do the following. - -Create the file you want to put in c (e.g bloop.c) -Then open the terminal and assuming you saved your file to the desktop, type ‘cd Desktop’. If you put it somewhere else other than the desktop, then type cd (insert the directory). -Now, type in gcc -shared -Wl,-soname,adder -o adder.so -fPIC bloop.c into the terminal. -After that, go into you python code and right at the very top of your code, type ‘import ctypes’ or ‘from ctypes import *’ to import the ctypes library. -Below that type adder = CDLL(‘./adder.so’). -if you want to add a instance for the class you need to type (letter or word)=adder.main(). For example, ctest = adder.main() -Now lets say you have a method you want to use from your c program you can type your charater or word (dot) method you created in c. For example ‘ctest.beans()’ (assuming you have a method in your code called beans).",1.2,True,1,5689 -2018-08-29 13:57:38.713,Cannot update svg file(s) for saleor framework + python + django,"I would like to know how should i could manage to change the static files use by the saelor framework. I've tried to change the logo.svg but failed to do so. -I'm still learning python program while using the saleor framework for e-commerce. -Thank you.",Here is how it should be done. You must put your logo in the saleor/static/images folder then change it in base.html file in footer and navbar section.,1.2,True,1,5690 -2018-08-29 20:22:17.757,"Determining ""SystemFaceButton"" RBG Value At RunTime","I am using tkinter and the PIL to make a basic photo viewer (mostly for learning purposes). I have the bg color of all of my widgets set to the default which is ""systemfacebutton"", whatever that means. -I am using the PIL.Image module to view and rotate my images. When an image is rotated you have to choose a fillcolor for the area behind the image. I want this fill color to be the same as the default system color but I have no idea how to get a the rgb value or a supported color name for this. It has to be calculated by python at run time so that it is consistent on anyone's OS. -Does anyone know how I can do this?","You can use w.winfo_rgb(""systembuttonface"") to turn any color name to a tuple of R, G, B. (w is any Tkinter widget, the root window perhaps. Note that you had the color name scrambled.) The values returned are 16-bit for some unknown reason, you'll likely need to shift them right by 8 bits to get the 0-255 values commonly used for specifying colors.",1.2,True,1,5691 -2018-08-30 01:29:02.027,"In tf.layers.conv2d, with use_bias=True, are the biases tied or untied?","One more question: -If they are tied biases, how can I implement untied biases? -I am using tensorflow 1.10.0 in python.","tied biases is used in tf.layers.conv2d. 
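-A quick way to convince yourself of that (a sketch assuming TF 1.x graph mode, names illustrative):
-import tensorflow as tf
-x = tf.placeholder(tf.float32, [None, 28, 28, 3])
-y = tf.layers.conv2d(x, filters=8, kernel_size=3)
-print([(v.name, v.shape) for v in tf.trainable_variables()])
-The bias shows up as a single (8,) vector: one scalar per output channel, shared across all spatial positions.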
-If you want untied biases, just turn off use_bias and create a bias variable manually with tf.Variable or tf.get_variable, with the same shape as the following feature map, and finally sum them up.",1.2,True,1,5692 -2018-08-30 19:43:08.963,Reading all the image files in a folder in Django,"I am trying to create a picture slideshow which will show all the png and jpg files of a folder using django. -The problem is how do I open windows explorer through django and prompt the user to choose a folder name to load images from. Once this is done, how do I read all image files from this folder? Can I store all image files from this folder inside a list and pass this list to the template views through context?","This link “https://github.com/csev/dj4e-samples/tree/master/pics” -shows how to store data in the database (sqlite is the database used here) using Django forms. But you cannot upload an entire folder at once, so you have to create a one-to-many model between display_id (this is just a field name in the models; you can name it anything you want) and pics. Now you can individually upload all pics in the folder to the same display_id and access all of them using this display_id. Also make sure to pass content_type for jpg and png separately while retrieving the pics.",0.0,False,1,5693 -2018-08-31 00:05:09.460,How can I get SMS verification code in my Python program?,"I'm writing a Python script to do some web automation stuff. In order to log in to the website, I have to give it my phone number and the website will send out an SMS verification code. Is there a way to get this code so that I can use it in my Python program? Right now what I can think of is writing an Android APP that is triggered once there is a new SMS: it would get the code and invoke an API so that the code gets stored somewhere, and then I could grab the stored code from within my Python program. This is doable but a little bit hard for me, as I don't know how to develop a mobile APP. I want to know whether there are any other methods to get this code. Thanks. -BTW, I have to use my own phone number and can't use another phone to receive the verification code. So it may not be possible to use some services.",Answering my own question: I use IFTTT to forward the message to Slack and use the Slack API to access the message.,0.0,False,1,5694 -2018-08-31 16:13:31.870,How to list available policies for an assumed AWS IAM role,"I am using python and boto to assume an AWS IAM role. I want to see what policies are attached to the role so I can loop through them and determine what actions are available for the role. I want to do this so I can know whether some actions are available, instead of finding out by calling them and checking if I get an error. However, I cannot find a way to list the policies for the role after assuming it, as the role is not authorised to perform IAM actions. -Is there anyone who knows how this is done, or is this perhaps something I should not be doing?","To obtain policies, your AWS credentials require permissions to retrieve the policies. -If such permissions are not associated with the assumed role, you could use another set of credentials to retrieve the permissions (but those credentials would need appropriate IAM permissions). -There is no way to ask ""What policies do I have?"" without having the necessary permissions.
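-For reference, with a credential set that does have IAM read access, listing a role's policies is straightforward in boto3 (the role name here is hypothetical):
-import boto3
-iam = boto3.client('iam')
-print(iam.list_attached_role_policies(RoleName='my-assumed-role'))  # managed policies
-print(iam.list_role_policies(RoleName='my-assumed-role'))  # inline policy names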
This is an intentional part of AWS security, because seeing policies can reveal some security information (e.g. ""Oh, why am I specifically denied access to the Top-Secret-XYZ S3 bucket?"").",0.3869120172231254,False,1,5695 -2018-08-31 19:23:27.853,"Creating ""zero state"" migration for existing db with sqlalchemy/alembic and ""faking"" zero migration for that existing db","I want to add alembic to an existing, sqlalchemy-using project with a working production db. I fail to find the standard way to do a ""zero"" migration, i.e. the migration setting up the db as it is now (for new developers setting up their environment). -Currently I've imported the declarative base class and all the models using it into env.py, but the first alembic -c alembic.dev.ini revision --autogenerate does create the existing tables. -And I need to ""fake"" the migration on existing installations - using code. For the django ORM I know how to make this work, but I fail to find the right way to do this with sqlalchemy/alembic.","alembic revision --autogenerate inspects the state of the connected database and the state of the target metadata and then creates a migration that brings the database in line with the metadata. -If you are introducing alembic/sqlalchemy to an existing database, and you want a migration file that, given an empty, fresh database, would reproduce the current state - follow these steps. - -Ensure that your metadata is truly in line with your current database (i.e. ensure that running alembic revision --autogenerate creates a migration with zero operations). - -Create a new temp_db that is empty and point your sqlalchemy.url in alembic.ini to this new temp_db. - -Run alembic revision --autogenerate. This will create your desired bulk migration that brings a fresh db in line with the current one. - -Remove temp_db and re-point sqlalchemy.url to your existing database. - -Run alembic stamp head. This tells sqlalchemy that the current migration represents the state of the database - so next time you run alembic upgrade head it will begin from this migration.",0.9999999999999966,False,1,5696 -2018-09-02 16:24:08.867,Django send progress back to client before request has ended,"I am working on an application in Django where there is a feature which lets the user share a download link to a public file. The server downloads the file and processes the information within. This can be a time-consuming task, therefore I want to send periodic feedback to the user before the operation has completed. For instance, I would like to inform the user that the file has downloaded successfully, or that some information was missing from one of the records, etc. -I was thinking that after the client app has sent the upload request, I could get the client app to periodically ask the server about the status. But I don't know how I can track the progress of a different request. How can I implement this?","First, the progress information for the task can be saved in an rdb or redis. -You can return the id of the task when the user submits the request to start the task, and the task can be executed in the background context. -The background task can save the task progress info in the db which you selected. -The client app gets the progress info by the task id which the backend returned; the backend reads the progress info from the db and pushes it in the response.
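-A minimal sketch of such a status endpoint in Django (the cache key convention and url wiring are assumptions, not a complete implementation):
-from django.core.cache import cache
-from django.http import JsonResponse
-def task_progress(request, task_id):
-    # the background task is assumed to write e.g. cache.set('task:%s' % task_id, 40)
-    return JsonResponse({'progress': cache.get('task:%s' % task_id, 0)})
-The client app then polls this endpoint with the task id it got back from the initial request.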
-You can define the polling interval yourself.",0.0,False,1,5697 -2018-09-03 02:29:05.750,Numpy array size different when saved to disk (compared to nbytes),"Is it possible that a flat numpy 1d array's size (nbytes) is 16568 (~16.5kb) but when saved to disk, it has a size of >2 mbs? -I am saving the array using numpy's numpy.save method. The dtype of the array is 'O' (object). -Also, how do I save that flat array to disk such that I get an approximately similar size to nbytes when saved on disk? Thanks","For others' reference, from the numpy documentation: - -numpy.ndarray.nbytes attribute -ndarray.nbytes Total bytes consumed by the elements of the array. -Notes -Does not include memory consumed by non-element attributes of the -array object. - -So, nbytes just considers the elements of the array.",0.0,False,1,5698 -2018-09-05 10:27:35.310,Regex to match all lowercase character except some words,"I would like to write a RE to match all lowercase characters and words (special characters and symbols should not match), like [a-z]+, EXCEPT the two words true and false. -I'm going to use it with Python. -I've written (?!true|false\b)\b[a-z]+; it works, but it does not recognise lowercase characters following an uppercase one (e.g. with ""This"" it doesn't match ""his""). I don't know how to include this kind of match as well. -For instance: - -true & G(asymbol) & false should match only asymbol -true & G(asymbol) & anothersymbol should match only [asymbol, anothersymbol] -asymbolUbsymbol | false should match only [asymbol, bsymbol] - -Thanks","I would create two regexes (you want to mix word boundary matching with optionally splitting words apart, which is, AFAIK, not straightforwardly mixable; you would have to re-phrase your regex either without word boundaries or without splitting): - -first regex: [a-z]+ -second regex: \b(?!true|false)[a-z]+",0.0,False,1,5699 -2018-09-06 08:27:52.960,How to use double as the default type for floating numbers in PyTorch,"I want all the floating-point numbers in my PyTorch code to be double type by default; how can I do that?","You should use torch.set_default_dtype for that. -It is true that using torch.set_default_tensor_type will also have a similar effect, but torch.set_default_tensor_type not only sets the default data type, but also sets the defaults for the device where the tensor is allocated and the layout of the tensor.",0.3869120172231254,False,1,5700 -2018-09-06 20:34:52.430,how to change directory in Jupyter Notebook with Special characters?,"When I created a directory under the python env, it got a single quote in its name, like (D:\'Test Directory'). How do I change to this directory in a Jupyter notebook?",I was able to change the directory by escaping the quote like this: os.chdir('C:\\\'Test Directory\''),0.0,False,1,5701 -2018-09-08 02:24:30.413,"Graph traversal, maybe another type of mathematics?","Let’s say you have a set/list/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let’s say for reasons that are not important, you run them through a function and receive the following pairs: -(1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 do not matter).
-At first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks!","If you really intended to find the minimum amount, the answer is 0, because you don't have to use any number at all. -I guess you meant to write ""maximal amount of numbers"". -If I understand your problem correctly, it sounds like we can translate it to the following problem: -Given a set of n numbers (1,..,n), what is the maximal amount of numbers I can use to divide the set into pairs, where each number can appear only once? -The answer to this question is: - -when n = 2k, f(n) = 2k for k>=0 -when n = 2k+1, f(n) = 2k for k>=0 - -I'll explain, using induction. - -if n = 0 then we can use at most 0 numbers to create pairs. -if n = 2 (the set can be [1,2]) then we can use both numbers to -create one pair (1,2) -Assumption: if n=2k, let's assume we can use all 2k numbers to create k pairs, and prove using induction that we can use 2k+2 numbers for n = 2k+2. -Proof: if n = 2k+2, [1,2,..,k,..,2k,2k+1,2k+2], we can create k pairs using 2k numbers (from our assumption). Without loss of generality, let's assume our pairs are (1,2),(3,4),..,(2k-1,2k). We can see that we still have two numbers [2k+1, 2k+2] that we didn't use, and therefore we can create a pair out of the two of them, which means that we used 2k+2 numbers. - -You can prove the case when n is odd on your own.",0.0,False,2,5702 -2018-09-08 02:24:30.413,"Graph traversal, maybe another type of mathematics?","Let’s say you have a set/list/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let’s say, for reasons that are not important, you run them through a function and receive the following pairs: -(1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again, order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded, as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 does not matter). -At first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks!","In case anyone cares in the future, the solution is called a blossom algorithm.",0.0,False,2,5702 -2018-09-08 12:37:37.387,error in missingno module import in Jupyter Notebook,"Getting an error on importing the missingno module in Jupyter Notebook. It works fine in IDLE, but Jupyter Notebook shows ""No missingno module exist"". 
-Can anybody tell me how to resolve this?",Installing missingno through anaconda solved the problem for me,0.5457054096481145,False,2,5703 -2018-09-08 12:37:37.387,error in missingno module import in Jupyter Notebook,"Getting an error on importing the missingno module in Jupyter Notebook. It works fine in IDLE, but Jupyter Notebook shows ""No missingno module exist"". -Can anybody tell me how to resolve this?","This command helped me: -conda install -c conda-forge/label/gcc7 missingno - You have to make sure that you run the Anaconda prompt as Administrator.",0.3869120172231254,False,2,5703 -2018-09-08 18:25:29.300,Lazy loading with python and flask,"I’ve built a web-based data dashboard that shows 4 graphs - each containing a large amount of data points. -When the URL endpoint is visited, Flask calls my python script, which grabs the data from a sql server, then starts manipulating it and finally outputs the bokeh graphs. -However, as these graphs get larger, or more graphs appear on the screen, the website takes longer to load - since the entire function has to run before anything is displayed. -How would I go about lazy loading these? I.e. it loads the first (most important) graph and displays it while running the function for the other graphs, showing them as and when they finish running (showing a sort of loading bar where each of the graphs are, or something). -Would love some advice on how to implement this or similar. -Thanks!","I had the same problem as you. The problem with any kind of flask render is that all data is processed and passed to the page (i.e. client) simultaneously, often at a large time cost. Not only that, but the web server process is quite heavily loaded. -The solution I was forced to implement, as the comment suggested, was to load the page with blank charts and then, upon mounting them, access a flask api (via JS ajax) that returns chart json data to the client. This permits lazy loading of charts, as well as allowing the data manipulation to possibly be performed on a worker and not the web server.",0.9950547536867304,False,1,5704 -2018-09-09 08:36:10.063,I can't import tkinter in pycharm community edition,"I've been trying for a few days now to be able to import the library tkinter in pycharm, but I am unable to do so. -I tried to import it and to install some packages, but still nothing; I reinstalled python and pycharm, again nothing. Does anyone know how to fix this? -I am using pycharm community edition 2018 2.3 and python 3.7. -EDIT: So, I uninstalled python 3.7 and installed python 3.6 x64. I tried changing my interpreter to the new path to python and it is still not working... -EDIT 2: I installed pycharm pro (free 30-day trial) and it actually works, but when I tried to open my project in pycharm community it is still not working... -EDIT 3: I installed python 3.6 x64 and now it's working. -Thanks for the help.","Thanks to vsDeus for asking this question. I had the same problem running Linux Mint Mate 19.1 and nothing got tkinter and some other modules working in Pycharm CE. In Eclipse with Pydev all worked just fine, but for some reason I would rather work in Pycharm than Eclipse when coding. -The steps outlined here did not work for me, but the steps he took handed me the solution. Basically I had to uninstall Pycharm, remove all its configuration files, then reinstall pip3 and tkinter and then reinstall Pycharm CE. Finally I reopened previously saved projects and then set the correct interpreter. -When I tried to change the python interpreter before, no alternatives appeared. After all these steps the choice became available. 
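-As a quick sanity check that the interpreter PyCharm now uses really has Tk support, a tiny snippet like the one below helps; run it from the PyCharm terminal or a scratch file (tkinter.TkVersion and tkinter._test() are part of the standard library, so nothing here is project-specific):
-
-    import tkinter
-    print(tkinter.TkVersion)  # prints the Tcl/Tk version this interpreter was built against
-    tkinter._test()           # opens a small test window if Tk support actually works
-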
-Most importantly, tkinter, matplotlib and the other modules I wanted to use are now available in Pycharm.",0.0,False,1,5705 -2018-09-10 11:25:39.227,how to use Tensorflow seq2seq.GreedyEmbeddingHelper first parameter Embedding in case of using normal one hot vector instead of embedding?,"I am trying to decode one character (represented as a c-dimensional one-hot vector) at a time with the tensorflow seq2seq model implementations. I am not using any embedding in my case. -Now I am stuck with tf.contrib.seq2seq.GreedyEmbeddingHelper. It requires ""embedding: A callable that takes a vector tensor of ids (argmax ids), or the params argument for embedding_lookup. The returned tensor will be passed to the decoder input."" -How will I define the callable? What are the inputs (the vector tensor of ids (argmax ids)) and outputs of this callable function? Please explain using examples.","embedding = tf.Variable(tf.random_uniform([c, EMBEDDING_DIM])), where c is your vocab size. -Here you can create the embedding for your own model, and it will be trained during your training process to give a vector for your own input. -If you don't want to use it, you can just create a matrix where every column is a one-hot vector representing a character and pass that as the embedding. -It will be something like this: -[[1,0,0],[0,1,0],[0,0,1]] -here for a vocab size of 3.",0.0,False,1,5706 -2018-09-10 11:43:29.133,"one server, same domain, different apps (example.com/ & example.com/tickets )?","I want advice on how to do the following: -On the same server, I want to have two apps: one WordPress app and one Python app. At the same time, I want the root of my domain to be a static landing page. -Url structure I want to achieve: - -example.com/ => static landing page -example.com/tickets => wordpress -example.com/pythonapp => python app - -I have never done something like this before and searching for solutions didn't help. -Is it even possible? -Is it better to use subdomains? -Is it better to use different servers? -How should I approach this? -Thanks in advance!","It depends on the webserver you want to use. Let's go with apache, as it is one of the most used web servers on the internet. - -You install your wordpress installation into the /tickets subdirectory and install wordpress as you normally would. This should install wordpress into the subdirectory. -Configure your Python-WSGI App with this configuration: - -WSGIScriptAlias /pythonapp /var/www/path/to/my/wsgi.py",0.2012947653214861,False,1,5707 -2018-09-12 02:15:34.913,How to save plots and model results to pdf in python?,"I know how to save model results to .txt files and plots to .png. I also found a post which shows how to save multiple plots in a single pdf file. What I am looking for is generating a single pdf file which contains both the model results/summary and its related plots, so that at the end I have something like an auto-generated model report. Can someone suggest how I can do this?",I’ve had good results with the fpdf module. It should do everything you need it to do and the learning curve isn’t bad. You can install it with pip install fpdf.,0.0,False,1,5708 -2018-09-12 06:55:00.637,"Error configuring: unknown option ""-ipadx""","I want to add InPadding to my LabelFrame; I'm using the AppJar GUI. 
-I tried this: -self.app.setLabelFrameInPadding(self.name(""_content""), [20, 20]) -But I get this error: - -appJar:WARNING [Line 12->3063/configureWidget]: Error configuring _content: unknown option ""-ipadx"" - -Any ideas how to fix it?","Because of the way containers are implemented in appJar, padding works slightly differently for labelFrames. -Try calling: app.setLabelFramePadding('name', [20,20])",0.0,False,1,5709 -2018-09-12 13:04:34.793,Two flask Apps same domain IIS,"I want to deploy the same flask application as two different instances, let's say a sandbox instance and a testing instance, on the same IIS server and the same machine, having two folders with different configurations (one for testing and one for sandbox). IIS runs whichever is requested first. For example, I want to deploy one under www.example.com/test and the other under www.example.com/sandbox. If I request www.example.com/test first, then this app keeps working correctly, but whenever I request www.example.com/sandbox it returns 404, and vice versa! -Question bottom line: - -how can I make both apps run under the same domain with such URLs? -would using the app factory pattern solve this issue? -what blocks both apps from running side by side as I am trying to do? - -thanks a lot in advance","I had been stuck for a week before asking this question, and the neatest way I found was to assign each app a different app pool; now they are working together side by side happily ever after.",1.2,True,1,5710 -2018-09-13 06:51:37.370,Sharing PonyORM's db session across different python module,"I initially started a small python project (Python, Tkinter and PonyORM) that became larger, which is why I decided to divide the code (it used to be a single file) into several modules (e.g. main, form1, entity, database): main acts as the main controller; form1, as an example, can contain a tkinter Frame which can be used as an interface where the user can input data; entity contains the db.Entity mappings; and database holds the pony.Database instance along with its connection details. I think the problem is that during import I'm getting this error: ""pony.orm.core.ERDiagramError: Cannot define entity 'EmpInfo': database mapping has already been generated"". Can you point me to any existing code showing how this should be done?","Probably you import your modules in the wrong order. Any module which contains entity definitions should be imported before the db.generate_mapping() call. -I think you should call db.generate_mapping() right before entering tk.mainloop(), when all imports are already done.",1.2,True,1,5711 -2018-09-13 08:55:49.327,Python3 - How do I stop current versions of packages being over-ridden by other packages dependencies,"Building Tensorflow and other such packages from source, especially against GPUs, is a fairly long task and often encounters errors, so once they are built and installed I really don't want to mess with them. -I regularly use virtualenvs, but I am always worried about installing certain packages, as sometimes their dependencies will overwrite the packages I have built from source... -I know I can remove them and then rebuild from my .wheels, but sometimes this is a time-consuming task. Is there a way that, if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes? 
-Even current packages' dependencies don't show versions with pip show","Is there a way that if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes? - -No. But pip install doesn't touch installed dependencies until you explicitly run pip install -U. So don't use the -U/--upgrade option, and upgrade dependencies only when pip fails with unmet dependencies.",0.0,False,1,5712 -2018-09-14 02:32:31.807,how do I connect sys.argv into my float value?,"I must take ""q"" (which is a degree measure) from the command line, then convert ""q"" to radians and write out the value of sin(5q) + sin(6q). Considering that I believe I have to use sys.argv for this, I have no clue where to even begin","You can use the following: -q = sys.argv[1]  # you can give a decimal value too on your command line -Now q will be a string, e.g. ""1.345"", so you have to convert it to a float using q = float(q).",0.0,False,1,5713 -2018-09-14 10:30:59.240,Scrapy: Difference between simple spider and the one with ItemLoader,"I've been working with scrapy for 3 months. For extracting selectors I use plain response.css or response.xpath. -I'm asked to switch to ItemLoaders and use add_xpath, add_css, etc. -I know how ItemLoaders work and how convenient they are, but can anyone compare these two w.r.t. efficiency? Which way is more efficient and why?","Item loaders do exactly the same thing underneath that you do when you don't use them. So for every loader.add_css/add_xpath call there will be a response.css/xpath executed. It won't be any faster, and the little amount of additional work they do won't really make things any slower (especially in comparison to xml parsing and network/io load).",0.0,False,1,5714 -2018-09-15 01:56:10.107,Possible to get a file descriptor for Python's StringIO?,"From a Python script, I want to feed some small string data to a subprocess, but said subprocess non-negotiably accepts only a filename as an argument, which it will open and read. I non-negotiably do not want to write this data to disk - it should reside only in memory. -My first instinct was to use StringIO, but I realize that StringIO has no fileno(). mmap(-1, ...) also doesn't seem to create a file descriptor. With those off the table, I'm at a loss as to how to do this. Is this even achievable? The fd would be OS-level visible, but (I would expect) only to the process's children. -tl;dr how to create a private file descriptor to a python string/memory that only a child process can see? -P.S. This is all on Linux and doesn't have to be portable in any way.","Reifying @user4815162342's comment as an answer: -The direct way to do this is: - -pass /dev/stdin as the file argument to the process; -use stdin=subprocess.PIPE; -finally, Popen.communicate() to feed the desired contents",0.0,False,1,5715 -2018-09-17 15:44:04.130,how to modify txt file properties with python,"I am trying to make a python program that creates and writes to a txt file. -The program works, but I want it to check the ""hidden"" box in the txt file's properties, so that the txt can't be seen without using the python program I made. I have no clue how to do that; please understand I am a beginner in python.",I'm not 100% sure, but I don't think you can do this in Python. 
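-That said, on Windows it might be doable without leaving Python by calling the WinAPI through ctypes; the following is an untested sketch (FILE_ATTRIBUTE_HIDDEN is the documented value 0x2, and 'secret.txt' is just a placeholder):
-
-    import ctypes
-
-    FILE_ATTRIBUTE_HIDDEN = 0x2
-
-    def hide_file(path):
-        # SetFileAttributesW returns 0 on failure (Windows only)
-        if not ctypes.windll.kernel32.SetFileAttributesW(path, FILE_ATTRIBUTE_HIDDEN):
-            raise ctypes.WinError()
-
-    hide_file('secret.txt')
-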
-If that doesn't work for you, another option is to find a simple Visual Basic script and run it from your Python file.,0.0,False,1,5716 -2018-09-18 15:03:23.547,How can I run code for a certain amount of time?,"I want to play a sound (from a wav file) using winsound's winsound.PlaySound function. I know that winsound.Beep allows me to specify the time in milliseconds, but how can I implement that behavior with winsound.PlaySound? -I tried to use the time.sleep function, but that only delays the function, it doesn't specify the amount of time. -Any help would be appreciated.","Create a thread to play the sound and start it. Create a second thread that sleeps the right amount of time and has a handle to the first thread. Have the second thread terminate the first thread when the sleep is over.",1.2,True,1,5717 -2018-09-18 16:35:17.860,Do I need two instances of python-flask?,"I am building a web-app. One part of the app calls a function that starts a tweepy StreamListener on a certain track. That function processes a tweet and then writes a json object to a file or mongodb. -On the other hand, I need a process that reads the file or mongodb and paginates the tweet if some property is in it. The thing is that I don't know how to do that second part. Do I need different threads? -What solutions could there be?","You can certainly do it with a thread, or by spinning up a new process that will perform the pagination. -Alternatively you can look into a task queue service (Redis queue and celery, as examples). Your web-app can add a task to this queue and your other program can listen to this queue and perform the pagination tasks as they come in.",0.0,False,1,5718 -2018-09-19 22:34:46.480,Celery - how to stop running task when using distributed RabbitMQ backend?,"Say I am running Celery on a bank of 50 machines all using a distributed RabbitMQ cluster. -If I have a task that is running and I know the task id, how in the world can Celery figure out which machine it's running on in order to terminate it? -Thanks.","I am not sure if you can actually do it: when you spawn a task you will have a worker, somewhere in your 50 boxes, that executes it, and you technically have no control over it, as it is a separate process; the only things you can control are the asyncResult or the amqp message on the queue.",0.0,False,1,5719 -2018-09-19 23:16:35.920,how to run periodic task in high frequency in flask?,"I want my flask APP to pull updates from a local txt file every 200ms; is it possible to do that? -P.S. I've considered BackgroundScheduler() from apscheduler, but its granularity is 1s.",Couldn't you just start a loop in a thread that sleeps for 200 ms before the next iteration?,0.2012947653214861,False,1,5720 -2018-09-20 06:14:37.797,How to search for all existing mongodbs for single GET request,"Suppose I have multiple mongodbs, like mongodb_1, mongodb_2, mongodb_3, with the same kind of data, like employee details of different organizations. -When a user triggers a GET request to get the employee details whose designation is ""TechnicalLead"" from all the above 3 mongodbs, first we need to connect to mongodb_1 and search, then disconnect from mongodb_1, connect to mongodb_2 and search, and repeat the same for all dbs. -Can anyone suggest how we can achieve the above using the python EVE REST API framework. 
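-For reference, this is roughly what I do today, one database at a time (the hosts, database and collection names below are made up):
-
-    from pymongo import MongoClient
-
-    uris = ['mongodb://host1:27017', 'mongodb://host2:27017', 'mongodb://host3:27017']
-    employees = []
-    for uri in uris:
-        client = MongoClient(uri)
-        # the same query is repeated against every instance and the results merged
-        employees.extend(client['org_db']['employee'].find({'designation': 'TechnicalLead'}))
-        client.close()
-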
-Best Regards, -Narendra","First of all, it is not a recommended way to run multiple instances (especially when the servers might be running at the same time) as it will lead to usage of the same config parameters like for example logpath and pidfilepath which in most cases is not what you want. -Secondly for getting the data from multiple mongodb instances you have to create separate get requests for fetching the data. There are two methods of view for the model that can be used: - -query individual databases for data, then assemble the results for viewing on the screen. -Query a central database that the two other databases continously update.",0.0,False,1,5721 -2018-09-20 17:05:30.047,python asyncronous images download (multiple urls),"I'm studying Python for 4/5 months and this is my third project built from scratch, but im not able to solve this problem on my own. -This script downloads 1 image for each url given. -Im not able to find a solution on how to implement Thread Pool Executor or async in this script. I cannot figure out how to link the url with the image number to the save image part. -I build a dict of all the urls that i need to download but how do I actually save the image with the correct name? -Any other advise? -PS. The urls present at the moment are only fake one. -Synchronous version: - - - import requests - import argparse - import re - import os - import logging - - from bs4 import BeautifulSoup - - - parser = argparse.ArgumentParser() - parser.add_argument(""-n"", ""--num"", help=""Book number"", type=int, required=True) - parser.add_argument(""-p"", dest=r""path_name"", default=r""F:\Users\123"", help=""Save to dir"", ) - args = parser.parse_args() - - - - logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', - level=logging.ERROR) - logger = logging.getLogger(__name__) - - - def get_parser(url_c): - url = f'https://test.net/g/{url_c}/1' - logger.info(f'Main url: {url_c}') - responce = requests.get(url, timeout=5) # timeout will raise an exeption - if responce.status_code == 200: - page = requests.get(url, timeout=5).content - soup = BeautifulSoup(page, 'html.parser') - return soup - else: - responce.raise_for_status() - - - def get_locators(soup): # take get_parser - # Extract first/last page num - first = int(soup.select_one('span.current').string) - logger.info(f'First page: {first}') - last = int(soup.select_one('span.num-pages').string) + 1 - - # Extract img_code and extension - link = soup.find('img', {'class': 'fit-horizontal'}).attrs[""src""] - logger.info(f'Locator code: {link}') - code = re.search('galleries.([0-9]+)\/.\.(\w{3})', link) - book_code = code.group(1) # internal code - extension = code.group(2) # png or jpg - - # extract Dir book name - pattern = re.compile('pretty"":""(.*)""') - found = soup.find('script', text=pattern) - string = pattern.search(found.text).group(1) - dir_name = string.split('""')[0] - logger.info(f'Dir name: {dir_name}') - - logger.info(f'Hidden code: {book_code}') - print(f'Extension: {extension}') - print(f'Tot pages: {last}') - print(f'') - - return {'first_p': first, - 'last_p': last, - 'book_code': book_code, - 'ext': extension, - 'dir': dir_name - } - - - def setup_download_dir(path, dir): # (args.path_name, locator['dir']) - # Make folder if it not exist - filepath = os.path.join(f'{path}\{dir}') - if not os.path.exists(filepath): - try: - os.makedirs(filepath) - print(f'Directory created at: {filepath}') - except OSError as err: - print(f""Can't create {filepath}: {err}"") - return 
filepath - - def main(locator, filepath): - for image_n in range(locator['first_p'], locator['last_p']): - url = f""https://i.test.net/galleries/{locator['book_code']}/{image_n}.{locator['ext']}"" - logger.info(f'Url Img: {url}') - responce = requests.get(url, timeout=3) - if responce.status_code == 200: - img_data = requests.get(url, timeout=3).content - else: - responce.raise_for_status() # raise exception - - with open((os.path.join(filepath, f""{image_n}.{locator['ext']}"")), 'wb') as handler: - handler.write(img_data) # write image - print(f'Img {image_n} - DONE') - - - if __name__ == '__main__': - try: - locator = get_locators(get_parser(args.num)) # args.num ex. 241461 - main(locator, setup_download_dir(args.path_name, locator['dir'])) - except KeyboardInterrupt: - print(f'Program aborted...' + '\n') - - -Urls list: - - - def img_links(locator): - image_url = [] - for num in range(locator['first_p'], locator['last_p']): - url = f""https://i.test.net/galleries/{locator['book_code']}/{num}.{locator['ext']}"" - image_url.append(url) - logger.info(f'Url List: {image_url}') - return image_url","I found the solution in the book Fluent Python. Here is the snippet: - - def download_many(cc_list, base_url, verbose, concur_req): - counter = collections.Counter() - with futures.ThreadPoolExecutor(max_workers=concur_req) as executor: - to_do_map = {} - for cc in sorted(cc_list): - future = executor.submit(download_one, cc, base_url, verbose) - to_do_map[future] = cc - done_iter = futures.as_completed(to_do_map) - if not verbose: - done_iter = tqdm.tqdm(done_iter, total=len(cc_list)) - for future in done_iter: - try: - res = future.result() - except requests.exceptions.HTTPError as exc: - error_msg = 'HTTP {res.status_code} - {res.reason}' - error_msg = error_msg.format(res=exc.response) - except requests.exceptions.ConnectionError as exc: - error_msg = 'Connection error' - else: - error_msg = '' - status = res.status - if error_msg: - status = HTTPStatus.error - counter[status] += 1 - if verbose and error_msg: - cc = to_do_map[future] - print('*** Error for {}: {}'.format(cc, error_msg)) - return counter",1.2,True,1,5722 -2018-09-23 11:46:47.050,How to put a list of arbitrary integers on screen (from lowest to highest) in pygame proportionally?,"Let's say I have a list of 887123, 123, 128821, 9, 233, 9190902. I want to put those strings on screen using pygame (line drawing), and I want to do so proportionally, so that they fit the screen. If the screen is 1280x720, how do I scale the numbers down so that they keep their proportions to each other but fit the screen? -I did try techniques such as dividing every number by two until they are all smaller than 720, but that is skewed. Is there an algorithm for this sort of mathematical scaling?",I used this algorithm: x = (x / (maximum value)) * (720 - 1),0.3869120172231254,False,1,5723 -2018-09-23 16:36:12.867,Python3.6 and singletons - use case and parallel execution,"I have several unit-tests (only python3.6 and higher) which import a helper class to set up some things (e.g. pulling some Docker images) on the system before starting the tests. -The class does everything when it gets instantiated. It needs to stay alive because it holds some information which is evaluated at runtime and needed by the different tests. -Instantiating the helper class is very expensive, and I want to speed up my tests by instantiating it only once. My approach here would be to use a singleton, but I was told that in most cases a singleton is not needed. 
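-To make this concrete, here is a stripped-down sketch of the kind of helper I mean (the real one pulls Docker images in its setup; the names here are invented):
-
-    class TestEnvironment:
-        _instance = None
-
-        def __new__(cls):
-            if cls._instance is None:
-                cls._instance = super().__new__(cls)
-                cls._instance.images = ['db', 'broker']  # stands in for the expensive one-time setup
-            return cls._instance
-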
-Are there other options for me, or is a singleton here actually a good solution? -The option should allow executing all tests at once as well as every test on its own. -I also have some theoretical questions: -If I use a singleton here, how does python execute this in parallel? Does python wait for the first instance to be finished, or can there be a race condition? And if yes, how do I avoid it?","I can only give an answer on the ""are there other options for me"" part of your question... -The use of such a complex setup for unit-tests (pulling docker images etc.) makes me suspicious: -It can mean that your tests are in fact integration tests rather than unit-tests. Which could be perfectly fine if your goal is to find the bugs in the interactions between the involved components or in the interactions between your code and its system environment. (The fact that your setup involves Docker images gives the impression that you intend to test your system-under-test against the system environment.) If this is the case, I wish you luck getting the other aspects of your question answered (parallelization of tests, singletons and thread safety). Maybe it makes sense to tag your question ""integration-testing"" rather than ""unit-testing"" then, in order to attract the proper experts. -On the other hand, your complex setup could be an indication that your unit-tests are not yet designed properly and/or the system under test is not yet designed to be easily testable with unit-tests: unit-tests focus on the system-under-test in isolation - isolation from depended-on components, but also isolation from the specifics of the system environment. For such tests of a properly isolated system-under-test, a complex setup using Docker would not be needed. -If the latter is true, you could benefit from making yourself familiar with topics like ""mocking"", ""dependency injection"" or ""inversion of control"", which will help you design your system-under-test and your unit test cases such that they are independent of the system environment. Then your complex setup would no longer be necessary and the other aspects of your question (singleton, parallelization etc.) may no longer be relevant.",0.0,False,1,5724 -2018-09-24 09:39:59.467,How to increase the error limit in flake8 and pylint VS Code?,"As mentioned above, I would like to know how I can increase the number of errors shown in flake8 and pylint. I have installed both and they work fine when I am working with small files. I am currently working with a very large file (>18k lines) and there is no error highlighting done at the bottom part of the file; I believe the current limit is set to 100 and would like to increase it. -If this isn't possible, is there any way I can just do linting for my part of the code? I am just adding a function in this large file and would like to monitor the same.","You can use ""python.linting.maxNumberOfProblems"": 2000 to increase the number of problems being displayed, but the limit seems to be set to 1001, so more than 1001 problems can't be displayed.",0.0,False,1,5725 -2018-09-24 11:25:27.520,Knowledge graph in python for NLP,How do I build a knowledge graph in python from structured texts? Do I need to know any graph databases? Any resources would be of great help.,"A Knowledge Graph (KG) is just a virtual representation and not an actual graph stored as such. -To store the data you can use any of the existing databases, like SQL, MongoDB, etc. 
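-For a first experiment you do not even need a database; a minimal sketch with networkx, storing (subject, predicate, object) triples extracted from your texts, could look like this (the triples here are invented):
-
-    import networkx as nx
-
-    triples = [('Marie Curie', 'won', 'Nobel Prize'),
-               ('Marie Curie', 'born_in', 'Warsaw')]
-
-    kg = nx.DiGraph()
-    for subj, pred, obj in triples:
-        kg.add_edge(subj, obj, relation=pred)  # nodes are entities, edges carry the relation
-
-    print(kg['Marie Curie']['Warsaw']['relation'])  # -> born_in
-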
-To really benefit from the fact that we are storing graphs here, though, I'd suggest a dedicated graph database such as Neo4j.",0.0,False,1,5726 -2018-09-25 08:12:35.473,How to view Opendaylight topology on external webgui,"I'm exploring ODL and mininet; I am able to run both and populate the network nodes over ODL, and I can view the topology via the ODL default webgui. -I'm planning to create my own webgui, starting with a simple topology view. I need advice and guidelines on how I can achieve a topology view in my own webgui. I plan to use python and html - just a simple single-page html and python script. Hopefully someone could lead me the way. Please assist, and thank you.","If a web GUI for ODL would provide value for you, please consider working to contribute that upstream. The previous GUI (DLUX) has recently been deprecated because no one was supporting it, although it seems many people were using it.",0.0,False,1,5727 -2018-09-26 04:22:21.250,"Python3, calling super's __init__ from a custom exception","I have created a custom exception in python 3 and the overall code works just fine. But there is one thing I am not able to wrap my head around: why do I need to send my message to the Exception class's __init__(), and how does it convert the custom exception into that string message when I try to print the exception, since the code in Exception or even BaseException does not do much? -I am not quite able to understand why to call super().__init__() from a custom exception.","This is so that your custom exceptions can start off with the same instance attributes as a BaseException object does, including the value attribute, which stores the exception message and is needed by certain other methods such as __str__, which allows the exception object to be converted to a string directly. You can skip calling super().__init__ in your subclass's __init__ and instead initialize all the necessary attributes on your own if you want, but then you would not be taking advantage of one of the key benefits of class inheritance. Always call super().__init__ unless you have very specific reasons not to reuse any of the parent class's instance attributes.",0.3869120172231254,False,1,5728 -2018-09-26 21:19:30.580,Interpreter problem (apparently) with a project in PyCharm,"I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do? -More generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place? -EDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a ""venv"" sub-directory. My ""good"" projects don't have this thing. Is this a clue to what is going on? -EDIT 2: OK, just realized that when creating a new project, I can select ""New environment"" or ""Existing interpreter"", and I want ""Existing interpreter"". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it. 
Thanks.","Your project is most likely pointing to the wrong interpreter. E.G. Using a virtual environment when you want to use a global one. -You must point PyCharm to the correct interpreter that you want to use. -""File/Settings(Preferences On Mac)/Project: ... /Project Interpreter"" takes you to the settings associated with the interpreters. -This window shows all of the modules within the interpreter. -From here you can click the settings wheel in the top right and configure your interpreters. (add virtual environments and what not) -or you can select an existing interpreter from the drop down to use with your project.",0.2012947653214861,False,2,5729 -2018-09-26 21:19:30.580,Interpreter problem (apparently) with a project in PyCharm,"I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do? -More generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place? -EDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a ""venv"" sub-directory. My ""good"" projects don't have this thing. Is this a clue to what is going on? -EDIT 2: OK, just realized that when creating a new project, I can select ""New environment"" or ""Existing interpreter"", and I want ""Existing interpreter"". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it. Thanks.","It seems, when you are creating a new project, you also opt to create a new virtual environment, which then is created (default) in that venv sub-directory. -But that would only apply to new projects, what is going on with your old projects, changing their project interpreter environment i do not understand. -So what i would say is you have some corrupt settings (e.g. in ~/Library/Preferences/PyCharm2018.2 ), which are copied upon PyCharm upgrade. -You might try newly configure PyCharm by moving away those PyCharm preferences, so you can put them back later. -The Project configuration mainly, special the Project interpreter on the other hand is stored inside $PROJECT_ROOT/.idea and thus should not change.",1.2,True,2,5729 -2018-09-27 04:54:42.700,how can i check all the values of dataframe whether have null values in them without a loop,"if all(data_Window['CI']!=np.nan): -I have used the all() function with if so that if column CI has no NA values, then it will do some operation. But i got syntax error.","This gives you all a columns and how many null values they have. -df = pd.DataFrame({0:[1,2,None,],1:[2,3,None]) -df.isnull().sum()",0.0,False,1,5730 -2018-09-27 09:20:03.150,Choosing best semantics for related variables in an untyped language like Python,"Consider the following situation: you work with audio files and soon there are different contexts of what ""an audio"" actually is in same solution. 
-Typing makes this more obvious on one side; while Python has classes and typing, it is less explicit in the code than in Java. I think this occurs in any dynamically typed language. -My question is how to have less ambiguous variable names, and whether there is something like an official and widely accepted guideline, or even a standard like a PEP/RFC, for that or something comparable. -Examples for variables: - -A string type to designate the path/filename of the actual audio file -A file handle for the above to do the I/O -Then, in the package pydub, you deal with the type AudioSegment -While in the package moviepy, you deal with the type AudioFileClip - -Using all four together requires, in my eyes, a clever naming strategy, but maybe I am overlooking something. -Maybe this is a quite exotic example, but if you think of any other media types, this should provide a broader view. Likewise, is a Document a handle, a path or an abstract object?","There is no definitive standard/RFC for naming your variables. One option is to prefix/suffix your variables with a (possibly short form of the) type. For example, you can name a variable foo_Foo, where the variable foo_Foo is of type Foo.",0.0,False,1,5731 -2018-09-27 14:44:45.557,Holoviews - network graph - change edge color,I am using holoviews and bokeh with python 3 to create an interactive network graph from NetworkX. I can't manage to set the edge color to blank. It seems that the edge_color option does not exist. Do you have any idea how I could do that?,"Problem solved: the option to change edge color is edge_line_color and not edge_color.",0.3869120172231254,False,1,5732 -2018-09-27 15:09:52.837,Make Pipenv create the virtualenv in the same folder,"I want Pipenv to make the virtual environment in the same folder as my project (Django). -I searched and found the PIPENV_VENV_IN_PROJECT option, but I don't know where and how to use it.","This may help someone else... I found another easy way to solve this! -Just make an empty folder inside your project and name it .venv, 
You can force the GUI to update before executing the actions by using window.update_idletasks().,0.0,False,1,5734 -2018-09-27 20:02:47.580,In Python DataFrame how to find out number of rows that have valid values of columns,"I want to find the number of rows that have certain values such as None or """" or NaN (basically empty values) in all columns of a DataFrame object. How can I do this?","Use df.isnull().sum() to get number of rows with None and NaN value. -Use df.eq(value).sum() for any kind of values including empty string """".",0.2655860252697744,False,1,5735 -2018-09-28 10:00:04.037,.get + dict variable,"I have a charge object with information in charge['metadata']['distinct_id']. There could be the case that it's not set, therefore I tried it that way which doesn't work charge.get(['metadata']['distinct_id'], None) -Do you know how to do that the right way?","You don't say what the error is, but, two things possibly wrong - -it should be charge.get('metadata', None) -you can't directly do it on two consecutive levels. If the metadata key returns None, you can't go on and ask for the distinct_id key. You could return an empty dict and apply get to that, eg something like charge.get('metadata', {}).get('distinct_id', None)",1.2,True,2,5736 -2018-09-28 10:00:04.037,.get + dict variable,"I have a charge object with information in charge['metadata']['distinct_id']. There could be the case that it's not set, therefore I tried it that way which doesn't work charge.get(['metadata']['distinct_id'], None) -Do you know how to do that the right way?","As @blue_note mentioned you could not user two consecutive levels. However your can try something like -charge.get('metadata', {}).get('distinct_id') -here, you tried to get 'metadata' from charge and if it does not found then it will consider blank dictionary and try to get 'distinct_id' from there (technically it does not exists). In this scenario, you need not to worry about if metadata exists or not. If it exists then it will check for distinct_id from metadata or else it throws None. -Hope this will solve your problem. -Cheers..!",0.1352210990936997,False,2,5736 -2018-09-28 16:43:57.900,PyMongo how to get the last item in the collection?,"In the MongoDB console, I know that you can use $ last and $ natural. In PyMongo, I could not use it, maybe I was doing something wrong?","Another way is: -db.collection.find().limit(1).sort([('$natural',-1)]) -This seemed to work best for me.",0.2012947653214861,False,1,5737 -2018-09-29 12:08:21.843,how can I use Transfer Learning for LSTM?,I intent to implement image captioning. Would it be possible to transfer learning for LSTM? I have used pretrained VGG16(transfer learning) to Extract features as input of the LSTM.,"As I have discovered, we can't use Transfer learning on the LSTM weights. I think the causation is infra-structure of LSTM networks.",1.2,True,1,5738 -2018-09-29 19:52:13.553,Is there any way to retrieve file name using Python?,"In a Linux directory, I have several numbered files, such as ""day1"" and ""day2"". My goal is to write a code that retrieves the number from the files and add 1 to the file that has the biggest number and create a new file. So, for example, if there are files, 'day1', 'day2' and 'day3', the code should read the list of files and add 'day4'. To do so, at least I need to know how to retrieve the numbers on the file name.",Get all files with the os module/package (don't have the exact command handy) and then use regex(package) to get the numbers. 
If you don't want to look into regex you could remove the letters from your string with replace() and convert that string with int().,0.0,False,1,5739 -2018-09-30 05:33:24.990,"python 3, how print function changes output?","The following were what I did in python shell. Can anyone explain the difference? - - - -datetime.datetime.now() - datetime.datetime(2018, 9, 29, 21, 34, 10, 847635) -print(datetime.datetime.now()) - 2018-09-29 21:34:26.900063","The first is the result of calling repr on the datetime value, the second is the result of calling str on a datetime. -The Python shell calls repr on values other than None before printing them, while print tries str before calling repr (if str fails). -This is not dependent on the Python version.",1.2,True,1,5740 -2018-09-30 17:25:43.813,Python's cmd.Cmd case insensitive commands,"I am using python's CLI module which takes any do_* method and sets it as a command, so a do_show() method will be executed if the user type ""show"". -How can I execute the do_show() method using any variation of capitalization from user input e.g. SHOW, Show, sHoW and so on without giving a Command Not Found error? -I think the answer would be something to do with overriding the Cmd class and forcing it to take the user's input.lower() but idk how to do that :/",You should override onecmd to achieve desired functionality.,1.2,True,1,5741 -2018-10-01 07:38:38.010,Possible ways to embed python matplotlib into my presentation interactively,"I need to present my data in various graphs. Usually what I do is to take a screenshot of my graph (I almost exclusively make them with matplotlib) and paste it into my PowerPoint. -Unfortunately my direct superior seems not to be happy with the way I present them. Sometimes he wants certain things in log scale and sometimes he dislike my color palette. The data is all there, but because its an image there's no way I can change that in the meeting. -My superior seems to really care about those things and spend quite a lot of time telling me how to make plots in every single meeting. He (usually) will not comment on my data before I make a plot the way he wants. -That's where my question becomes relevant. Right now what I have in my mind is to have an interactive canvas embedded in my PowerPoint such that I can change the range of the axis, color of my data point, etc in real time. I have been searching online for such a thing but it comes out empty. I wonder if that could be done and how can it be done? -For some simple graphs Excel plot may work, but usually I have to present things in 1D or 2D histograms/density plots with millions of entries. Sometimes I have to fit points with complicated mathematical formulas and that's something Excel is incapable of doing and I must use scipy and pandas. -The closest thing to this I found online is rise with jupyter which convert a jupyter notebook into a slide show. I think that is a good start which allows me to run python code in real time inside the presentation, but I would like to use PowerPoint related solutions if possible, mostly because I am familiar with how PowerPoint works and I still find certain PowerPoint features useful. -Thank you for all your help. While I do prefer PowerPoint, any other products that allows me to modify plots in my presentation in real time or alternatives of rise are welcomed.","When putting a picture in PowerPoint you can decide whether you want to embed it or link to it. 
If you decide to link to the picture, you would be free to change it outside of powerpoint. This opens up the possibility for the following workflow: -Next to your presentation you have a Python IDE or Juypter notebook open with the scripts that generate the figures. They all have a savefig command in them to save to exactly the location on disc from where you link the images in PowerPoint. If you need to change the figure, you make the changes in the python code, run the script (or cell) and switch back to PowerPoint where the newly created image is updated. -Note that I would not recommend putting too much effort into finding a better solution to this, but rather spend the time thinking about good visual reprentations of the data, due to the following reasons: 1. If your instrutor's demands are completely unreasonable (""I like blue better than green, so you need to use blue."") than it's not worth spending effort into satisfying their demands at all. 2. If your instrutor's demands are based on the fact that the current reprentation does not allow to interprete the data correctly, this can be prevented by spending more thoughts on good plots prior to the presentation. This is a learning process, which I guess your instructor wants you to internalize. After all, you won't get a degree in computer science for writing a PowerPoint backend to matplotlib, but rather for being able to present your research in a way suited for your subject.",1.2,True,1,5742 -2018-10-01 18:01:04.283,"""No package 'coinhsl' found"": IPOPT compiles and passes test, but pyomo cannot find it?","I don't know if the problem is between me and Pyomo.DAE or between me and IPOPT. I am doing this all from the command-line interface in Bash on Ubuntu on Windows (WSL). When I run: - -JAMPchip@DESKTOP-BOB968S:~/examples/dae$ python3 run_disease.py - -I receive the following output: - -WARNING: Could not locate the 'ipopt' executable, which is required - for solver - ipopt Traceback (most recent call last): File ""run_disease.py"", line 15, in - results = solver.solve(instance,tee=True) File ""/usr/lib/python3.6/site-packages/pyomo/opt/base/solvers.py"", line - 541, in solve - self.available(exception_flag=True) File ""/usr/lib/python3.6/site-packages/pyomo/opt/solver/shellcmd.py"", line - 122, in available - raise ApplicationError(msg % self.name) pyutilib.common._exceptions.ApplicationError: No executable found for - solver 'ipopt' - -When I run ""make test"" in the IPOPT build folder, I reecieved: - -Testing AMPL Solver Executable... - Test passed! Testing C++ Example... - Test passed! Testing C Example... - Test passed! Testing Fortran Example... - Test passed! - -But my one major concern is that in the ""configure"" output was the follwing: - -checking for COIN-OR package HSL... not given: No package 'coinhsl' - found - -There were also a few warning when I ran ""make"". I am not at all sure where the issue lies. How do I make python3 find IPOPT, and how do I tell if I have IPOPT on the system for pyomo.dae to find? I am pretty confident that I have ""coibhsl"" in the HSL folder, so how do I make sure that it is found by IPOPT?","As sascha states, you need to make sure that the directory containing your IPOPT executable (likely the build folder) is in your system PATH. That way, if you were to open a terminal and call ipopt from an arbitrary directory, it would be detected as a valid command. 
This is distinct from being able to call make test from within the IPOPT build folder.",0.0,False,1,5743 -2018-10-02 13:17:45.260,how to disable printscreen key on windows using python,"Is there any way to disable the print screen key when running a python application? -Maybe editing the windows registry is the way? -Thanks!","printscreen is OS Functionality. -Their is No ASCII code for PrintScreen. -Even their are many ways to take PrintScreen. - -Thus, You can Disable keyboard but its difficult to stop user from taking PrintScreen.",0.0,False,1,5744 -2018-10-04 09:28:55.060,How does scrapy behave when enough resources are not available,"I am running multiple scrapers using the command line which is an automated process. -Python : 2.7.12 -Scrapy : 1.4.0 -OS : Ubuntu 16.04.4 LTS -I want to know how scrapy handles the case when - -There is not enough memory/cpu bandwidth to start a scraper. -There is not enough memory/cpu bandwidth during a scraper run. - -I have gone through the documentation but couldn't find anything. -Anyone answering this, you don't have to know the right answer, if you can point me to the general direction of any resource you know which would be helpful, that would also be appreciated","The operating system kills any process that tries to access more memory than the limit. -Applies to python programs too. and scrapy is no different. -More often than not, bandwidth is the bottleneck in scraping / crawling applications. -Memory would only be a bottleneck if there is a serious memory leak in your application. -Your application would just be very slow if CPU is being shared by many process on the same machine.",1.2,True,1,5745 -2018-10-04 17:55:05.990,how to change raspberry pi ip in flask web service,"I have a raspberry pi 3b+ and i'm showing ip camera stream using the Opencv in python. -My default ip in rasppberry is 169.254.210.x range and I have to put the camera in the same range. -How can i change my raspberry ip? -Suppose if I run the program on a web service such as a flask, can i change the raspberry pi server ip every time?","You can statically change your ip of raspberry pi by editing /etc/network/interfaces -Try editing a line of the file which contains address.",0.0,False,1,5746 -2018-10-04 19:48:49.993,"""No module named 'docx'"" error but ""requirement already satisfied"" when I try to install","From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location? -Edit -In case it's relevant - I installed docx using easy_install, not pip.","Please install python-docx. -Then you import docx (not python-docx)",0.0,False,2,5747 -2018-10-04 19:48:49.993,"""No module named 'docx'"" error but ""requirement already satisfied"" when I try to install","From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location? -Edit -In case it's relevant - I installed docx using easy_install, not pip.","pip show docx -This will show you where it is installed. However, if you're using python3 then - pip install python-docx -might be the one you need.",0.0,False,2,5747 -2018-10-05 16:36:21.090,How can I see what packages were installed using `sudo pip install`?,"I know that installing python packages using sudo pip install is bad a security risk. 
Unfortunately, I found this out after installing quite a few packages using sudo. -Is there a way to find out what python packages I installed using sudo pip install? The end goal being uninstallment and correctly re-installing them within a virtual environment. -I tried pip list to get information about the packages, but it only gave me their version. pip show gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.",try the following command: pip freeze,0.0,False,2,5748 -2018-10-05 16:36:21.090,How can I see what packages were installed using `sudo pip install`?,"I know that installing python packages using sudo pip install is bad a security risk. Unfortunately, I found this out after installing quite a few packages using sudo. -Is there a way to find out what python packages I installed using sudo pip install? The end goal being uninstallment and correctly re-installing them within a virtual environment. -I tried pip list to get information about the packages, but it only gave me their version. pip show gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.","any modules you installed with sudo will be owned by root, so you can open your shell/terminal, cd to site-packages directory & check the directories owner with ls -la, then any that has root in the owner column is the one you want to uninstall.",1.2,True,2,5748 -2018-10-06 20:14:32.777,Is it possible to change the loss function dynamically during training?,"I am working on a machine learning project and I am wondering whether it is possible to change the loss function while the network is training. I'm not sure how to do it exactly in code. -For example, start training with cross entropy loss and then halfway through training, switch to 0-1 loss.",You have to implement your own algorithm. This is mostly possible with Tensorflow.,0.0,False,1,5749 -2018-10-08 17:02:32.603,Keras LSTM Input Dimension understanding each other,"but I have been trying to play around with it for awhile. I've seen a lot of guides on how Keras is used to build LSTM models and how people feed in the inputs and get expected outputs. But what I have never seen yet is, for example stock data, how we can make the LSTM model understand patterns between different dimensions, say close price is much higher than normal because volume is low. -Point of this is that I want to do a test with stock prediction, but make it so that each dimensions are not reliant on previous time steps, but also reliant on other dimensions it haves as well. -Sorry if I am not asking the question correctly, please ask more questions if I am not explaining it clearly.","First: Regressors will replicate if you input a feature that gives some direct intuition about the predicted input might be to secure the error is minimized, rather than trying to actually predict it. Try to focus on binary classification or multiclass classification, whether the closing price go up/down or how much. -Second: Always engineer the raw features to give more explicit patterns to the ML algorithm. Think on inputs as Volume(t) - Volume(t-1), close(t)^2 - close(t-1)^2, technical indicators(RSI, CCI, OBV etc.) Create your own features. You can use the pyti library for technical indicators.",0.0,False,1,5750 -2018-10-09 06:31:10.137,SoftLayer API: order a 128 subnet,"We are trying to order a 128 subnet. 
",0.0,False,1,5750 -2018-10-09 06:31:10.137,SoftLayer API: order a 128 subnet,"We are trying to order a 128 subnet. But it looks like it doesn't work; we get an error saying Invalid combination specified for ordering a subnet. The same code works to order a 64 subnet. Any thoughts on how to order a 128 subnet? - -network_mgr = SoftLayer.managers.network.NetworkManager(client) -network_mgr.add_subnet('private', 128, vlan_id, test_order=True) - - -Traceback (most recent call last): - File ""subnet.py"", line 11, in - result = nwmgr.add_subnet('private', 128, vlan_id, test_order=True) - File ""/usr/local/lib/python2.7/site-packages/SoftLayer/managers/network.py"", line 154, in add_subnet - raise TypeError('Invalid combination specified for ordering a' -TypeError: Invalid combination specified for ordering a subnet.","Currently it seems it is not possible to add a 128 IP subnet to the order; the package used by the manager to order subnets only allows adding subnets with a capacity of 64, 32, 16, 8 or 4. -This is because the package does not contain any item for a subnet of 128 IP addresses, which is the reason why you are getting the exception you provided. -You may also verify this through the Portal UI; if you can see a 128 IP address option through the UI in your account, please update this forum with a screenshot.",0.0,False,1,5751 -2018-10-09 10:19:24.127,Add Python to the Windows path,"If I forget to add Python to the path while installing it, how can I add it to my Windows path? -Without adding it to the path I am unable to use it. Also, what if I want to make python 3 the default?","Edit Path in Environment Variables -Add Python's path to the end of the list (these are separated by ';'). -For example: -C:\Users\AppData\Local\Programs\Python\Python36; -C:\Users\AppData\Local\Programs\Python\Python36\Scripts - -and if you want to make it the default -you have to edit the system environment variables -edit the following from the Path - -C:\Windows;C:\Windows\System32;C:\Python27 - -Now Python 3 will have become the default python on your system -You can check it by running python --version
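From inside Python you can double-check which interpreter the shell actually picked up (a small sanity check):

    import sys
    print(sys.executable)  # full path of the running interpreter
    print(sys.version)     # should report 3.x once Python 3 is the default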
",0.3869120172231254,False,1,5752 -2018-10-09 11:15:40.860,"Deploying python with docker, images too big","We've built a large python repo that uses lots of libraries (numpy, scipy, tensorflow, ...) and have managed these dependencies through a conda environment. Basically we have lots of developers contributing, and anytime someone needs a new library for something they are working on, they 'conda install' it. -Fast forward to today, and now we need to deploy some applications that use our repo. We are deploying using docker, but are finding that these images are really large and causing some issues, e.g. 10+ GB. However, each individual application only uses a subset of all the dependencies in the environment.yml. -Is there some easy strategy for dealing with this problem? In a sense I need to know the dependencies of each application, but I'm not sure how to do this in an automated way. -Any help here would be great. I'm new to this whole AWS, Docker, and python deployment thing... We're really a bunch of engineers and scientists who need to scale up our software. We have something that works; it just seems like there has to be a better way.","First see if there are easy wins to shrink the image, like using Alpine Linux and being very careful about what gets installed with the OS package manager, ensuring you only allow installing dependencies or recommended items when truly required, and that you clean up and delete artifacts like package lists and big things you may not need, like Java, etc. -The base Anaconda/Ubuntu image is ~3.5GB in size, so it's not crazy that with a lot of extra installations of heavy third-party packages you could get up to 10GB. In production image processing applications, I routinely worked with Docker images in the range of 3GB to 6GB, and those sizes were after we had heavily optimized the container. -To your question about splitting dependencies, you should provide each different application with its own package definition, basically a setup.py script and some other details, including dependencies listed in some mix of requirements.txt for pip and/or environment.yaml for conda. -If you have Project A in some folder / repo and Project B in another, you want people to easily be able to do something like pip install or conda env create -f ProjectB_environment.yml or something, and voila, that application is installed. -Then when you deploy a specific application, have some CI tool like Jenkins build the container for that application using a FROM line to start from your thin Alpine / whatever container, and only perform conda install or pip install for the dependency file of that project, and not all the others. -This also has the benefit that multiple different projects can declare different version dependencies even among the same set of libraries. Maybe Project A is ready to upgrade to the latest and greatest pandas version, but Project B needs some refactoring before the team wants to test that upgrade. This way, when CI builds the container for Project B, it will have a Python dependency file with one set of versions, while in Project A's folder or repo of source code, it might have something different.",1.2,True,1,5753 -2018-10-09 15:27:34.223,Text classification by pattern,"Could you recommend the best way to do this: I have a list of phrases, for example [""free flower delivery"",""flower delivery Moscow"",""color + home delivery"",""flower delivery + delivery"",""order flowers + with delivery"",""color delivery""] and a pattern - ""flower delivery"". I need to get a list of the phrases that are as close as possible to the pattern. -Could you give some advice on how to do it?","The answer given by nflacco is correct. In addition to that, have you tried edit distance? Try fuzzywuzzy (pip install fuzzywuzzy); it uses edit distance to give you a score for how near two sentences are.
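A minimal sketch of that approach (assuming fuzzywuzzy is installed; process.extract ranks the phrases by similarity score):

    from fuzzywuzzy import process

    phrases = ['free flower delivery', 'flower delivery Moscow',
               'color + home delivery', 'flower delivery + delivery',
               'order flowers + with delivery', 'color delivery']
    # returns (phrase, score) pairs, best matches first
    print(process.extract('flower delivery', phrases, limit=len(phrases)))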
",0.2012947653214861,False,1,5754 -2018-10-10 10:39:12.207,TensorFlow: Correct way of using steps in Stochastic Gradient Descent,"I am currently using the TensorFlow tutorial's first_steps_with_tensor_flow.ipynb notebook to learn TF for implementing ML models. In the notebook, they have used Stochastic Gradient Descent (SGD) to optimize the loss function. Below is the snippet of my_input_function: -def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): -Here, it can be seen that the batch_size is 1. The notebook uses a housing data set containing 17000 labeled examples for training. This means for SGD, I will be having 17000 batches. -LRmodel = linear_regressor.train(input_fn = lambda:my_input_fn(my_feature, - targets), steps=100) -I have three questions - -Why is steps=100 in the linear_regressor.train method above? Since we have 17000 batches, and steps in ML means the count for evaluating one batch, in the linear_regressor.train method steps = 17000 should be initialized, right? -Is the number of batches equal to the number of steps/iterations in ML? -With my 17000 examples, if I keep my batch_size=100, steps=500, and num_epochs=5, what does this initialization mean and how does it correlate to 170 batches?","A step literally means one refresh of the parameters on one batch; so linear_regressor.train with steps=100 will train 100 times with this batch_size of 1, seeing only 100 of the examples. -An epoch means one refresh over the whole data, which is 17,000 examples in your set. With batch_size=100, one epoch is 170 steps, so steps=500 processes 500 * 100 = 50,000 examples, i.e. roughly 3 passes over the data.",-0.3869120172231254,False,1,5755 -2018-10-11 15:24:17.267,Writing unit tests in Python,"I have a task in which I have a csv file containing some sample data. The task is to convert the data inside the csv file into other formats like JSON, HTML, YAML etc. after applying some data validation rules. -Now I am also supposed to write some unit tests for this in pytest or the unittest module in Python. -My question is how do I actually write the unit tests for this, since I am converting them to different JSON/HTML files? Should I prepare some sample files and then do a comparison with them in my unit tests? -I think only the data validation part of the task can be tested using unittest and not the creation of files in different formats, right? -Any ideas would be immensely helpful. -Thanks in advance.","You should do functional tests, testing the whole pipeline from a csv file to the end result, but unit tests are about checking that individual steps work. -So for instance, can you read a csv file properly? Does it fail as expected when you don't provide a csv file? Are you able to check each validation unit? Are they failing when they should? Are they passing valid data? -And of course, the result must be tested as well. Starting from a known internal representation, is the resulting JSON valid? Does it contain all the required data? Same for YAML and HTML. You should not test the formatting, but really what was output and whether it's correct. -You should always test that valid data passes and that incorrect data doesn't, at each step of your workflow.",1.2,True,1,5756 -2018-10-12 12:28:16.983,How to get filtered rowCount in a QSortFilterProxyModel,"I use a QSortFilterProxyModel to filter a QSqlTableModel's data, and want to get the filtered rowCount. -But when I call the QSortFilterProxyModel.rowCount method, the QSqlTableModel's rowCount is returned. -So how can I get the filtered row count?","After setting the filter on the QSortFilterProxyModel, call rowCount() on the proxy model itself; the proxy reports the filtered count, while the source model keeps reporting the unfiltered one.
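A minimal sketch of the difference (PyQt5 is an assumption here; a plain QStandardItemModel stands in for the QSqlTableModel):

    from PyQt5 import QtCore, QtGui

    app = QtCore.QCoreApplication([])
    source = QtGui.QStandardItemModel()
    for word in ['apple', 'banana', 'apricot']:
        source.appendRow(QtGui.QStandardItem(word))

    proxy = QtCore.QSortFilterProxyModel()
    proxy.setSourceModel(source)
    proxy.setFilterFixedString('ap')  # keep rows whose text contains 'ap'
    print(proxy.rowCount())   # 2 -- the filtered count
    print(source.rowCount())  # 3 -- the unfiltered count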
",0.0,False,1,5757 -2018-10-13 10:45:57.270,python 3.7 setting environment variable path,"I installed Anaconda 3 and wanted to execute python from the shell. It returned that it's either written wrong or does not exist. Apparently, I have to add a path to the environment variables. -Can someone tell me how to do this? -Environment: Windows 10, 64 bit and python 3.7 -Ps: I know the web is full of this, but I am notoriously afraid to make a mistake. And I did not find an exact entry for my environment. Thanks in advance. -Best Daniel","Windows: - -search for -->Edit the system environment variables -In the Advanced tab, click Environment variables -In System variables, select PATH and click edit. Now click new, add your path. -Click Apply and close. - -Now, check in the command prompt",1.2,True,1,5758 -2018-10-14 02:29:15.473,"Given two lists of ints, how can we find the closest number in one list from the other one?","Given I have two different lists of ints. -a = [1, 4, 11, 20, 25] and b = [3, 10, 20] -I want to return a list of length len(b) that stores the closest number in a for each int in b. -So, this should return [4, 11, 20]. -I can do this by brute force, but what is a more efficient way to do this? -EDIT: It would be great if I can do this with the standard library only, if needed.","Use binary search, assuming the lists are in order. -Brute force here is O(len(a)*len(b)), roughly O(n^2). If the lists aren't sorted, you can sort a with Timsort in O(n log n) and then binary-search it once per element of b at O(log n) each, giving O((len(a)+len(b)) log len(a)) overall, which is faster than O(n^2) for large inputs. For lists this small, though, brute force is perfectly fine.
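With just the standard library, that looks roughly like this:

    import bisect

    def closest_values(a, b):
        a = sorted(a)                     # O(n log n), done once
        out = []
        for x in b:
            i = bisect.bisect_left(a, x)  # O(log n) per lookup
            # the nearest value is one of the two neighbours of the insertion point
            candidates = [a[j] for j in (i - 1, i) if 0 <= j < len(a)]
            out.append(min(candidates, key=lambda v: abs(v - x)))
        return out

    print(closest_values([1, 4, 11, 20, 25], [3, 10, 20]))  # [4, 11, 20]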
",0.0,False,1,5759 -2018-10-14 18:17:29.503,Python tasks and DAGs with different conda environments,"Say that most of my DAGs and tasks in AirFlow are supposed to run Python code on the same machine as the AirFlow server. -Can I have different DAGs use different conda environments? If so, how should I do it? For example, can I use the Python Operator for that? Or would that restrict me to using the same conda environment that I used to install AirFlow? -More generally, where/how should I ideally activate the desired conda environment for each DAG or task?","The Python that is running the Airflow worker code is the one whose environment will be used to execute the code. -What you can do is have separate named queues for separate execution environments on different workers, so that only a specific machine or group of machines will execute a certain DAG.",1.2,True,1,5760 -2018-10-14 18:54:30.970,Is it possible to make my own encryption when sending data through sockets?,For example in python if I'm sending data through sockets could I make my own encryption algorithm to encrypt that data? Would it be unbreakable since only I know how it works?,"Yes you can. Would it be unbreakable? No. This is called security through obscurity. You're relying on the fact that nobody knows how it works. But can you really rely on that? -Someone is going to receive the data, and they'll have to decrypt it. The code must run on their machine for that to happen. If they have the code, they know how it works. Well, at least anyone with a lot of spare time and nothing else to do can easily reverse engineer it, and there goes your obscurity. -Is it feasible to make your own algorithm? Sure. A bit of XOR here, a bit of shuffling there... eventually you'll have an encryption algorithm. It probably wouldn't be a good one, but it would do the job, at least until someone tries to break it; then it probably wouldn't last a day. -Does Python care? Do sockets care? No. You can do whatever you want with the data. It's just bits after all; what they mean is up to you. -Are you a cryptographer? No, otherwise you wouldn't be here asking this. So should you do it? No.
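To make the point concrete, this is the kind of home-made cipher being described, and it is a toy, not a recommendation:

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # 'encrypts' by XOR-ing with a repeating key; running it again decrypts
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    secret = xor_cipher(b'attack at dawn', b'key')
    print(xor_cipher(secret, b'key'))  # b'attack at dawn'
    # anyone with the code, or a guess at the key length, can break this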
",1.2,True,1,5761 -2018-10-14 19:10:42.147,imshow() with desired framerate with opencv,"Is there any workaround for using cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy postprocessing on the frames (both in a separate thread, so it loads all frames into a Queue and the main thread isn't slowed by the computation). -I tried to fix the framerate by calculating the time used for ""reading"" the image from the queue and then subtracting that value from the number of milliseconds available for one frame: -if I have an input video with 50FPS and I want to play it back in real time I do 1000/50 => 20ms per frame. -And then wait that time using cv2.waitKey() -But I still get some laggy output, which is slower than the source video","I don't believe there is such a function in opencv, but maybe you could improve your method by adding a dynamic wait time using timers: timeit.default_timer() -Calculate the time taken to process and subtract that from the expected frame time, and maybe add a few ms of buffer. -e.g. cv2.waitKey((1000/50) - (time processing finished - time read started) - 10) -or you could have more rigid timing, e.g. script start time + frame# * 20ms - time processing finished -I haven't tried this personally, so I'm not sure if it will actually work; it also might be worth having a check so the number isn't below 1.
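A sketch of that dynamic wait (the 50 fps budget comes from the question; the capture source is a placeholder):

    import cv2
    import time

    cap = cv2.VideoCapture('input.mp4')  # placeholder path
    frame_budget_ms = 1000 / 50          # 20 ms per frame for a 50 fps source
    while True:
        start = time.perf_counter()
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow('playback', frame)
        elapsed_ms = (time.perf_counter() - start) * 1000
        # wait only for what is left of the frame budget, never less than 1 ms
        cv2.waitKey(max(1, int(frame_budget_ms - elapsed_ms)))
    cap.release()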
",1.2,True,1,5762 -2018-10-16 21:43:21.673,"Azure Machine Learning Studio execute python script, Theano unable to execute optimized C-implementations (for both CPU and GPU)","I am executing a python script in Azure machine learning studio. I am including other python scripts and the python library Theano. I can see that Theano gets loaded and I get the proper result after the script executes. But I see the error message: - -WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string. - -Does anyone know how to solve this problem? Thanks!","I don't think you can fix that - the Python script environment in Azure ML Studio is rather locked down; you can't really configure it (except for choosing from a small selection of Anaconda/Python versions). -You might be better off using the new Azure ML service, which allows you considerably more configuration options (including using GPUs and the like).",1.2,True,1,5763 -2018-10-17 14:07:06.357,how to use pip to install a package if there are two of the same version of python on windows,"I have two of the same version of python on windows. Both are 3.6.4. I installed one of them myself, and the other one came with Anaconda. -My question is how do I use pip to install a package for one of them? It looks like the common method will not work since the two python versions are the same.","pip points to only one installation, because pip is a script belonging to one python. -If you have one Python in your PATH, then it's that python and that pip that will be used.",0.2012947653214861,False,2,5764 -2018-10-17 14:07:06.357,how to use pip to install a package if there are two of the same version of python on windows,"I have two of the same version of python on windows. Both are 3.6.4. I installed one of them myself, and the other one came with Anaconda. -My question is how do I use pip to install a package for one of them? It looks like the common method will not work since the two python versions are the same.","Use virtualenv, a conda environment or pipenv; it will help with managing packages for different projects.",0.0,False,2,5764 -2018-10-18 03:14:15.287,How can I make the computer read a python file instead of py?,"I have a problem with installing numpy with python 3.6 and I have windows 10 64 bit -Python 3.6.6 -But when I typed python on cmd this appeared: -Python is not recognized as an internal or external command -I typed py and it solved the problem, but how can I install numpy? -I tried to type the command set path=c:/python36 -and copy-paste the actual path on cmd but it doesn't work -I also tried to edit the environment path by typing a ; and c:/python36 and restarting, but it doesn't help -I used pip install numpy and downloaded pip but it doesn't work",Try pip3 install numpy. To install python 3 packages you should use pip3,0.0,False,2,5765 -2018-10-18 03:14:15.287,How can I make the computer read a python file instead of py?,"I have a problem with installing numpy with python 3.6 and I have windows 10 64 bit -Python 3.6.6 -But when I typed python on cmd this appeared: -Python is not recognized as an internal or external command -I typed py and it solved the problem, but how can I install numpy? -I tried to type the command set path=c:/python36 -and copy-paste the actual path on cmd but it doesn't work -I also tried to edit the environment path by typing a ; and c:/python36 and restarting, but it doesn't help -I used pip install numpy and downloaded pip but it doesn't work","On Windows, the py command should be able to launch any Python version you have installed. Each Python installation has its own pip. To be sure you get the right one, use py -3.6 -m pip instead of just pip. - -You can use where pip and where pip3 to see which Python's pip they mean. Windows just finds the first one on your path. - -If you activate a virtualenv, then you should get the right one for the virtualenv while the virtualenv is active.",0.0,False,2,5765 -2018-10-18 09:53:46.373,Is it possible to manipulate data from csv without the need for producing a new csv file?,"I know how to import and manipulate data from csv, but I always need to save to xlsx or so to see the changes. Is there a way to see 'live changes' as if I am already using Excel? -PS using pandas -Thanks!",This is not possible using pandas. The lib creates a copy of your .csv / .xls file and stores it in RAM. So all changes are applied to the file stored in your memory and not on disk.,1.2,True,1,5766 -2018-10-19 09:04:38.457,how to remove zeros after decimal from string remove all zero after dot,"I have a data frame with an object column, let's say col1, which has values like: -1.00, -1, -0.50, -1.54 -I want to have output like the below: -1, -1, -0.5, -1.54 -Basically, remove zeros after the decimal point if there is no non-zero digit after them. Please note that I need an answer for a dataframe; pd.set_option and round don't work for me.","A quick-and-dirty solution is to use ""%g"" % value, which will convert floats 1.5 to 1.5 but 1.0 to 1 and so on. The negative side-effect is that large numbers will be represented in scientific notation like 4.44e+07.
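Applied to the dataframe column from the question, that trick looks like this (note the column becomes strings):

    import pandas as pd

    df = pd.DataFrame({'col1': [1.00, 1, 0.50, 1.54]})
    df['col1'] = df['col1'].map(lambda v: '%g' % v)
    print(df['col1'].tolist())  # ['1', '1', '0.5', '1.54']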
",0.0,False,1,5767 -2018-10-19 10:34:41.947,Call Python functions from C# in Visual Studio Python support VS 2017,"This is related to the new features Visual Studio has introduced - Python support and Machine Learning projects. -I have installed the support and found that I can create a python project and can run it. However, I could not find how to call a python function from another C# file. -For example, I created a classifier.py from the given project samples; now I want to run the classifier and get results from another C# class. -If there is no such portability, then how is it different from creating a C# Process class object and running Python.exe with our py file as a parameter?","As per the comments, python support has come to visual studio. Visual studio supports running and debugging python scripts. -However, calling a python function from a C# function and vice versa is not supported yet. -Closing the thread. Thanks for the suggestions.",1.2,True,1,5768 -2018-10-19 10:55:44.210,Running Jenkinsfile with multiple Python versions,"I have a multibranch pipeline set up in Jenkins that runs a Jenkinsfile, which uses pytest for testing scripts, outputs the results using the Cobertura plug-in and checks code quality with Pylint and the Warnings plug-in. -I would like to test the code with Python 2 and Python 3 using virtualenv, but I do not know how to perform this in the Jenkinsfile, and the Shining Panda plug-in will not work for multibranch pipelines (as far as I know). Any help would be appreciated.","You can do it even using vanilla Jenkins (without any plugins). The 'biggest' problem will be proper parametrization. But let's start from the beginning. -2 versions of Python -When you install 2 versions of python on a single machine you will have 2 different exec files. For python2 you will have python and for python3 you will have python3. Even when you create a virtualenv (use venv) you will have both of them. So you are able to run unittests against both versions of python. It's just a matter of executing the proper command from a batch/bash script. -Jenkins -There are many ways of performing it: - -you can prepare separate jobs for both the python 2 and 3 versions of the tests and run them from the Jenkinsfile -you can define the whole pipeline in a single Jenkinsfile where each python test is a different stage (they can be run one after another or concurrently)",0.3869120172231254,False,1,5769 -2018-10-20 01:58:46.053,How to find redundant paths (subpaths) in the trajectory of a moving object?,"I need to track a moving deformable object in a video (but only in 2D space). How do I find the paths (subpaths) revisited by the object in the span of its whole trajectory? For instance, if the object traced a path, p0-p1-p2-...-p10, I want to find the number of cases where the object traced either p0-...-p10 or a sub-path like p3-p4-p5. Here, p0,p1,...,p10 represent object positions (in (x,y) pixel coordinates at the respective instants). Also, how do I know at which frame(s) these paths (subpaths) are being revisited?","I would first create a detection procedure that outputs a list of points visited along with their video frame numbers. Then use list exploration functions to know how many repeated subsequences are found and where. -As you see I don't write your code. If you need any more advice please ask!",0.0,False,1,5770 -2018-10-20 13:20:10.283,Python - How to run script continuously to look for files in Windows directory,"I got a requirement to parse message files in .txt format in real time, as and when they arrive in an incoming windows directory. The directory is in my local Windows Virtual Machine, something like D:/MessageFiles/ -I wrote a Python script to parse the message files, because it's a fixed-width file and it parses all the files in the directory and generates the output. Once the files are successfully parsed, they are moved to an archive directory. Now, I would like to make this script run continuously so that it looks for the incoming message files in the directory D:/MessageFiles/ and performs the processing as and when it sees new files in the path. -Can someone please let me know how to do this?","There are a few ways to do this; it depends on how fast you need it to archive the files. -If the frequency is low, for example every hour, you can try to use the windows task scheduler to run the python script. -If we are talking high frequency, or you really want a python script running 24/7, you could put it in a while loop and at the end of the loop do time.sleep(). -If you go with this, I would recommend not blindly parsing the entire directory on every run, but instead finding a way to check whether new files have been added to the directory (such as the number of files, perhaps, or the total size), and then if there is a fluctuation you can archive.
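A minimal polling loop along those lines (directory names from the question; parse_file stands in for the real fixed-width parser):

    import os
    import shutil
    import time

    SRC = r'D:\MessageFiles'
    ARCHIVE = r'D:\MessageFiles\archive'

    def parse_file(path):
        pass  # placeholder for your parsing logic

    while True:
        for name in os.listdir(SRC):
            path = os.path.join(SRC, name)
            if os.path.isfile(path) and name.endswith('.txt'):
                parse_file(path)
                shutil.move(path, os.path.join(ARCHIVE, name))
        time.sleep(10)  # poll every 10 seconds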
",1.2,True,1,5771 -2018-10-20 15:04:32.733,PyOpenGL camera system,"I'm confused about how the PyOpenGL camera works or how to implement it. Am I meant to rotate and move the whole world around the camera, or is there a different way? -I couldn't find anything that could help me and I don't know how to translate C to python. -I just need a way to transform the camera that can help me understand how it works.","To say it bluntly: There is no such thing as a ""camera"" in OpenGL (neither is there in DirectX, or Vulkan, or in any of the legacy 3D graphics APIs). The effects of a camera are understood as some parameter that contributes to the ultimate placement of geometry inside the viewport volume. -The sooner you understand that all that current GPUs do is offer massively accelerated computational resources to set the values of pixels in a 2D grid, where the regions of pixels changed are mere points, lines or triangles on a 2D plane onto which they are projected from an arbitrarily dimensioned, abstract space, the better. -You're not even moving the world around the camera. Setting up transformations is actually erecting the stage in which ""the world"" will appear in the first place. Any notion of a ""camera"" is an abstraction created by a higher-level framework, like a third-party 3D engine or your own creation. -So instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way: -What kind of transformations do I have to chain up, to give a tuple of numbers that is called ""position"" an actual meaning, by letting this position turn up at a certain place on the visible screen? -You really ought to think that way, because that is what's actually happening.",1.2,True,1,5772 -2018-10-21 13:11:30.197,Anaconda Installation on Azure Web App Services,"I install my python modules via pip for my Azure Web Apps. But some of the python libraries that I need are only available in conda. I have been trying to install anaconda on Azure Web Apps (windows/linux), with no success so far. Any suggestions/examples on how to use a conda env on azure web apps?","Currently, Azure App Service only supports the official Python to be installed as extensions. Instead of using the normal App Service, I would suggest you use a Web App for Containers so that you can deploy your web app as a docker container. I suppose this is the only solution until Microsoft supports Anaconda on App Service.",0.3869120172231254,False,1,5773 -2018-10-21 15:08:58.620,Why tokenize/preprocess words for language analysis?,"I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis. -Surely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?","Perhaps I'm being overly correct, but doesn't tokenization simply refer to splitting up the input stream (of characters, in this case) based on delimiters to receive whatever is regarded as a ""token""? -Your tokens can be arbitrary: you can perform analysis on the word level, where your tokens are words and the delimiter is any space or punctuation character. It's just as likely that you analyse n-grams, where your tokens correspond to a group of words and delimiting is done e.g. by sliding a window. -So in short, in order to analyse words in a stream of text, you need to tokenize to receive ""raw"" words to operate on. -Tokenization however is often followed by stemming and lemmatization to reduce noise. This becomes quite clear when thinking about sentiment analysis: if you see the tokens happy, happily and happiness, do you want to treat them each separately, or wouldn't you rather combine them into three instances of happy to better convey a stronger notion of ""being happy""?
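For the word-count use case in the question, tokenizing plus counting is only a few lines with the standard library:

    import re
    from collections import Counter

    tweet = 'Happily counting happy words: happy, HAPPY!'
    # a crude tokenizer: lowercase, keep only alphabetic runs
    tokens = re.findall(r'[a-z]+', tweet.lower())
    print(Counter(tokens).most_common(3))  # [('happy', 3), ...]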
",1.2,True,2,5774 -2018-10-21 15:08:58.620,Why tokenize/preprocess words for language analysis?,"I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis. -Surely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?","Tokenization is an easy way of understanding the lexicon/vocabulary in text processing. -A basic first step in analyzing language or patterns in text is to remove symbols/punctuation and stop words. With tokenization you are able to split the large chunks of text to identify and remove text which might not add value; in many cases, stop words like 'the', 'a', 'and', etc. do not add much value in identifying words of interest. -Word frequencies are also very common in understanding the usage of words in text; Google's Ngram allows for language analysis and plots out the popularity/frequency of a word over the years. If you do not tokenize or split the strings, you will not have a basis to count the words that appear in a text. -Tokenization also allows you to run more advanced analysis, for example tagging the parts of speech or assigning sentiments to certain words. Also for machine learning, texts are mostly preprocessed to convert them to arrays which are used in the different layers of neural networks. Without tokenizing, the inputs will all be too distinct to run any analysis on.",0.0,False,2,5774 -2018-10-23 13:07:01.447,Shutdown (a script) one raspberry pi with another raspberry pi,"I am currently working on a school project. We need to be able to shut down (and maybe restart) a python script that is running on another raspberry pi using a button. -I thought that the easiest thing might just be to shut down the pi from the other pi. But I have no experience on this subject. -I don't need an exact guide (I appreciate all the help I can get) but does anyone know how one might do this?","Well, first we should ask if the Pi you are trying to shut down is connected to a network (LAN or the internet, doesn't matter). -If the answer is yes, you can simply connect to your Pi through SSH and call shutdown.sh. -I don't know why you want another Pi; you can do it through any device connected to the same network as your first Pi (Wi-Fi or ethernet if LAN, or simply from anywhere if it's open to the internet). -You could make a smartphone app, or any kind of code that can connect to SSH (all of them).",0.0,False,1,5775 -2018-10-23 15:25:12.317,"python+docker: docker volume mounting with bad perms, data silently missing","I'm running into an issue with volume mounting, combined with the creation of directories in python. -Essentially inside my container, I'm writing to some path /opt/…, and I may have to make the path (which I'm using os.makedirs for). -If I mount a host file path like -v /opt:/opt, with bad ""permissions"" where the docker container does not seem to be able to write to, the creation of the path inside the container DOES NOT FAIL. The makedirs(P) works, because inside the container it can make the dir just fine, because it has sudo permissions. However, nothing gets written, silently, on the host at /opt/…. The data just isn't there, but no exception is ever raised. -If I mount a path with proper/open permissions, like -v /tmp:/opt, then the data shows up on the host machine at /tmp/… as expected. -So, how do I not silently fail if there are no write permissions on the host on the left side of the -v argument? -EDIT: my question is ""how do I detect this bad deployment scenario, crash, and fail fast inside the container, if the person who deploys the container does it wrong""? Just silently not writing data isn't acceptable.","The bad mount is root on the host, right, and the good mount is the user in the Docker group on the host? Can you check the user/group of the mounted /opt? It should be different from that of /tmp.",0.0,False,1,5776 -2018-10-24 06:17:42.420,Building comprehensive scraping program/database for real estate websites,"I have a project I'm exploring where I want to scrape the real estate broker websites in my country (30-40 websites of listings) and keep the information about each property in a database. -I have experimented a bit with scraping in python using both BeautifulSoup and Scrapy. -What I would ideally like to achieve is a daily updated database that will find new properties and remove properties when they are sold. -Any pointers as to how to achieve this? -I am relatively new to programming and open to learning different languages and resources if python isn't suitable. -Sorry if this forum isn't intended for this kind of vague question :-)",Build a scraper and schedule a daily run. You can use scrapy and the daily run will update the database daily.,0.0,False,1,5777 -2018-10-24 09:41:09.793,Using convolution layer trained weights for different image size,"I want to use the first three convolution layers of vgg-16 to generate feature maps. -But I want to use them with variable image sizes, i.e. not the imagenet size of 224x224 or 256x256, but e.g. 480x640 or any other random image dimension. -As convolution layers are independent of the image's spatial size, how can I use the weights for varying image sizes? -So how do we use the pre-trained weights of vgg-16 up to the first three convolution layers? -Kindly let me know if that is possible.","As convolution layers are independent of image size -Actually it's more complicated than that. The kernel itself is independent of the image size because we apply it to each pixel. And indeed, the training of these kernels can be reused. -But this means that the output size is dependent on the image size, because this is the number of nodes that are fed out of the layer for each input pixel. So the dense layer is not adapted to your image, even if the feature extractors are independent. -So you need to preprocess your image to fit into the size of the first layer, or you retrain your dense layers from scratch. -When people talk about ""transfer-learning"", this is what people have done in segmentation for decades. You reuse the best feature extractors and then you train a dedicated model with these features.
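In Keras this reuse is direct, because include_top=False drops the size-dependent dense head (a sketch; the 480x640 size is from the question, and block2_conv1 is the third convolution layer in Keras's VGG16 naming):

    from keras.applications.vgg16 import VGG16
    from keras.models import Model

    base = VGG16(weights='imagenet', include_top=False,
                 input_shape=(480, 640, 3))
    # keep everything up to and including the third convolution layer
    extractor = Model(inputs=base.input,
                      outputs=base.get_layer('block2_conv1').output)
    extractor.summary()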
",1.2,True,1,5778 -2018-10-24 18:05:05.703,Display complex numbers in UI when using wxPython,"I know complex math and the necessary operations (either ""native"" Python, or through NumPy). My question has to do with how to display complex numbers in a UI using wxPython. All the questions I found dealing with Python and complex numbers have to do with manipulating complex data. -My original thought was to subclass wx.TextCtrl and override the set and get methods to apply and strip some formatting as needed, and to concatenate an i (or j) to the imaginary part. -Am I going down the wrong path? I feel like displaying complex numbers is something that should already be done somewhere. -What would be the recommended pattern for this, even when using another UI toolkit, as the problem is similar? Also read my comment below on why I would like to do this.","As Brian considered my first comment good advice, and he got no more answers, I am posting it as an answer. Please refer also to the other question comments discussing the issue. - -In any UI you display strings and you read strings from the user. Why would you mix the type-to-string or string-to-type translation with widget functionality? Get them, convert and use, or ""print"" them to a string and show the string in the UI.",0.0,False,1,5779 -2018-10-24 21:37:53.237,Change file metadata using Apache Beam on a cloud database?,"Can you change the file metadata on a cloud database using Apache Beam? From what I understand, Beam is used to set up dataflow pipelines for Google Dataflow. But is it possible to use Beam to change the metadata if you have the necessary changes in a CSV file, without setting up and running an entire new pipeline? If it is possible, how do you do it?","You could code Cloud Dataflow to handle this, but I would not. A simple GCE instance would be easier to develop and run the job. An even better choice might be a UDF (see below). -There are some guidelines for when Cloud Dataflow is appropriate: - -Your data is not tabular and you can not use SQL to do the analysis. -Large portions of the job are parallel -- in other words, you can process different subsets of the data on different machines. -Your logic involves custom functions, iterations, etc... -The distribution of the work varies across your data subsets. - -Since your task involves modifying a database (I am assuming a SQL database), it would be much easier and faster to write a UDF to process and modify the database.",0.0,False,1,5780 -2018-10-25 02:44:34.287,How to use Tensorflow Keras API,"Well, I started learning Tensorflow, but I notice there's so much confusion about how to use this thing. -First, some tutorials present models using the low level API: tf.variables, scopes, etc. But other tutorials use Keras instead, and for example use TensorBoard by invoking callbacks. -Second, what's the purpose of having a ton of duplicate APIs? Really, what's the purpose behind using a high level API like Keras when you have a low level one to build models like Lego blocks? -Finally, what's the true purpose of using eager execution?","You can use these APIs all together. E.g. if you have a regular dense network, but with a special layer, you can use the higher level API for the dense layers (tf.layers and tf.keras) and the low level API for your special layer. Furthermore, complex graphs are easier to define in the low level API, e.g. if you want to share variables, etc. -Eager execution helps you with fast debugging; it evaluates tensors directly without the need to invoke a session.",0.0,False,1,5781 -2018-10-25 11:08:14.153,Keras flow_from_dataframe wrong data ordering,"I am using keras's data generator with flow_from_dataframe. For training it works just fine, but when using model.predict_generator on the test set, I discovered that the ordering of the generated results is different from the ordering of the ""id"" column in my dataframe. -shuffle=False does make the ordering of the generator consistent, but it is a different ordering from the dataframe. I also tried different batch sizes and the corresponding correct steps for the predict_generator function (for example: batch_size=1, steps=len(data)). -How can I make sure the labels predicted for my test set are ordered in the same way as my dataframe's ""id"" column?","While I haven't found a way to decide the order in which the generator produces data, the order can be obtained with the generator.filenames property.
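One sketch of realigning the predictions with the dataframe using that property (the 'id' column name matching the filenames is an assumption):

    import pandas as pd

    # preds comes from model.predict_generator(generator, steps=...)
    def align(df, generator, preds):
        by_file = pd.DataFrame({'id': generator.filenames,
                                'pred': list(preds)})
        # merge back onto the original dataframe's row order
        return df.merge(by_file, on='id', how='left')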
",1.2,True,1,5782 -2018-10-25 15:16:07.853,Write python functions to operate over arbitrary axes,"I've been struggling with this problem in various guises for a long time, and never managed to find a good solution. -Basically if I want to write a function that performs an operation over a given, but arbitrary, axis of an arbitrary rank array, in the style of (for example) np.mean(A,axis=some_axis), I have no idea in general how to do this. -The issue always seems to come down to the inflexibility of the slicing syntax; if I want to access the ith slice on the 3rd index, I can use A[:,:,i], but I can't generalise this to the nth index.","numpy functions use several approaches to do this: - -transpose axes to move the target axis to a known position, usually first or last; and if needed transpose the result -reshape (along with transpose) to reduce the problem to simpler dimensions. If your focus is on the n'th dimension, it might not matter whether the (:n) dimensions are flattened or not. They are just 'going along for the ride'. -construct an indexing tuple. idx = (slice(None), slice(None), j); A[idx] is the equivalent of A[:,:,j]. Start with a list or array of the right size, fill it with slices, fiddle with it, and then convert it to a tuple (tuples are immutable). -construct indices with indexing_tricks tools like np.r_, np.s_ etc. - -Study code that provides for axes. Compiled ufuncs won't help, but functions like tensordot, take_along_axis, apply_along_axis, np.cross are written in Python, and use one or more of these tricks.
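The indexing-tuple trick generalizes A[:,:,i] to any axis in a couple of lines:

    import numpy as np

    def take_slice(A, index, axis):
        # build (:, :, ..., index, ..., :) with index at position axis
        idx = [slice(None)] * A.ndim
        idx[axis] = index
        return A[tuple(idx)]

    A = np.arange(24).reshape(2, 3, 4)
    print(np.array_equal(take_slice(A, 1, 2), A[:, :, 1]))  # True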
",1.2,True,1,5783 -2018-10-25 15:26:46.793,Highly variable execution times in Cython functions,"I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine. -The new cython functions, profiled end-to-end with cProfile (if not necessary I won't go deep into cython profiling), record highly variable cumulative measurement times. -E.g. the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions - not taken into consideration by the profiling function) takes: - -in a first round 215.627339 seconds -in a second round 235.336131 seconds - -Each execution calls the functions many times with different, but fixed, parameters. -Maybe this variability could depend on CPU load of the test machine (a cloud-hosted dedicated one), but I wonder if such variability (almost 10%) could depend in some way on cython, or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...). -Any idea on how to take reliable metrics?","I'm not a performance expert, but from my understanding the thing you should be measuring is the average time per execution, not the cumulative time. Other than that, is your function doing anything like reading from disk and/or making network requests?",0.0,False,2,5784 -2018-10-25 15:26:46.793,Highly variable execution times in Cython functions,"I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine. -The new cython functions, profiled end-to-end with cProfile (if not necessary I won't go deep into cython profiling), record highly variable cumulative measurement times. -E.g. the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions - not taken into consideration by the profiling function) takes: - -in a first round 215.627339 seconds -in a second round 235.336131 seconds - -Each execution calls the functions many times with different, but fixed, parameters. -Maybe this variability could depend on CPU load of the test machine (a cloud-hosted dedicated one), but I wonder if such variability (almost 10%) could depend in some way on cython, or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...). -Any idea on how to take reliable metrics?","First of all, you need to ensure that your measurement device is capable of measuring what you need: specifically, only the system resources you consume. UNIX's utime is one such command, although even that one still includes swap time. Check the documentation of your profiler: it should have capabilities to measure only the CPU time consumed by the function. If so, then your figures are due to something else. -Once you've controlled the external variations, you need to examine the internal ones. You've said nothing about the complexion of your function. Some (many?) functions have available short-cuts for data-driven trivialities, such as multiplication by 0 or 1. Some are dependent on an overt or covert iteration that varies with the data. You need to analyze the input data with respect to the algorithm. -One tool you can use is a line-oriented profiler to detail where the variations originate; seeing which lines take the extra time should help determine where the ""noise"" comes from.",0.2012947653214861,False,2,5784
-2018-10-25 20:43:10.730,Kernel size change in convolutional neural networks,"I have been working on creating a convolutional neural network from scratch, and am a little confused about how to treat the kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers. -Convolutional layer with kernel_size = (5,5) with 32 output channels - -new dimension of throughput = (32, 28, 28) - -Max Pooling layer with pool_size (2,2) and step (2,2) - -new dimension of throughput = (32, 14, 14) - -If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels?","You need 64 kernels, each with the size of (32,5,5). -The depth (number of channels) of a kernel, 32 in this case (or 3 for an RGB image, 1 for gray scale, etc.), should always match the input depth, while the spatial values are shared. -e.g. if you have a 3x3 kernel like this: [-1 0 1; -2 0 2; -1 0 1] and you want to convolve it with an input of depth N (i.e. N channels), you copy this 3x3 kernel N times along the 3rd dimension; the math is then just like the 1-channel case: after multiplying the kernel values with the inputs under the kernel window, you sum over all values in all N channels to get the value of just 1 output entry, or pixel. So what you get as output is a matrix with 1 channel. The depth you want the output to have for the next layer is the number of kernels you should apply. Hence in your case it would be a kernel of size (64 x 32 x 5 x 5), which is actually 64 kernels with 32 channels each and the same 5x5 spatial extent in all channels.",0.0,False,1,5785 -2018-10-25 21:40:47.257,Python: I can not get pynput to install,"I'm trying to run a program with pynput. I tried installing it through the terminal on Mac with pip. However, it still says it's unresolved in my IDE PyCharm. Does anyone have any idea how to install this?","I have three theories, but first: make sure it is installed by running python -c ""import pynput"" - -JetBrains' IDEs typically do not scan for package updates, so try restarting the IDE. -JetBrains' IDEs might configure a python environment for you; this might mean you have to manually import it in your run configuration. -You have two python versions installed and you installed the package on the opposite version to the one you run your script with. - -I think either 1 or 3 is the most likely.",0.0,False,1,5786 -2018-10-26 07:04:15.230,How to get the dimension of tensors at runtime?,"I can get the dimensions of tensors at graph construction time by manually printing the shapes of tensors (tf.shape()), but how can I get the shape of these tensors at session runtime? -The reason that I want the shapes of tensors at runtime is that at graph construction time the shape of some tensors comes out as (?,8), and I cannot deduce the first dimension then.","You have to make the tensors an output of the graph. For example, if showme_tensor is the tensor you want to print, just run the graph like this: -_showme_tensor = sess.run(showme_tensor) -and then you can just print the output as you print a list. If you have different tensors to print, you can just add them like this: -_showme_tensor_1, _showme_tensor_2 = sess.run([showme_tensor_1, showme_tensor_2])
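For the (?, 8) case specifically, tf.shape gives a tensor whose value you can fetch once real data is fed in (a TF1-style sketch to match the sessions above):

    import numpy as np
    import tensorflow as tf

    x = tf.placeholder(tf.float32, shape=(None, 8))  # static shape (?, 8)
    shape_op = tf.shape(x)  # runtime shape as a tensor

    with tf.Session() as sess:
        print(sess.run(shape_op, feed_dict={x: np.zeros((32, 8))}))  # [32  8]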
",0.0,False,1,5787 -2018-10-27 10:53:32.190,python - pandas dataframe to powerpoint chart backend,"I have a pandas dataframe result which stores a result obtained from a sql query. I want to paste this result onto the chart backend of a specified chart in the selected presentation. Any idea how to do this? -P.S. The presentation is loaded using the module python-pptx","You will need to read a bit about python-pptx. -You need the chart's index and the slide index of the chart. Once you know them, -get your chart object like this: -chart = presentation.slides[slide_index].shapes[shape_index].chart -replacing data: -chart.replace_data(new_chart_data) -reset_chart_data_labels(chart) -then when you save your presentation it will have the updated data. -Usually, I uniquely name all my slides and charts in a template, and then I have a function that will get me the chart's index and slide's index (basically, I iterate through all slides and all shapes, and find a match for my named chart). -Here is a screenshot where I name a chart: https://i.stack.imgur.com/aFQwb.png -Naming slides is a bit more tricky and I will not delve into that, but all you need is the slide's index: just count the slides 0-based and then you have it.",0.0,False,1,5788 -2018-10-31 22:26:56.993,How to make Flask app up and running after server restart?,"What is the recommended way to run a Flask app (e.g. via Gunicorn?) and how do I make it come up automatically after a linux server (redhat) restart? -Thanks","Have you looked at supervisord? It works reasonably well and handles restarting processes automatically if they fail, as well as looking after error logs nicely.",0.0,False,1,5789 -2018-11-01 03:08:27.057,cv2 show video stream & add overlay after another function finishes,"I am currently working on a real time face detection project. -What I have done is that I capture the frame using cv2, do detection and then show the result using cv2.imshow(), which results in a low fps. -I want a high fps video showing on the screen without lag and a low fps detection bounding box overlay. -Is there a solution to show the real time video stream (with the last detection result bounding box), and once a new detection is finished, show the new bounding box, with the background not delayed by the detection function? -Any help is appreciated! -Thanks!","A common approach would be to create a flag that allows the detection algorithm to only run once every couple of frames, and to save the predicted regions of interest to a list, whilst creating bounding boxes on every frame. -So for example, you have a face detection algorithm: process every 15th frame to detect faces, but in every frame create a bounding box from the predictions, even though the predictions only get updated every 15 frames. -Another approach could be to add an object tracking layer. Run your heavy algorithm to find the ROIs and then use the object tracking library to hold on to them till the next time the detection algorithm runs. -Hope this made sense.
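The every-N-frames flag can be as simple as a modulo counter (a sketch; detect_faces is a placeholder for your detector):

    import cv2

    def detect_faces(frame):
        return []  # placeholder: return a list of (x, y, w, h) boxes

    cap = cv2.VideoCapture(0)
    boxes, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % 15 == 0:      # heavy detector on every 15th frame only
            boxes = detect_faces(frame)
        for (x, y, w, h) in boxes:   # draw the latest boxes on every frame
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow('video', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
        frame_idx += 1
    cap.release()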
",1.2,True,1,5790 -2018-11-01 07:22:44.353,What Is the Correct Mimetype (in and out) for a .Py File for Google Drive?,"I have a script that uploads files to Google Drive. I want to upload python files. I can do it manually and have it keep the file as .py correctly (and it's previewable), but no matter what mimetypes I try, I can't get my program to upload it correctly. It can upload the file as a .txt or as something GDrive can't recognize, but not as a .py file. I can't find an explicit mimetype for it (I found a reference for text/x-script.python but it doesn't work as an out mimetype). -Does anyone know how to correctly upload a .py file to Google Drive using REST?",Also this is a valid Python mimetype: text/x-script.python,-0.2012947653214861,False,1,5791 -2018-11-01 09:31:15.857,Running a python file in windows after removing old python files,So I am running python 3.6.5 on a school computer where most things are heavily restricted and I can only use python on drive D. I cannot use batch either. I had python 2.7 on it last year until I deleted all the files and installed python 3.6.5. After that I couldn't double click a .py file to open it as it said to continue using E:\Python27\python(2.7).exe. I had the old python on a USB which is why it asks this but now I would like to change that path to the new python file. How would I do that in windows?,Just open your Python IDE and open the file manually.,0.0,False,1,5792 -2018-11-01 22:25:51.750,GROUPBY with showing all the columns,"I want to do a groupby of my MODELS by CITYS, keeping all the columns, so that I can print the percentage of each MODEL IN THIS CITY. -I put my dataframe in the PHOTO below. -And I have written this code but I don't know how to continue: -for name,group in d_copy.groupby(['CITYS'])['MODELS']:","Did you try this: d_copy.groupby(['CITYS','MODELS']).mean() to get the average percentage of a model by city? -Then if you want to catch the percentages you have to convert it to a DataFrame and select the column: pd.DataFrame(d_copy.groupby(['CITYS','MODELS']).mean())['PERCENTAGE']
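If the percentage has to be derived from the rows rather than read from an existing PERCENTAGE column, one sketch (CITYS and MODELS as in the question):

    import pandas as pd

    counts = d_copy.groupby(['CITYS', 'MODELS']).size()
    # share of each model within its city, in percent
    pct = counts / counts.groupby(level='CITYS').transform('sum') * 100
    print(pct)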
",0.0,False,1,5793 -2018-11-03 05:34:23.617,Google Data Studio Connector and App Scripts,"I am working on a project for a client in which I need to load a lot of data into Data Studio. I am having trouble getting the deployment to work with my REST API. -The API has been tested with code locally, but I need to know how to make it compatible with the code base in App Scripts. Has anyone else had experience with working around this? The endpoint is a Python Flask application. -Also, is there a limit on the amount of data that you can dump in a single response to Data Studio? As a solution to my needs (needing to be able to load data for 300+ accounts) I have created a program that caches the data needed from each account and returns the whole payload at once. There are a lot of entries, so I was wondering if there is a limit to what can be uploaded at once. -Thank you in advance","I found the issue; it was a simple case of forgetting to add the url to the whitelist.",0.3869120172231254,False,1,5794 -2018-11-03 15:56:12.343,Multi-Line Combobox in Tkinter,"Is it possible to have a multi-line text entry field with drop down options? -I currently have a GUI with a multi-line Text widget where the user writes some comments, but I would like to have some pre-set options for these comments that the user can select from via a drop-down button. -As far as I can tell, the Combobox widget does not allow changing the height of the text-entry field, so it is effectively limited to one line (expanding the width arbitrarily is not an option). Therefore, what I think I need to do is sub-class the Text widget and somehow add functionality for a drop down to show these (potentially truncated) pre-set options. -I foresee a number of challenges with this route, and wanted to make sure I'm not missing anything obvious with the existing built-in widgets that could do what I need.","I don't think you are missing anything. Note that ttk.Combobox is a composite widget. It subclasses ttk.Entry and has a ttk.Listbox attached. -To make a multiline equivalent, subclass Text, as you suggested. Perhaps call it ComboText. Attach either a frame with multiple read-only Texts, or a Text with multiple entries, each with a separate tag. Pick a method to open the combotext and methods to close it, with or without copying a selection into the main text. Write up an initial doc describing how to operate the thing.",0.2012947653214861,False,1,5795 -2018-11-04 15:50:14.623,"Apache - if file does not exist, run script to create it, then serve it","How can I get this to happen in Apache (with python, on Debian if it matters)? - -User submits a form -Based on the form entries I calculate which html file to serve them (say 0101.html) -If 0101.html exists, redirect them directly to 0101.html -Otherwise, run a script to create 0101.html, then redirect them to it. - -Thanks! -Edit: I see there was a vote to close as too broad (though no comment or suggestion). I am just looking for a minimum working example of the Apache configuration files I would need. If you want the concrete way I think it will be done, I think apache just needs to check if 0101.html exists, if so serve it, otherwise run cgi/myprogram.py with input argument 0101.html. Hope this helps. If not, please suggest how I can make it more specific. Thank you.","Apache shouldn't care. Just serve a program that looks for the file. If it finds it, it will read it (or whatever) and return the results, and if it doesn't find it, it will create it and return the result. All can be done with a simple python file.",1.2,True,1,5796
-aws s3 cp sourcedir s3bucket --recursive --acl - bucket-owner-full-control --profile profilename - -It works well and uploads almost all files, but for the initial 2 or 3 files, it used to fail with the error: An HTTP Client raised and unhandled exception: unknown encoding: idna -This error was not consistent. A file whose upload failed might succeed if I tried to run it again. It was quite weird. -I tried on a trial-and-error basis and it started working well. -Solution: - -Uninstalled Python 3 and the AWS CLI. -Installed Python 2.7.15 -Added the Python install path to the environment variable PATH. Also added pythoninstalledpath\scripts to the PATH variable. -The AWS CLI doesn't work well with the MS Installer on Windows Server 2008, so I used pip instead. - -Command: - -pip install awscli - -Note: for pip to work, do not forget to add pythoninstalledpath\scripts to the PATH variable. -You should have the following version: -Command: - -aws --version - -Output: aws-cli/1.16.72 Python/2.7.15 Windows/2008ServerR2 botocore/1.12.62 -Voila! The error is gone!",-0.1618299653758019,False,2,5797 -2018-11-04 18:53:52.133,AWS CLI upload failed: unknown encoding: idna,"I am trying to push some files up to s3 with the AWS CLI and I am running into an error: -upload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna -I believe this is a Python-specific problem, but I am not sure how to enable this type of encoding for my Python interpreter. I just freshly installed Python 3.6 and have verified that it is being used by PowerShell and cmd. -$> python --version - Python 3.6.7 -If this isn't a Python-specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.","I had the same problem in Windows. -After investigating the problem, I realized that the problem is in the aws-cli installed using the MSI installer (x64). After removing ""AWS Command Line Interface"" from the list of installed programs and installing aws-cli using pip, the problem was solved. -I also tried the x32 MSI installer and the problem was gone.",1.2,True,2,5797 -2018-11-05 10:20:35.477,Calling a Python function from HTML,"I'm writing a web application where I'm trying to display the connected USB devices. I found a Python function that does exactly what I want, but I can't really figure out how to call the function from my HTML code, preferably on the click of a button.","Simple answer: you can't. The code would have to be run client-side, and no browser would execute potentially malicious code automatically (and not every system has a Python interpreter installed). -The only thing you can execute client-side (without the user taking action, e.g. downloading a program or browser add-on) is JavaScript.",1.2,True,1,5798 -2018-11-05 18:11:03.353,How to create Graphql server for microservices?,"We have several microservices in Golang and Python; in Golang we write the finance operations and in Python the online store logic. We want to create one API for our front-end and we don't know how to do it. -I have read about API gateways, and would it be right if Golang creates its own GraphQL server, Python creates another one, and they both communicate with a third GraphQL server which generates the API for our front-end?","I do not know many details about your services, but a great pattern I have successfully used on different projects is, as you mentioned, a GraphQL gateway.
-You will create one service, which I prefer to create in Node.js, where all requests from the frontend will come through. Then from the GraphQL gateway you will request your microservices. This will basically be your only entry point into the backend system. Requests will be authenticated, and you are able to unify access to your data and also perform some performance optimizations, like implementing data loader's caching and batching to mitigate the N+1 problem. In addition you will reduce the complexity of having multiple APIs and leverage all the GraphQL benefits. -On my last project we had 7 different frontends and each was using the same GraphQL gateway, and I was really happy with our approach. There are definitely some downsides, as you need to keep all your frontends and the GraphQL gateway in sync, therefore you need to be more aware of your breaking changes, but it is solvable with, for example, the deprecated directive and by performing blue/green deployments with a Kubernetes cluster. -The other option is to create a so-called backend for frontend in GraphQL. Right now I do not have enough information about which solution would be best for you. You need to decide based on your frontend needs and business domain, but usually I prefer a GraphQL gateway, as GraphQL has great flexibility and the need to tailor your API to the frontend is covered by GraphQL capabilities. Hope it helps, David",1.2,True,1,5799 -2018-11-05 18:14:16.803,What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?,"I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands. -Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is the no. of channels of the image. -I want to input small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels). -It seems to me I am missing one of the spatial dimensions; how do I convert my image array into a 5-dimensional array without losing any information? -I am using Python and Keras for the above.","What you want is a 2D CNN, not a 3D one. A 2D CNN already supports multiple channels, so you should have no problem using it with a hyperspectral image.",0.2012947653214861,False,2,5800 -2018-11-05 18:14:16.803,What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?,"I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands. -Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is the no. of channels of the image. -I want to input small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels). -It seems to me I am missing one of the spatial dimensions; how do I convert my image array into a 5-dimensional array without losing any information? -I am using Python and Keras for the above.","If you want to convolve along the dimension of your channels, you should add a singleton dimension in the position of channel.
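For illustration, a minimal NumPy sketch of that reshaping (a hypothetical stand-in for the real data, using the (1, 145, 145, 200) shape from the question; the 200 spectral bands become the depth and a singleton channel axis is appended):
import numpy as np
X = np.zeros((1, 145, 145, 200))   # (batch, length, width, bands) -- placeholder for the real image array
X5d = np.expand_dims(X, axis=-1)   # append a singleton channel dimension
print(X5d.shape)                   # (1, 145, 145, 200, 1) = (batch, length, width, depth, channels)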
If you don't want to convolve along the dimension of your channels, you should use a 2D CNN.",1.2,True,2,5800 -2018-11-06 05:41:57.087,Family tree in Python,"I need to model a four-generational family tree starting with a couple. After that, if I input the name of a person and a relation like 'brother' or 'sister' or 'parent', my code should output the person's brothers or sisters or parents. I have a fair bit of knowledge of Python and am self-taught in DSA. I think I should model the data as a dictionary and code a tree DS with two root nodes (i.e., the first couple). But I am not sure how to start. I just need to know how to start modelling the family tree and the direction in which to proceed with the code. Thank you in advance!","There's plenty of ways to skin a cat, but I'd suggest creating: - -A Person class which holds relevant data about the individual (gender) and direct relationship data (parents, spouse, children). -A dictionary mapping names to Person elements. - -That should allow you to answer all of the necessary questions, and it's flexible enough to handle all kinds of family trees (including non-tree-shaped ones).",0.9999092042625952,False,1,5801 -2018-11-06 07:03:33.130,Tensorflow MixtureSameFamily and gaussian mixture model,"I am really new to Tensorflow as well as gaussian mixture models. -I have recently used the tensorflow.contrib.distribution.MixtureSameFamily class for predicting a probability density function which is derived from a gaussian mixture of 4 components. -When I plotted the predicted density function using the ""prob()"" function as the Tensorflow tutorial explains, I found the plotted pdf had only one mode. I expected to see 4 modes as the mixture has 4 components. -I would like to ask whether Tensorflow uses any global mode predicting algorithm in their MixtureSameFamily class. If not, I would also like to know how the MixtureSameFamily class forms the pdf from statistical values. -Thank you very much.","I found an answer to the above question thanks to my colleague. -The 4 components of the gaussian mixture had such similar means that the mixture seemed to have only one mode. -If I put four explicitly different values as means to the MixtureSameFamily class, I could get a plot of a gaussian mixture with 4 different modes. -Thank you very much for reading this.",0.0,False,1,5802 -2018-11-07 04:43:09.720,How to run pylint plugin in Intellij IDEA?,"I have installed the pylint plugin and restarted IntelliJ IDEA. It is NOT an external tool (so please avoid providing answers on running it as an external tool, as I know how to). -However I have no 'pylint' in the tool menu or the code menu. -Is it invoked by running 'Analyze'? Or is there a way to run the pylint plugin on py files?","This is for the latest IntelliJ IDEA version 2018.3.5 (Community Edition): - -Type ""Command ,"" or click ""IntelliJ IDEA -> Preferences..."" -From the list on the left of the popped up window select ""Plugins"" -Make sure that on the right top the first tab ""Marketplace"" is picked if it's not -Search for ""Pylint"" and when the item is found, click the green button ""Install"" associated with the found item - -The plugin should then be installed properly. -One can then turn on/off real-time Pylint scanning via the same window by navigating in the list on the left: ""Editor -> Inspections"", then in the list on the right unfolding ""Pylint"" and finally checking/unchecking the corresponding checkbox on the right of the unfolded item.
-One can also go to the very last top-level item within the list on the left, named ""Other Settings"", and unfold it. -Within it there's an item called ""Pylint"", click on it. -On the top right there should be a button ""Test"", click on it. -If in a few seconds a green checkmark appears to the left of the ""Test"" text, then Pylint is installed correctly. -Finally, to access the actual Pylint window, click ""View""->""Tool Windows""->""Pylint""! -Enjoy!",0.9999092042625952,False,1,5803 -2018-11-08 02:59:54.810,nltk bags of words showing emotions,"I am working on NLP using Python and nltk. -I was wondering whether there is any dataset which has bags of words showing keywords relating to emotions such as happy, joy, anger, sadness and so on. -From what I dug up in the nltk corpus, I see there are some sentiment analysis corpora which contain positive and negative reviews, which isn't exactly related to keywords showing emotions. -Is there any way I could build my own dictionary containing words which show emotion for this purpose? If so, how do I do it, and is there any collection of such words? -Any help would be greatly appreciated","I'm not aware of any dataset that associates sentiments to keywords, but you can easily build one starting from a generic sentiment analysis dataset. -1) Clean the datasets of the stopwords and all the terms that you don't want to associate to a sentiment. -2) Compute the count of each word in the two sentiment classes and normalize it. In this way you will associate with each word a probability of belonging to a class. Let's suppose that you have the word ""love"" appearing 300 times in the positive sentences and the same word appearing 150 times in the negative sentences. Normalizing, you have that the word ""love"" belongs with a probability of 66% (300/(150+300)) to the positive class and 33% to the negative one. -3) In order to make the dictionary more robust to borderline terms you can set a threshold and consider neutral all the words whose max probability is lower than the threshold. -This is an easy approach to build the dictionary that you are looking for. You could use more sophisticated approaches such as Term Frequency-Inverse Document Frequency.",0.0,False,1,5804 -2018-11-09 01:48:39.963,Operating the Celery Worker in the ECS Fargate,"I am working on a project using AWS ECS. I want to use Celery as a distributed task queue. The Celery Worker can be built as the EC2 type, but because of the large amount of time that the instance is in the idle state, I think it would be cost-effective to have AWS Fargate run the job and quit immediately. -Do you have suggestions on how to use the Celery Worker efficiently in the AWS cloud?","The Fargate launch type is going to take longer to spin up than the EC2 launch type, because AWS is doing all the ""host things"" for you when you start the task, including the notoriously slow attaching of an ENI, and likely downloading the image from a Docker repo. Right now there's no contest: the EC2 launch type is faster every time. -So it really depends on the type of work you want the workers to do. You can expect a new Fargate task to take a few minutes to enter a RUNNING state for the aforementioned reasons. An EC2 launch, on the other hand, because the ENI is already in place on your host and the image is already downloaded (at best) or mostly downloaded (likely worst), will move from PENDING to RUNNING very quickly.
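As one possible illustration (not from the original answer), a minimal boto3 sketch for starting a one-off Fargate worker task on demand; the cluster, task definition, subnet and security group identifiers are hypothetical placeholders:
import boto3
ecs = boto3.client('ecs')
# Start a single Fargate task; it stops on its own when the worker process exits.
response = ecs.run_task(
    cluster='my-celery-cluster',           # hypothetical cluster name
    taskDefinition='celery-worker:1',      # hypothetical task definition
    launchType='FARGATE',
    count=1,
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-12345678'],
            'securityGroups': ['sg-12345678'],
            'assignPublicIp': 'ENABLED',
        }
    },
)
print(response['tasks'][0]['lastStatus'])  # e.g. 'PROVISIONING'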
- -Use the EC2 launch type for steady workloads; use the Fargate launch type for burst capacity -This is the current prevailing wisdom, often discussed as a cost factor because Fargate can't take advantage of the typical EC2 cost savings mechanisms like reserved instances and spot pricing. It's expensive to run Fargate all the time, compared to EC2. -To be clear, it's perfectly fine to run 100% in Fargate (we do), but you have to be willing to accept the downsides of doing that - slower scaling and cost. -Note you can run both launch types in the same cluster. Clusters are logical anyway, just a way to organize your resources. - -Example cluster -This example shows a static EC2 launch type service running 4 celery tasks. The number of tasks, specs, instance size and all doesn't really matter; do it up however you like. The important thing is: the EC2 launch type service doesn't need to scale; the Fargate launch type service is able to scale from nothing running (during periods where there's little or no work to do) to as many workers as you can handle, based on your scaling rules. -EC2 launch type Celery service -Running 1 EC2 launch type t3.medium (2vcpu/4GB). -Min tasks: 2, Desired: 4, Max tasks: 4 -Running 4 celery tasks at 512/1024 in this EC2 launch type. -No scaling policies -Fargate launch type Celery service -Min tasks: 0, Desired: (x), Max tasks: 32 -Running (x) celery tasks (same task def as EC2 launch type) at 512/1024 -Add scaling policies to this service",1.2,True,1,5805 -2018-11-09 07:20:23.930,how do I insert some rows that I select from remote MySQL database to my local MySQL database,"My remote MySQL database and local MySQL database have the same table structure, and the remote and local MySQL databases use the utf-8 charset.","You'd better merge the values and the SQL template string and print it, to make sure the SQL is correct.",0.0,False,1,5806 -2018-11-09 16:42:21.617,Run external Python script that could read/write only a subset of main app variables,"I have a Python application that simulates the behaviour of a system, let's say a car. -The application defines a quite large set of variables, some corresponding to real world parameters (the remaining fuel volume, the car speed, etc.) and others related to the simulator's internal mechanics which are of no interest to the user. -Everything works fine, but currently the user can have no interaction with the simulation whatsoever during its execution: she just sets simulation parameters, launches the simulation, and waits for its termination. -I'd like the user (i.e. not the creator of the application) to be able to write Python scripts, outside of the app, that could read/write the variables associated with the real world parameters (and only these variables). -For instance, at t=23s (this condition I know how to check for), I'd like to execute the user script gasLeak.py, that reads the remaining fuel value and sets it to half its current value. -To sum up, how is it possible, from a Python main app, to execute user-written Python scripts that can access and modify only a pre-defined subset of the main script's variables? In a perfect world, I'd also like modifications applied to user scripts during the running of the app to be taken into account without having to restart said app (something along the lines of reloading a module).",Make the user-written scripts read command-line arguments and print to stdout.
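For instance, a minimal sketch of what such a user script could look like, reusing the gasLeak.py example from the question (the halving rule is the one the question describes):
# gasLeak.py -- reads the current fuel value from the command line and
# prints the new value on stdout for the main app to read back
import sys
fuel = float(sys.argv[1])  # value passed in by the main application
print(fuel / 2)            # halve the remaining fuel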
Then you can call them with the subprocess module, with the variables they need to know about as arguments, and read their responses with subprocess.check_output.,0.0,False,1,5807 -2018-11-09 23:03:45.930,pytest-xdist generate random & unique ports for each test,"I'm using the pytest-xdist plugin to run some tests, using @pytest.mark.parametrize to run the same test with different parameters. -As part of these tests, I need to open/close web servers and the ports are generated at collection time. -xdist does the test collection on the slaves and they are not synchronised, so how can I guarantee uniqueness for the port generation? -I could use the same port for each slave but I don't know how to achieve this.","I figured that I did not give enough information regarding my issue. -What I did was to create one parameterized test and, before the test, I collect the list of parameters; the collection queries a web server and receives a list of ""jobs"" to process. -Each test contains information on the port that it needs to bind to, does some work and exits; because the tests are running in parallel I need to make sure that the ports will be different. -Eventually, I make sure that the job ids will be in the range 1024-65000 and use those for the ports.",1.2,True,1,5808 -2018-11-10 23:45:59.803,how to detect if photo is mostly a document?,I think I am looking for something simpler than detecting document boundaries in a photo. I am only trying to flag photos which are mostly of documents rather than just normal scene photos. Is this an easier problem to solve?,"Are the documents mostly white? If so, you could analyse the images for white content above a certain percentage. Generally text documents only have about 10% printed content on them in total.",0.0,False,1,5809 -2018-11-11 14:15:01.157,"Sending data to Django backend from RaspberryPi Sensor (frequency, bulk-update, robustness)","I’m currently working on a Raspberry Pi/Django project slightly more complex than I’m used to. (I either do local Raspberry Pi projects, or simple Django websites; never the two combined!) -The idea is to have two Raspberry Pis collecting information running a local Python script, each taking input from one HDMI feed (I’ve got all that part figured out - I THINK) using image processing. Now I want these two Raspberry Pis (that don’t talk to each other) to connect to a backend server that would combine, store (and process) the information gathered by my two Pis -I’m expecting each Pi to be working on one frame per second, comparing it to the frame a second earlier (only a few different things it is looking out for), isolating any new event, and sending it to the server. I’m therefore expecting no more than a dozen binary timestamped data points per second. -Now what is the smart way to do it here? - -Do I make contact to the backend every second? Every 10 seconds? -How do I make these bulk HttpRequests? Through a POST request? Through a simple text file that I send for the Django backend to process? (I have found some info about “bulk updates” for Django but I’m not sure that covers it entirely) -How do I make it robust? How do I make sure that all data was successfully transmitted before deleting the log locally? (If one call fails for some reason, or gets delayed, how do I make sure that the next one compensates for the lost info?
- -Basically, I’m asking for advice on making an IoT-based project, where a sensor gathers bulk information and I want to send it to a backend server for processing, and how that archiving process should be designed. -PS: I expect the image processing part (at one fps) to be fast enough on my Pi Zero (as it is VERY simple); backlog at that level shouldn’t be an issue. -PPS: I’m using a Django backend (even if it seems a little overkill) - a/ because I already know the framework pretty well - b/ because I’m expecting to build real-time performance indicators from the combined data points gathered, using Django, and displaying them in (almost) real-time on a webpage. -Thank you very much!","This partly depends on just how resilient you need it to be. If you really can't afford for a single update to be lost, I would consider using a message queue such as RabbitMQ - the clients would add things directly to the queue and the server would pop them off in turn, with no need to involve HTTP requests at all. -Otherwise it would be much simpler to just POST each frame's data in some serialized format (i.e. JSON) and Django would simply deserialize and iterate through the list, saving each entry to the db. This should be fast enough for the rate you describe - I'd expect saving a dozen db entries to take significantly less than half a second - but this still leaves the problem of what to do if things get hung up for some reason. Setting a super-short timeout on the server will help, as would keeping the data to be posted until you have confirmation that it has been saved - and creating unique IDs in the client to ensure that the request is idempotent.",0.6730655149877884,False,1,5810 -2018-11-12 08:56:25.160,run python from Microsoft Dynamics,"I know I can access a Dynamics instance from a Python script by using the OData API, but what about the other way around? Is it possible to somehow call a Python script from within Dynamics, and possibly even pass arguments? -Would this require me to use custom js/c#/other code within Dynamics?","You won't be able to natively execute a Python script within Dynamics. -I would approach this by placing the Python script in a service that can be called via a web service call from Dynamics. You could make the call from form JavaScript or a Plugin using C#.",1.2,True,1,5811 -2018-11-12 20:04:05.643,Extracting URL from inside docx tables,"I'm pretty much stuck right now. -I wrote a parser in python3 using the python-docx library to extract all tables found in an existing .docx and store them in a Python data structure. -So far so good. Works as it should. -Now I have the problem that there are hyperlinks in these tables which I definitely need! Due to the structure (xml underneath) the docx library doesn't catch these. Neither the URL nor the display text is provided. I found many people having similar concerns about this, but most didn't seem to have 'just that' dilemma. -I thought about unpacking the .docx and scanning the _ref document for the corresponding 'rid' and filling the actual data I have with the links found in the _ref xml. -Either way it seems seriously tedious to do it that way, so I was wondering if there is a more pythonic way to do it or if somebody has good advice on how to tackle this problem?","You can extract the links by parsing the XML of the docx file. -You can extract all the elements from the document by using document.element.getiterator() -Iterate over all the tags of the XML and extract their text.
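A minimal sketch of that idea with python-docx (the filename is a placeholder; this assumes the standard WordprocessingML namespaces and resolves each hyperlink's relationship id to its URL):
import docx
W = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'
R = '{http://schemas.openxmlformats.org/officeDocument/2006/relationships}'
document = docx.Document('mydoc.docx')
for element in document.element.getiterator():
    if element.tag == W + 'hyperlink':
        rid = element.get(R + 'id')               # relationship id stored on the hyperlink
        url = document.part.rels[rid].target_ref  # resolve the id to the actual URL
        text = ''.join(t.text or '' for t in element.iter(W + 't'))
        print(text, url)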
You will get all the missing data which python-docx failed to extract.",0.0,False,1,5812 -2018-11-12 23:39:45.557,"openpyxl how to read formula result after editing input data on the sheet? data_only=True gives me a ""None"" result","Using openpyxl, I'm able to read 2 numbers on a sheet, and also able to read their sum by loading the sheet with data_only=True. -However, when I alter the 2 numbers using openpyxl and then try to read the answer using data_only=True, it returns no output. How do I do this?",You can have either the value or the formula in openpyxl. It is precisely to avoid the confusion that this kind of edit could introduce that the library works like this. To evaluate the changed formulae you'll need to load the file in an app like MS Excel or LibreOffice that can evaluate the formulae and store the results.,0.1352210990936997,False,1,5813 -2018-11-13 01:35:08.450,inception v3 using tf.data?,"I'm using a bit of code that is derived from inception v3 as distributed by the Google folks, but it's now complaining that the queue runners used to read the data are deprecated (tf.train.string_input_producer in image_processing.py, and similar). Apparently I'm supposed to switch to tf.data for this kind of stuff. -Unfortunately, the documentation on tf.data isn't doing much to relieve my concern that I've got too much data to fit in memory, especially given that I want to batch it in a reusable way, etc. I'm confident that the tf.data stuff can do this; I just don't know how to do it. Can anyone point me to a full example of code that uses tf.data to deal with batches of data that won't all fit in memory? Ideally, it would simply be an updated version of the inception-v3 code, but I'd be happy to try and work with anything. Thanks!","Well, I eventually got this working. The various documents referenced in the comments on my question had what I needed, and I gradually figured out which parameters passed to the queue runners corresponded to which parameters in the tf.data stuff. -There was one gotcha that took a while for me to sort out. In the inception implementation, the number of examples used for validation is rounded up to be a multiple of the batch size; presumably the validation set is reshuffled and some examples are used more than once. (This does not strike me as great practice, but generally the number of validation instances is way larger than the batch size, so only a relative few are double counted.) -In the tf.data stuff, enabling shuffling and reuse is a separate thing and I didn't do it on the validation data. Then things broke because there weren't enough unique validation instances, and I had to track that down. -I hope this helps the next person with this issue. Unfortunately, my code has drifted quite far from Inception v3 and I doubt that it would be helpful for me to post my modifications. Thanks!",0.3869120172231254,False,1,5814 -2018-11-13 20:39:25.877,how to reformat a text paragraph using python,"Hi, I was wondering how I could format a large text file by adding line breaks after certain characters or words.
For instance, every time a comma was in the paragraph, could I use Python to make this output an extra line break?","You can use the str.replace() method like so: -'roses can be blue, red, white'.replace(',' , ',\n') gives -'roses can be blue,\n red,\n white', effectively inserting '\n' after every ','",0.0,False,1,5815 -2018-11-14 23:48:25.957,Python detecting different extensions on files,"How do I make Python listen for changes to a folder on my desktop, and every time a file is added, have the program read the file name and categorize it based on the extension? -This is part of a more detailed program but I don't know how to get started on this part. This part of the program detects when the user drags a file into a folder on his/her desktop and then moves that file to a different location based on the file extension.","Periodically read the files in the folder and compare to a set of files remaining after the last execution of your script. Use os.listdir() and isfile(). -Read the extension of new files and copy them to a directory based on internal rules. This is a simple string slice, e.g., filename[-3:] for 3-character extensions. -Remove moved files from your set of last results. Use os.rename() or shutil.move(). -Sleep until the next execution is scheduled.",1.2,True,1,5816 -2018-11-15 02:12:27.683,How do I configure settings for my Python Flask app on GoDaddy,"This app is working fine on Heroku, but how do I configure it on GoDaddy using a custom domain? -When I navigate to the custom domain, it redirects to mcc.godaddy.com. -What settings need to be changed?","The solution is to add a correct CNAME record and wait till the value you entered has propagated. -Go to DNS management and make the following changes: -In the 'Host' field enter 'www' and in the 'Points to' field add 'yourappname.herokuapp.com'",0.0,False,1,5817 -2018-11-15 03:51:30.570,Compare stock indices of different sizes Python,"I am using Python to try to do some macroeconomic analysis of different stock markets. I was wondering how to properly compare indices of varying sizes. For instance, the Dow Jones is around 25,000 on the y-axis, while the Russell 2000 is only around 1,500. I know that the website TradingView makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis. Is there some statistical method where I can do this same thing in Python?","I know that the website TradingView makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis. - -These websites rescale them by fixing the initial starting points for both indices at, say, 100. I.e. if the Dow is 25000 points and the S&P is 2500, then the Dow is divided by 250 to get to 100 initially and the S&P by 25. Then you have two indices that start at 100 and you can then compare them side by side. -The other method (which works well only if you have two series) is to set the y-axis on the right hand side for one series, and on the left hand side for the other one.",1.2,True,1,5818 -2018-11-15 06:53:57.707,How to convert 2D matrix to 3D tensor without blending corresponding entries?,"I have data with the shape of (3000, 4); the features are (product, store, week, quantity). Quantity is the target. -So I want to reconstruct this matrix as a tensor, without blending the corresponding quantities.
-For example, if there are 30 products, 20 stores and 5 weeks, the shape of the tensor should be (5, 20, 30), with the corresponding quantities. Because there won't be an entry like (store A, product X, week 3) twice in the entire data, every store x product x week pair should have one corresponding quantity. -Any suggestions about how to achieve this, or is there any logical error? Thanks.","You can first go through each of your first three columns and count the number of different products, stores and weeks that you have. This will give you the shape of your new array, which you can create using numpy. Importantly now, you need to create a conversion matrix for each category. For example, if the product is 'XXX', then you want to know to which row of the first dimension (as product is the first dimension of your array) 'XXX' corresponds; same idea for store and week. Once you have all of this, you can simply iterate through all lines of your existing array and assign the value of quantity to the correct location inside your new array based on the indices stored in your conversion matrices for each value of product, store and week. As you said, it makes sense because there is a one-to-one correspondence.",0.0,False,1,5819 -2018-11-15 11:02:06.533,Installing packages to Anaconda Environments,"I've been having an issue with Anaconda, on two separate Windows machines. -I've downloaded and installed Anaconda. I know the commands, how to install libraries, I've even installed tensorflow-gpu (which works). I also use Jupyter notebook and I'm quite familiar with it by this point. -The issue: -For some reason, when I create new environments and install libraries to that environment... it ALWAYS installs them to (base). Whenever I try to run code in a Jupyter notebook that is located in an environment other than (base), it can't find any of the libraries I need... because it's installing them to (base) by default. -I always ensure that I've activated the correct environment before installing any libraries. But it doesn't seem to make a difference. -Can anyone help me with this... am I doing something wrong?","Kind of fixed my problem. It has to do with launching Jupyter notebook. -After switching environments via the command prompt, the command 'jupyter notebook' runs Jupyter notebook via the default Python environment, regardless. -However, if I switch environments via Anaconda Navigator and launch Jupyter notebook from there, it works perfectly. -Maybe I'm missing a command via the prompt?",1.2,True,1,5820 -2018-11-15 11:25:13.747,How Do I store downloaded pdf files to Mongo DB,"I downloaded some PDFs and stored them in a directory. I need to insert them into a Mongo database with Python code; how could I do this? I need to store them with three columns (pdf_name, pdf_ganerateDate, FlagOfWork).","You can use GridFS. Please check this url: http://api.mongodb.com/python/current/examples/gridfs.html. -It will help you to store any file in MongoDB and retrieve it. In another collection you can save the file metadata.",0.3869120172231254,False,1,5821 -2018-11-15 15:28:09.797,how to use pipenv to run file in current folder,"Using pipenv to create a virtual environment in a folder. -However, the environment seems to be in the path: - -/Users/....../.local/share/virtualenvs/......
- -And when I run the command pipenv run python train.py, I get the error: - -can't open file 'train.py': [Errno 2] No such file or directory - -How can I run a file in the folder where I created the virtual environment?","You need to be in the same directory as the file you want to run, then use: -pipenv run python train.py -Note: -You may be at the project's main directory while the file you need to run is inside a directory inside your project directory. -If you use Django to create your project, it will create two folders inside each other with the same name, so as a best practice change the top directory name to 'yourname-project', then inside the directory 'yourname' run the pipenv run python train.py command",1.2,True,1,5822 -2018-11-15 20:21:37.897,xgboost feature importance of categorical variable,"I am using XGBClassifier to train in Python and there are a handful of categorical variables in my training dataset. Originally, I planned to convert each of them into a few dummies before I throw in my data, but then the feature importance will be calculated for each dummy, not the original categorical ones. Since I also need to order all of my original variables (including numerical + categorical) by importance, I am wondering how to get the importance of my original variables? Is it simply adding up?","You could probably get by with summing the individual categories' importances into their original, parent category. But, unless these features are high-cardinality, my two cents would be to report them individually. I tend to err on the side of being more explicit with reporting model performance/importance measures.",0.0,False,1,5823 -2018-11-15 20:22:49.817,How to run a briefly running Docker container on Azure on a daily basis?,"In the past, I've been using WebJobs to schedule small recurrent tasks that perform a specific background task, e.g., generating a daily summary of user activities. For each task, I've written a console application in C# that was published as an Azure WebJob. -Now I'd like to execute some Python code daily that is already working in a Docker container. I think I figured out how to get a container running in Azure. Right now, I want to minimize the operating cost since the container will only run for a duration of 5 minutes. Therefore, I'd like to somehow schedule that my container starts once per day (at 1am) and shuts down after completion.
How can I achieve this setup in Azure?",I'd probably write a scheduled build job on VSTS\whatever to run at 1am daily to launch a container on Azure Container Instances. The container should shut down on its own when the program exits (so your program has to do that without help from outside).,1.2,True,1,5824 -2018-11-16 16:47:57.803,MongoDB - how can i set a documents limit to my capped collection?,"I'm fairly new to MongoDB. I need my Python script to query new entries from my database in real time, but the only way to do this seems to be replica sets, but my database is not a replica set, or with a tailable cursor, which is only for capped collections. -From what I understood, a capped collection has a limit, but since I don't know how big my database is gonna be and when I'm gonna need to send data there, I am thinking of putting the limit at 3-4 million documents. Would this be possible? -How can I do that?","So do you want to increase the size of the capped collection? -If yes, and if you know the average document size, then you may define the size like: -db.createCollection(""sample"", { capped : true, size : 10000000, max : 5000000 } ) here 5000000 is the max number of documents, with a size limit of 10000000 bytes",0.3869120172231254,False,1,5825 -2018-11-17 02:57:21.293,Import aar of Android library in Python,"I have written an Android library and built an aar file. And I want to write a Python program to use the aar library. Is it possible to do that? If so, how do I do it? Thanks",There is no way to include all dependencies in your aar file. So according to the open source licenses you can add their sources to your project.,0.0,False,1,5826 -2018-11-17 12:15:24.270,GraphQL/Graphene for backend calls in Django's templates,"I just installed Graphene on my Django project and would like to use it also for the back-end, templating. So far, I find only tutorials on how to use it for the front-end, with no mention of the back-end. - -Should I suppose that it is not a good idea to use it instead of a SQL database? If yes, then why? Is there a downside in speed in comparison to SQL databases like MySQL? -What's the best option for retrieving the data for templates in Python? I mean, best for performance. - -Thanks.","GraphQL is an API specification. It doesn't specify how data is stored, so it is not a replacement for a database. -If you're using GraphQL, you don't use Django templates to specify the GraphQL output, because GraphQL specifies the entire HTTP response from the web service, so this question doesn't make sense.",0.6730655149877884,False,1,5827 -2018-11-17 18:20:40.807,How to use F-score as error function to train neural networks?,"I am pretty new to neural networks. I am training a network in TensorFlow, but the number of positive examples is much much less than the number of negative examples in my dataset (it is a medical dataset). -So, I know that the F-score calculated from precision and recall is a good measure of how well the model is trained. -I have used error functions like cross-entropy loss or MSE before, but they are all based on accuracy calculation (if I am not wrong). But how do I use this F-score as an error function? Is there a TensorFlow function for that? Or do I have to create a new one? -Thanks in advance.","The loss value and accuracy are different concepts. The loss value is used for training the NN.
However, accuracy or other metrics are used to evaluate the training result.",0.0,False,1,5828 -2018-11-17 20:57:16.567,How to determine file path in Google colab?,"I mounted my drive using this: -from google.colab import drive -drive.mount('/content/drive/') -I have a file inside a folder that I want the path of. How do I determine the path? -Say the folder that contains the file is named 'x' inside my drive",The path will be /content/drive/My\ Drive/x/the_file.,1.2,True,2,5829 -2018-11-17 20:57:16.567,How to determine file path in Google colab?,"I mounted my drive using this: -from google.colab import drive -drive.mount('/content/drive/') -I have a file inside a folder that I want the path of. How do I determine the path? -Say the folder that contains the file is named 'x' inside my drive","The path as a parameter for a function will be /content/drive/My Drive/x/the_file, so without the backslash inside My Drive",0.5457054096481145,False,2,5829 -2018-11-17 23:12:26.597,virtualenv - Birds Eye View of Understanding,"Using Windows. -Learning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. -virtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. - -I was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line under source ENV/bin/activate, then cd my way to where my script is stored? -By running pip freeze, that creates a requirements.txt file in that project folder that is just a txt copy of the dependencies of that virtual env? -If I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it. -$ env1/bin/pip freeze > requirements.txt -$ env2/bin/pip install -r requirements.txt - -Guess I'm confused by the ""requirements"" description. Isn't it best practice to always call our requirements requirements.txt? If that's the case, how does env2 know I want env1's requirements? -Thank you for any info or suggestions. Really appreciate the assistance. -I created a virtualenv C:\Users\admin\Documents\Enviorments>virtualenv django_1 -Using base prefix'c:\\users\\admin\\appdata\\local\\programs\\python\\python37-32' -New python executable in C:\Users\admin\Documents\Enviorments\django_1\Scripts\python.exe Installing setuptools, pip, wheel...done. -How do I activate it? source django_1/bin/activate doesn't work? -I've tried: source C:\Users\admin\Documents\Enviorments\django_1/bin/activate Every time I get : 'source' is not recognized as an internal or external command, operable program or batch file.","* disclaimer * I mainly use conda environments instead of virtualenv, but I believe that most of this is the same across both of them and is true in your case. - -You should be able to access your scripts from any environment you are in. If you have virtenvA and virtenvB then you can access your script from inside either of your environments. All you would do is activate one of them and then run python /path/to/my/script.py, but you need to make sure any dependent libraries are installed. -Correct, but for clarity the requirements file contains a list of the dependencies by name only. It doesn't contain any actual code or packages.
You can print out a requirements file, but it should just be a list of package names and their version numbers, like pandas 1.0.1 numpy 1.0.1 scipy 1.0.1 etc. -In the lines of code you have here, you would export the dependency list of env1 and then you would install these dependencies in env2. If env2 was empty then it will now just be a copy of env1; otherwise it will be the same but with all the packages of env1 added, and if it had a different version of some of the same packages then these would be overwritten",0.0,False,2,5830 -2018-11-17 23:12:26.597,virtualenv - Birds Eye View of Understanding,"Using Windows. -Learning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. -virtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. - -I was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line under source ENV/bin/activate, then cd my way to where my script is stored? -By running pip freeze, that creates a requirements.txt file in that project folder that is just a txt copy of the dependencies of that virtual env? -If I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it. -$ env1/bin/pip freeze > requirements.txt -$ env2/bin/pip install -r requirements.txt - -Guess I'm confused by the ""requirements"" description. Isn't it best practice to always call our requirements requirements.txt? If that's the case, how does env2 know I want env1's requirements? -Thank you for any info or suggestions. Really appreciate the assistance. -I created a virtualenv C:\Users\admin\Documents\Enviorments>virtualenv django_1 -Using base prefix'c:\\users\\admin\\appdata\\local\\programs\\python\\python37-32' -New python executable in C:\Users\admin\Documents\Enviorments\django_1\Scripts\python.exe Installing setuptools, pip, wheel...done. -How do I activate it? source django_1/bin/activate doesn't work? -I've tried: source C:\Users\admin\Documents\Enviorments\django_1/bin/activate Every time I get : 'source' is not recognized as an internal or external command, operable program or batch file.","virtualenv simply creates a new Python environment for your project. Think of it as another copy of Python that you have on your system. A virtual environment is helpful for development, especially if you will need different versions of the same libraries. -The answer to your first question is yes: for each project that you use virtualenv with, you need to activate it first. After activating, when you run a Python script (not just your project's scripts, but any Python script), it will use the dependencies and configuration of the active Python environment. -The answer to the second question is that pip freeze > requirements.txt will create the requirements file in the active folder, not in your project folder. So, let's say in your cmd/terminal you are in C:\Desktop; then the requirements file will be created there. If you're in the C:\Desktop\myproject folder, the file will be created there. The requirements file will contain the packages installed on the active virtualenv. -The answer to the 3rd question is related to the second. Simply, you need to write the full path of the second requirements file.
So if you are in the first project and want to install packages from the second virtualenv, you run it like env2/bin/pip install -r /path/to/my/first/requirements.txt. If in your terminal you are in an active folder that does not have a requirements.txt file, then running pip install will give you an error. Running the command alone does not tell pip which requirements file you want to use; you specify it. -I created a virtualenv -C:\Users\admin\Documents\Enviorments>virtualenv django_1 Using base prefix 'c:\\users\\admin\\appdata\\local\\programs\\python\\python37-32' New python executable in C:\Users\admin\Documents\Enviorments\django_1\Scripts\python.exe Installing setuptools, pip, wheel...done. -How do I activate it? source django_1/bin/activate doesn't work? -I've tried: source C:\Users\admin\Documents\Enviorments\django_1/bin/activate Every time I get : 'source' is not recognized as an internal or external command, operable program or batch file.",0.0,False,2,5830 -2018-11-19 08:19:34.017,How do I efficiently understand a framework with sparse documentation?,"I have the problem that for a project I need to work with a framework (Python) that has poor documentation. I know what it does, since it is the back end of a running application. I also know that no framework is good if the documentation is bad and that I should probably code it myself. But I have a time constraint. Therefore my question is: is there a cooking recipe for how to understand a poorly documented framework? -What I tried until now is checking some functions and identifying the organizational units in the framework, but I am lacking a system to do it more effectively.","If I were you, with time constraints, and bound to use a specific framework, I would go about it in the following manner: - -List down the use cases I desire to implement using the framework -Identify the APIs provided by the framework that help me implement the use cases -Prototype the use cases based on the available documentation and reading - -The prototyping is not implementing the entire use case, but identifying the building blocks around the case and implementing them. E.g., if my use case is to fetch the Students, along with their courses, and if I were using Hibernate to implement this, I would prototype the database access, validating how easily I am able to access the database using Hibernate, or how easily I am able to get the relational data by means of joining/aggregation etc. -The prototyping will help me figure out the possible limitations/bugs in the framework. If the limitations are show-stoppers, I will implement the supporting APIs myself; or I can take a call to scrap the entire framework and write one myself; whichever makes more sense.",0.3869120172231254,False,1,5831 -2018-11-20 02:45:03.200,Python concurrent.futures.ThreadPoolExecutor max_workers,"I have been searching for a long time on the net, but to no avail. Please help or try to give me some ideas on how to achieve this. -When I use the Python module concurrent.futures.ThreadPoolExecutor(max_workers=None), I want to know what number is suitable for max_workers. -I've read the official document. -I still don't know what number is suitable when I'm coding. - -Changed in version 3.5: If max_workers is None or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that ThreadPoolExecutor is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for ProcessPoolExecutor. - -How can I understand ""max_workers"" better?
-This is my first time asking a question; thank you very much.","You can take max_workers as the number of threads. -If you want to make the best use of the CPUs, you should keep them running (instead of sleeping). -Ideally, if you set it to None, there will be (CPU count * 5) threads at most. On average, each CPU has 5 threads to schedule. Then if one of them goes to sleep, another thread will be scheduled.",0.9999092042625952,False,1,5832 -2018-11-20 20:23:47.973,wget with subprocess.call(),"I'm working on a domain fronting project. Basically I'm trying to use the subprocess.call() function to interpret the following command: -wget -O - https://fronteddomain.example --header 'Host: targetdomain.example' -With the proper domains, I know how to domain front; that is not the problem. I just need some help with writing the Python subprocess.call() function using wget.","I figured it out using curl: -call([""curl"", ""-s"", ""-H"", ""Host: targetdomain.example"", ""-H"", ""Connection: close"", ""frontdomain.example""])",1.2,True,1,5833 -2018-11-20 23:58:45.450,How to install Poppler to be used on AWS Lambda,"I have to run pdf2image on my Python Lambda Function in AWS, but it requires poppler and poppler-utils to be installed on the machine. -I have tried to search in many different places how to do that but could not find anything or anyone that has done that using lambda functions. -Would any of you know how to generate poppler binaries, put them in my Lambda package and tell Lambda to use them? -Thank you all.","Hi @Alex Albracht, thanks for the compiled easy instructions! They helped a lot. But I really struggled with getting the lambda function to find the poppler path. So, I'll try to add that up with an effort to make it clear. -The binary files should go in a zip folder having the structure: -poppler.zip -> bin/poppler -where the poppler folder contains the binary files. This zip folder can be then uploaded as a layer in AWS lambda. -For pdf2image to work, it needs the poppler path. This should be included in the lambda function in the format - ""/opt/bin/poppler"". -For example, -poppler_path = ""/opt/bin/poppler"" -pages = convert_from_path(PDF_file, 500, poppler_path=poppler_path)",0.0,False,1,5834 -2018-11-21 13:30:25.713,"CPLEX Error 1016: Promotional version , use academic version CPLEX","I am using Python with CPLEX. When I finished my model I ran the program, and it threw the following error: -CplexSolverError: CPLEX Error 1016: Promotional version. Problem size limits exceeded. -I have the IBM Academic CPLEX installed; how can I make Python recognize this and not the promotional version?","You can go to the directory where you installed CPLEX. For example, D:\Cplex. -After that you will see a folder named cplex; click on that --> python --> choose the version of your Python (e.g.: 3.6), then choose the folder x64_win64; you will see another file named cplex. -You copy this file into your Python site-packages ^^ and then you will not be restricted",1.2,True,1,5835 -2018-11-23 22:49:20.307,How can i create a persistent data chart with Flask and Javascript?,"I want to add a real-time chart to my Flask webapp. This chart, other than current updated data, should contain historical data too. -At the moment I can create the chart and I can make it real time, but I have no idea how to make the data 'persistent', so I can't see what the chart looked like days or weeks ago.
-I'm using a Javascript charting library, while data is being sent from my Flask script, but what's not really clear is how I can ""store"" my data in Javascript. At the moment, indeed, the chart will reset each time the page is loaded. -How would it be possible to accomplish that? Is there an example for it?","You can try to store the data in a database and/or in a file and extract it from there. -You can also try to use Dash, or you can make a menu on the right side with dates like 21 September and see the chart from that day. -For Dash you can look on YouTube at Sentdex",0.0,False,1,5836 -2018-11-25 13:55:55.643,How do I count how many items are in a specific row in my RDD,"As you can tell, I'm fairly new to using PySpark. My RDD is set out as follows: -(ID, First name, Last name, Address) -(ID, First name, Last name, Address) -(ID, First name, Last name, Address) -(ID, First name, Last name, Address) -(ID, First name, Last name, Address) - Is there any way I can count how many of these records I have stored within my RDD, such as counting all the IDs in the RDD, so that the output would tell me I have 5 of them? -I have tried using RDD.count(), but that just seems to return how many items I have in my dataset in total.","If you have an RDD of tuples like RDD[(ID, First name, Last name, Address)] then you can perform the below operations to do different types of counting. - -Count the total number of elements/rows in your RDD. -rdd.count() -Count the distinct IDs from your above RDD. Select the ID element and then do a distinct on top of it. -rdd.map(lambda x : x[0]).distinct().count() - -Hope it helps with doing the different sorts of counting. -Let me know if you need any further help here. -Regards, -Neeraj",0.0,False,1,5837 -2018-11-25 19:22:29.680,Adding charts to a Flask webapp,"I created a web app with Flask where I'll be showing data, so I need charts for it. -The problem is that I don't really know how to do that, so I'm trying to find the best way to do it. I tried to use a Javascript charting library on my frontend and send the data to the chart using SocketIO, but the problem is that I need to send that data frequently and at a certain point I'll be having a lot of data, so sending a huge load of data each time through AJAX/SocketIO would not be the best thing to do. -To solve this, I had this idea: could I generate the chart from my backend, instead of sending data to the frontend? I think it would be the better thing to do, since I won't have to send the data to the frontend each time and there won't be a need to generate a ton of data each time the page is loaded, since the chart will be processed on the backend. -So would it be possible to generate a chart from my Flask code in Python and visualize it on my webpage? Is there a good library to do that?",Try to use Dash; it is a Python library for web charts,1.2,True,1,5838 -2018-11-25 22:35:57.257,How to strip off left side of binary number in Python?,"I got this binary number 101111111111000 -I need to strip off the 8 most significant bits and have 11111000 at the end. -I tried to make 101111111111000 << 8, but this results in 10111111111100000000000; it doesn't have the same effect as >>, which strips the lower bits. So how can this be done?
",1.2,True,1,5838 -2018-11-25 22:35:57.257,How to strip off left side of binary number in Python?,"I got this binary number: 101111111111000 -I need to strip off the 8 most significant bits and end up with 11111000. -I tried 101111111111000 << 8, but this results in 10111111111100000000000; it doesn't have the same effect as >>, which strips the lower bits. So how can this be done? The final result MUST BE binary type.","To achieve this for a number x with n digits, one can use -x&(2**(len(bin(x))-2-8)-1) -The -2 strips the '0b' prefix, the -8 drops the 8 leftmost bits. -Put simply, it ANDs your number with just enough 1s that the 8 leftmost bits are set to 0.
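-A quick interpreter check of that mask, using the number from the question (note it has 15 bits, so dropping the 8 leftmost leaves the low 7): -x = 0b101111111111000 -mask = 2**(len(bin(x)) - 2 - 8) - 1  # keep everything except the 8 leftmost bits -print(bin(x & mask))  # '0b1111000'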
",0.0,False,1,5839 -2018-11-26 06:17:56.463,how do I clear a printed line and replace it with updated variable IDLE,"I need to clear a printed line, but so far I have found no good answers for Python 3.7, IDLE, on Windows 10. I am trying to make a simple program that prints a changing variable, but I don't want tons of new lines being printed; I want to try to get it all on one line. -Is it possible to print a variable that has been updated later on in the code? -Do remember I am doing this in IDLE, not kali or something like that. -Thanks for all your help in advance.","The Python language definition defines when bytes will be sent to a file, such as sys.stdout, the default file for print. It does not define what the connected device does with the bytes. -When running code from IDLE, sys.stdout is initially connected to IDLE's Shell window. Shell is not a terminal and does not interpret terminal control codes other than '\n'. The reasons are a) IDLE is aimed at program development, by programmers, rather than program running by users, and developers sometimes need to see all the output from a program; and b) IDLE is cross-platform, while terminal behaviors vary, depending on the system, settings, and current modes (such as insert versus overwrite). -However, I am planning to add an option to run code in an IDLE editor with sys.stdout directed to the local system terminal/console.",0.3869120172231254,False,1,5840 -2018-11-27 09:51:12.057,how to run python in eclipse with both py2 and py3?,"Pre: -I installed both Python 2.7 and Python 3.7. -Eclipse has PyDev installed, with two interpreters configured, one for each Python version. -I have a project with some .py scripts. -Question: -I choose one .py file; I want to run it in py2, then I want to run it in py3 (manually). -I know that each file could have its own run configuration, but a configuration can only choose one interpreter at a time. -I also know that py.exe could help you get the right version of Python. -I tried to add an interpreter with py.exe, but PyDev keeps telling me that ""python stdlibs"" is necessary for an interpreter, while only python3's lib shows up. -So, is there a way to just right-click the file and choose ""run using interpreter xxx""? -Or does PyDev have the ability to choose interpreters by ""#! python2""/""#! python3"" at the file head?","I didn't understand what the actual workflow you want is... -Do you want to run each file on a different interpreter (say you have mod1.py and want to run it always on py2, while mod2.py should always run on py3), or do you want to run the same file on multiple interpreters (i.e.: you have mod1.py and want to run it on both py2 and py3), or something else? -So, please give more information on what your actual problem is and what you want to achieve... -Options to run a single file in multiple interpreters: -Always run with the default interpreter (so, make a regular run -- F9 to run the current editor -- change the default interpreter -- using Ctrl+Shift+Alt+I -- and then rerun with Ctrl+F11). -Create a .sh/.bat which will always do 2 launches (initially configure it to just be a wrapper that launches with one Python; then, after properly configuring it inside PyDev that way, change it to launch Python 2 times, once with py2 and once with py3 -- note that I haven't tested this, but it should work in theory).",0.3869120172231254,False,1,5841 -2018-11-27 23:32:32.593,Python regex to identify capitalised single word lines in a text abstract,"I am looking for a way to extract words from text if they match the following conditions: -1) they are capitalised -and -2) they appear on a new line on their own (i.e. no other text on the same line). -I am able to extract all capitalised words with this code: - caps=re.findall(r""\b[A-Z]+\b"", mytext) -but can't figure out how to implement the second condition. Any help will be greatly appreciated.","Try anchoring the pattern to whole lines with the re.MULTILINE flag, e.g. re.findall(r'^[A-Z]+$', mytext, re.MULTILINE); with re.MULTILINE, ^ and $ match at the beginning and end of each line, which enforces the second condition.",-0.2012947653214861,False,1,5842 -2018-11-28 12:15:31.400,Python and Dart Integration in Flutter Mobile Application,"Can I do these two things: -Is there any library in Dart for sentiment analysis? -Can I use Python (for sentiment analysis) in Dart? -My main motive for these questions is that I'm working on a Flutter application that uses sentiment analysis, and I have no idea how to do that. -Can anyone please help me solve this problem? -Or is there any way I can do text sentiment analysis in the Flutter app?","You can create an API using Python and then serve it to your mobile app (Flutter) using HTTP requests.
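-A minimal sketch of such an endpoint, assuming Flask for the server and TextBlob for the score (any Python sentiment library would do): -from flask import Flask, request, jsonify -from textblob import TextBlob -app = Flask(__name__) -@app.route('/sentiment', methods=['POST']) -def sentiment(): -    text = request.json['text']  # the Flutter app posts JSON like {'text': '...'} -    return jsonify({'polarity': TextBlob(text).sentiment.polarity}) -app.run() -The Flutter side then only needs an HTTP POST to /sentiment.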
",0.6730655149877884,False,1,5843 -2018-11-28 15:25:07.900,Why is LocationLocal: Relative Alt dropping into negative values on a stationary drone?,"I'm running the Set_Attitude_Target example on an Intel Aero with ArduPilot. The code is working as intended, but there is a clear sensor error that becomes more evident the longer I run the experiment. -In short, the altitude report from the example says that in LocationLocal there is a relative altitude of -0.01, which gets smaller and smaller the longer the drone stays on. -If the drone takes off, say, 1 meter, then the relative altitude is less than that, as the drift is subtracted from it. -I ran the same example with the throttle set to a low value so the drone would stay stationary while ""trying to take off"" with insufficient thrust. For the 5 seconds that the drone was trying to take off, as well as after it gave up, disarmed and continued to run the code, the console read incremental losses to altitude, until I stopped it at -1 meter. -Where is this sensor error coming from, and how do I remedy it?","As per Agustinus Baskara's comment on the original post, it would appear the built-in sensor is simply that bad; it can't be improved upon with software.",0.0,False,1,5844 -2018-11-29 00:38:11.560,The loss function and evaluation metric of XGBoost,"I am confused now about the loss functions used in XGBoost. Here is how I feel confused: -We have objective, which is the loss function to be minimized, and eval_metric, the metric used to represent the learning result. These two are totally unrelated (if we don't consider such facts as that, for classification, only logloss and mlogloss can be used as eval_metric). Is this correct? If so, then for a classification problem, how can you use rmse as a performance metric? -Take two options for objective as an example: reg:logistic and binary:logistic. For 0/1 classification, usually binary logistic loss, or cross entropy, should be considered as the loss function, right? So which of the two options is for this loss function, and what's the value of the other one? Say, if binary:logistic represents the cross entropy loss function, then what does reg:logistic do? -What's the difference between multi:softmax and multi:softprob? Do they use the same loss function and just differ in the output format? If so, that should be the same for reg:logistic and binary:logistic as well, right? -Supplement for the 2nd problem: -say, the loss function for a 0/1 classification problem should be -L = sum(y_i*log(P_i) + (1-y_i)*log(1-P_i)). So do I need to choose binary:logistic here, or reg:logistic, to let the xgboost classifier use loss function L? If it is binary:logistic, then what loss function does reg:logistic use?","'binary:logistic' uses -(y*log(y_pred) + (1-y)*log(1-y_pred)) -'reg:logistic' uses (y - y_pred)^2 -To get a total estimation of error, we sum all errors and divide by the number of samples. -You can find this in the basics, when looking at linear regression vs logistic regression: -Linear regression uses (y - y_pred)^2 as the cost function. -Logistic regression uses -(y*log(y_pred) + (1-y)*log(1-y_pred)) as the cost function. -Evaluation metrics are a completely different thing. They are designed to evaluate your model. You can be confused by them because it is logical to use some evaluation metrics that are the same as the loss function, like MSE in regression problems. However, in binary problems it is not always wise to look at the logloss. My experience has taught me (in classification problems) to generally look at AUC ROC. -EDIT -According to the xgboost documentation: -reg:linear: linear regression -reg:logistic: logistic regression -binary:logistic: logistic regression for binary classification, output probability -So I'm guessing: -reg:linear: is, as we said, (y - y_pred)^2 -reg:logistic is -(y*log(y_pred) + (1-y)*log(1-y_pred)) with predictions rounded at a 0.5 threshold -binary:logistic is plain -(y*log(y_pred) + (1-y)*log(1-y_pred)) (returns the probability) -You can test it out and see if it does as I've described. If so, I will update the answer; otherwise, I'll just delete it :<",0.9999665971563038,False,1,5845 -2018-11-29 09:16:08.143,"After I modified my Python code in Pycharm, how to deploy the change to my Portainer?","Perhaps it is a basic question, but I am really not proficient with Portainer. -I have a local Portainer instance and use PyCharm to manage the Python code. What should I do, after I modify my code, to deploy the change to the local Portainer? -Thx","If you have mounted the folder where your code resides directly in the container, the changes will also be applied in your container, so no further action is required. -If you have not mounted the folder to your container (for example, if you copy the code when you build the image), you would have to rebuild the image. Of course this is a lot more work, so I would recommend using mounted volumes.",0.0,False,1,5846 -2018-11-30 04:23:07.330,"Sqlalchemy before_execute event - how to pass some external variable, say app user id?","I am trying to obtain an application variable (app user id) in the before_execute(conn, clauseelement, multiparam, param) method. The app user id is stored in a Python HTTP request object which I do not have any access to in the db event.
-Is there any way to associate a piece of external data with sqlalchemy somewhere, to fetch it in the before_execute event later? -I appreciate your time and help.","Answering my own question here with a possible solution :) -From the HTTP request, I copied the piece of data to the session object. -Since the session binding was at the engine level, I copied the data from the session to the connection object in SessionEvent.after_begin(session, transaction, connection). [Had it been a connection-level binding, we could have directly set the objects from the session object onto the connection object.] -Now the data is available in the connection object, and in before_execute() too.",0.0,False,1,5847 -2018-11-30 05:17:50.717,Session cookie is too large flask application,"I'm trying to load certain data using sessions (locally) and it has been working for some time, but now I get the following warning, and my data that was loaded through sessions is no longer being loaded: -The ""b'session'"" cookie is too large: the value was 13083 bytes but the header required 44 extra bytes. The final size was 13127 bytes but the limit is 4093 bytes. Browsers may silently ignore cookies larger than this. -I have tried using session.clear(). I also opened up Chrome developer tools and tried deleting the cookies associated with 127.0.0.1:5000. I have also tried using a different secret key for the session. -It would be greatly appreciated if I could get some help on this, since I have been searching for a solution for many hours. -Edit: -I am not looking to increase my limit by switching to server-side sessions. Instead, I would like to know how I could clear my client-side session data so I can reuse it. -Edit #2: -I figured it out. I forgot that I pushed way more data to my database, so every time a query was performed, the session would fill up immediately.","It looks like you are using the client-side type of session that is set by default with Flask, which has a limited capacity of 4KB. You can use a server-side type of session that will not have this limit, for example by using a back-end file system (you save the session data in a file system on the server, not in the browser). To do so, set the configuration variable 'SESSION_TYPE' to 'filesystem'. -You can check other alternatives for the 'SESSION_TYPE' variable in the Flask-Session documentation.
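-A minimal sketch of that setup, assuming the Flask-Session extension is installed (pip install Flask-Session): -from flask import Flask, session -from flask_session import Session -app = Flask(__name__) -app.config['SESSION_TYPE'] = 'filesystem'  # session data lives on the server; the cookie only holds an id -Session(app) -# session['key'] = value now writes to server-side storage instead of the cookie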
- -select (sum(acc_Value)) from accInfo where acc_Name = 'ABC' - -The purpose of the query is to get the sum of all the values in acc_Value column for all the rows matchin acc_Name = 'ABC'. -The output i am getting when using cur.fetchone() is - -(Decimal('256830696'),) - -Now how to get that value ""256830696"" alone in python. -Thanks in advance.","It's a tuple, just take the 0th index",-0.3869120172231254,False,1,5850 -2018-12-01 14:09:56.980,Saving objects from tk canvas,"I'm trying to make a save function in a program im doing for bubbling/ballooning drawings. The only thing I can't get to work is save a ""work copy"". As if a drawing gets revision changes, you don't need to redo all the work. Just load the work copy, and add/remove/re-arrage bubbles. -I'm using tkinter and canvas. And creates ovals and text for bubbles. But I can't figure out any good way to save the info from the oval/text objects. -I tried to pickle the whole canvas, but that seems like it won't work after some googeling. -And pickle every object when created seems to only save the object id. 1, 2 etc. And that also won't work since some bubbles will be moved and receive new coordinates. They might also have a different color, size etc. -In my next approach I'm thinking of saving the whole ""can.create_oval( x1, y1, x2, y2, fill = fillC, outli...."" as a string to a txt and make the function to recreate a with eval() -Any one have any good suggestion on how to approach this?","There is no built-in way to save and restore the canvas. However, the canvas has methods you can use to get all of the information about the items on the canvas. You can use these methods to save this information to a file and then read this file back and recreate the objects. - -find_all - will return an ordered list of object ids for all objects on the canvas -type - will return the type of the object as a string (""rectangle"", ""circle"", ""text"", etc) -itemconfig - returns a dictionary with all of the configuration values for the object. The values in the dictionary are a list of values which includes the default value of the option at index 3 and the current value at index 4. You can use this to save only the option values that have been explicitly changed from the default. -gettags - returns a list of tags associated with the object",1.2,True,1,5851 -2018-12-03 01:15:30.087,Different sized vectors in word2vec,"I am trying to generate three different sized output vectors namely 25d, 50d and 75d. I am trying to do so by training the same dataset using the word2vec model. I am not sure how I can get three vectors of different sizes using the same training dataset. Can someone please help me get started on this? I am very new to machine learning and word2vec. Thanks","You run the code for one model three times, each time supplying a different vector_size parameter to the model initialization.",1.2,True,1,5852 -2018-12-03 03:23:29.990,data-item-url is on localhost instead of pythonanywhere (wagtail + snipcart project),"So instead of having data-item-url=""https://miglopes.pythonanywhere.com/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/"" -it keeps on appearing -data-item-url=""http://localhost/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/"" -how do i remove the localhost so my snipcart can work on checkout?","Without more details of where this tag is coming from it's hard to know for sure... 
",1.2,True,1,5852 -2018-12-03 03:23:29.990,data-item-url is on localhost instead of pythonanywhere (wagtail + snipcart project),"So instead of having data-item-url=""https://miglopes.pythonanywhere.com/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/"" -it keeps on appearing -data-item-url=""http://localhost/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/"" -how do I remove the localhost so my snipcart can work on checkout?","Without more details of where this tag is coming from it's hard to know for sure... but most likely you need to update your site's hostname in the Wagtail admin, under Settings -> Sites.",0.0,False,1,5853 -2018-12-03 21:09:40.843,Using MFCC's for voice recognition,"I'm currently using the Fourier transformation in conjunction with Keras for voice recognition (speaker identification). I have heard MFCC is a better option for voice recognition, but I am not sure how to use it. -I am using librosa in Python (3) to extract 20 MFCC features. My question is: which MFCC features should I use for speaker identification? -In addition to this, I am unsure how to implement these features. What I would do is get the necessary features and make one long vector input for a neural network. However, it is also possible to display them as colors, so could image recognition also be possible, or is this more aimed at speech, and not speaker, recognition? -In short, I am unsure where I should start, as I am not very experienced with image recognition and have no idea where to start. -Thanks in advance!!","You can use MFCCs with dense layers / multilayer perceptron, but probably a Convolutional Neural Network on the mel-spectrogram will perform better, assuming that you have enough training data.",0.0,False,1,5854 -2018-12-04 18:22:55.240,How to add text to a file in python3,"Let's say I have the following file, -dummy_file.txt (contents below) -first line -third line -How can I add a line to that file right in the middle so the end result is: -first line -second line -third line -I have looked into opening the file with the append option; however, that adds the line to the end of the file.","The standard file methods don't support inserting into the middle of a file. You need to read the file, add your new data to the data that you read in, and then re-write the whole file.
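-A minimal sketch of that read-modify-write approach for the example above: -with open('dummy_file.txt') as f: -    lines = f.readlines() -lines.insert(1, 'second line\n')  # position 1 = after the first line -with open('dummy_file.txt', 'w') as f: -    f.writelines(lines)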
",1.2,True,1,5855 -2018-12-05 08:13:04.893,DataFrame view in PyCharm when using pyspark,"I create a pyspark dataframe and I want to see it in the SciView tab in PyCharm when I debug my code (like I used to do when I worked with pandas). -It says ""Nothing to show"" (the dataframe exists, I can see it when I use the show() command). -Does someone know how to do it, or maybe there is no integration between PyCharm and pyspark dataframes in this case?","PyCharm does not support Spark dataframes; you should call the toPandas() method on the dataframe. As @abhiieor mentioned in a comment, be aware that you can potentially collect a lot of data; you should first limit() the number of rows returned.",1.2,True,1,5856 -2018-12-08 01:12:11.607,"Is it possible to trigger a script or program if any data is updated in a database, like MySQL?","It doesn't have to be exactly a trigger inside the database. I just want to know how I should design this, so that when changes are made inside MySQL or SQL Server, some script could be triggered.","One way would be to keep a counter on the last updated row in the database, and then you need to keep polling (checking) the database through Python for new records in short intervals. -If the value in the counter is increased, then you could use the subprocess module to call another Python script.",0.0,False,1,5857 -2018-12-09 22:47:38.660,Error for word2vec with GoogleNews-vectors-negative300.bin,"The version of Python is 3.6. -I tried to execute my code, but there are still some errors, as below: -Traceback (most recent call last): -File ""C:\Users\tmdgu\Desktop\NLP-master1\NLP-master\Ontology_Construction.py"", line 55, in , binary=True) -File ""E:\Program Files\Python\Python35-32\lib\site-packages\gensim\models\word2vec.py"", line 1282, in load_word2vec_format -raise DeprecationWarning(""Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead."") -DeprecationWarning: Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead. -How do I fix the code? Or is the path to the data wrong?","This is just a warning, not a fatal error. Your code likely still works. -""Deprecation"" means a function's use has been marked by the authors as no longer encouraged. -The function typically still works, but may not for much longer: it may become unreliable or unavailable in some future library release. Often, there's a newer, more-preferred way to do the same thing, so you don't trigger the warning message. -Your warning message points you at the now-preferred way to load word-vectors of that format: use KeyedVectors.load_word2vec_format() instead. -Did you try using that, instead of whatever line of code (not shown in your question) that you were trying before seeing the warning?
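-For reference, the suggested call looks like this (with the filename from the question title): -from gensim.models import KeyedVectors -vectors = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)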
",0.6730655149877884,False,1,5858 -2018-12-11 00:40:44.053,Use of Breakpoint Method,"I am new to Python and am unsure of how the breakpoint method works. Does it open the debugger for the IDE or some built-in debugger? -Additionally, I was wondering how that debugger would be able to be operated. -For example, I use Spyder. Does that mean that if I use the breakpoint() method, Spyder's debugger will open, through which I could use the Debugger dropdown menu, or would some other debugger open? -I would also like to know how this function works in conjunction with the breakpointhook() method.","No, the debugger will not open itself automatically as a consequence of setting a breakpoint. -So you first set a breakpoint (or more of them), and then manually launch a debugger. -After this, the debugger will perform your code as usual, but will stop performing instructions when it reaches a breakpoint - it will not perform the instruction at the breakpoint itself. It will pause just before it, giving you an opportunity to perform some debug tasks, such as -inspect variable values, -set variables manually to other values, -continue performing instructions step by step (i. e. only the next instruction), -continue performing instructions to the next breakpoint, -prematurely stop debugging your program. -This is the common scenario for all debuggers of all programming languages (and their IDEs). -For IDEs, launching a debugger will -enable or reveal debugging instructions in their menu system, -show a toolbar for them and -enable hot keys for them. -Without setting at least one breakpoint, most debuggers perform the whole program without a pause (as when launching it without a debugger), so you will have no opportunity to perform any debugging task. -(Some IDEs have an option to launch a debugger in the ""first instruction, then a pause"" mode, so you need not set breakpoints in advance in this case.) -Yes, the breakpoint() built-in function (introduced in Python 3.7) stops executing your program, drops it into debugging mode, and you may use Spyder's debugger drop-down menu. -(It isn't Spyder's debugger, only its drop-down menu; the debugger used will still be pdb, i.e. the default Python DeBugger.) -The connection between the breakpoint() built-in function and the breakpointhook() function (from the sys built-in module) is very straightforward: the first one directly calls the second one. -The natural question is why we need two functions with exactly the same behavior. -The answer is in the design: the breakpoint() function may be changed indirectly, by changing the behavior of the breakpointhook() function. -For example, IDE creators may change the behavior of the breakpointhook() function so that it will launch their own debugger, not pdb.",1.2,True,1,5859 -2018-12-11 01:14:39.167,Is there an appropriate version of Pygame for Python 3.7 installed with Anaconda?,"I'm new to programming and I just downloaded Anaconda a few days ago for Windows 64-bit. I came across the Invent with Python book and decided I wanted to work through it, so I downloaded that too. I ended up running into a couple of issues with it not working (somehow I ended up with Spyder (Python 2.7), and end=' ' wasn't doing what it was supposed to, so I uninstalled and reinstalled Anaconda; originally I did download the 3.7 version). It looked as if I had the 2.7 version of Pygame. I'm looking around and I don't see a Pygame version for Python 3.7 that is compatible with Anaconda. The only ones I saw were for Mac or not meant to work with Anaconda. This is all pretty new to me, so I'm not sure what my options are. Thanks in advance. -Also, how do I delete the incorrect Pygame version?","Just use pip install pygame and Python will look for a version compatible with your installation. -If you're using Anaconda and pip doesn't work in the CMD prompt, try using the Anaconda prompt from the Start menu.",0.6730655149877884,False,1,5860 -2018-12-11 17:54:00.677,python-hypothesis: Retrieving or reformatting a falsifying example,"Is it possible to retrieve or reformat the falsifying example after a test failure? The point is to show the example data in a different format - data generated by the strategy is easy to work with in the code but not really user friendly, so I'm looking at how to display it in a different form. Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something?","Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something? -The example database uses a private format and only records the choices a strategy made to generate the falsifying example, so there's no way to extract the data of the example short of re-running the test. -Stuart's recommendation of hypothesis.note(...) is a good one.
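-A minimal sketch of the note() approach; whatever you pass to note() is printed alongside the falsifying example whenever the test fails: -from hypothesis import given, note, strategies as st -@given(st.lists(st.integers())) -def test_sorted_is_idempotent(xs): -    note('human-readable view: %r' % sorted(xs)) -    assert sorted(sorted(xs)) == sorted(xs)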
",0.0,False,1,5861 -2018-12-11 19:43:33.823,Template rest one day from the date,"In my view.py I obtain a date from my MSSQL database in this format: 2018-12-06 00:00:00.000. I pass that value as context like datedb, and in my HTML page I render it like this: {{datedb|date:""c""}}, but it shows the date with one day less, like this: -2018-12-05T18:00:00-06:00 -It is the 06, not the 05, day. -Why is this happening? How can I show the right date?","One way to solve the problem was to change to USE_TZ = False, as Willem said in the comments, but that gives another error, so I found the way to do it: just add {% load tz %} in the template and use the filter |utc on the date variables, like datedb|utc|date:'Y-m-d'.",1.2,True,1,5862 -2018-12-12 12:15:09.190,Add full anaconda package list to existing conda environment,"I know how to add single packages and I know that the conda create command supports adding a new environment with all anaconda packages installed. -But how can I add all anaconda packages to an existing environment?","I was able to solve the problem as follows: -Create a helper env with anaconda: conda create -n env_name anaconda -Activate that env: conda activate env_name -Export packages into a specification file: conda list --explicit > spec-file.txt -Activate the target environment: activate target_env_name -Import that specification file: conda install --file spec-file.txt",0.3869120172231254,False,1,5863 -2018-12-12 17:20:31.293,how to compare two text document with tfidf vectorizer?,"I have two different texts which I want to compare using tfidf vectorization. -What I am doing is: -tokenizing each document -vectorizing using TfidfVectorizer.fit_transform(tokens_list) -Now the vectors that I get after step 2 are of different shapes. -But as per the concept, we should have the same shape for both the vectors. Only then can the vectors be compared. -What am I doing wrong? Please help. -Thanks in advance.","As G. Anderson already pointed out, and to help future readers: when we use the fit function of TfidfVectorizer on document D1, it means that for D1 the bag of words is constructed. -The transform() function computes the tfidf frequency of each word in the bag of words. -Now our aim is to compare document D2 with D1. It means we want to see how many words of D1 match up with D2. That's why we perform fit_transform() on D1, and then only the transform() function on D2, which applies the bag of words of D1 and counts the inverse frequency of tokens in D2. -This would give the relative comparison of D1 against D2.
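-A minimal sketch of that fit/transform pattern, with cosine similarity for the actual comparison (d1 and d2 are assumed to be the two texts as strings): -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.metrics.pairwise import cosine_similarity -vec = TfidfVectorizer() -v1 = vec.fit_transform([d1])  # learn the vocabulary from D1 -v2 = vec.transform([d2])  # reuse D1's vocabulary, so the shapes match -print(cosine_similarity(v1, v2))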
",1.2,True,1,5864 -2018-12-13 13:43:34.987,"python, dictionaries how to get the first value of the first key","So basically I have a dictionary with x and y values, and I want to be able to get only the x value of the first coordinate and only the y value of the first coordinate, and then the same with the second coordinate and so on, so that I can use it in an if-statement.","If the values are ordered in columns (i.e. a 2-D array of coordinate pairs rather than a dict), just use -x=your_variable[:,0] y=your_variable[:,1] -I think",0.3869120172231254,False,1,5865 -2018-12-15 21:55:17.020,how to install tkinter with Pycharm?,"I used sudo apt-get install python3.6-tk and it works fine. Tkinter works if I open Python in a terminal, but I cannot get it installed on my PyCharm project. The pip install command says it cannot find Tkinter. I cannot find python-tk in the list of possible installs either. -Is there a way to get Tkinter just standard in every virtualenv when I make a new project in PyCharm? -Edit: on Linux Mint -Edit2: It is clearly a problem of PyCharm not getting tkinter. If I run my local Python file from the terminal it works fine. Just that for some reason PyCharm cannot find anything tkinter-related.","Python already has tkinter installed. It is a base module, like random or time; therefore you don't need to install it.",-0.0679224682270276,False,1,5866 -2018-12-18 01:57:32.877,Print output to console while redirect the output to a file in linux,"I am using Python on Linux and tried to use the command line to print the output log while redirecting the output and errors to a txt file. However, after I searched and tried methods such as -python [program] 2>&1 | tee output.log -it just redirected the output to output.log and the printed content disappeared. I wonder how I could print the output to the console while saving/redirecting it to output.log? It would be useful if we hope to tune the parameters while keeping an eye on the output.","You can create a screen like this: screen -L, and then run the Python script in this screen, which would give the output to the console and also save it to the file screenlog.0. You could leave the screen by using Ctrl+A+D while the script is running and check the script output by reattaching to the screen with screen -r. Also, in the screen, you won't be able to scroll past the current screen view.",0.0,False,1,5867 -2018-12-18 10:17:19.160,Regex for Sentences in python,"I have one more query. -Here are two sentences: -[1,12:12] call basic_while1() Error Code: 1046. No database selected -[1,12:12] call add() Asdfjgg Error Code: 1046. No database selected -[1,12:12] call add() -[1,12:12] -Error Code: 1046. No database selected -Now I want to get output like this: -['1','12:12',""call basic_while1""] , ['1','12:12', 'call add() Asdfjgg'],['1','12:12', 'call add()'],['1','12:12'],['','','',' Error Code: 1046. No database selected'] -I used r'^\[(\d+),(\s[0-9:]+)\]\s+(.+) as my main regex, then I modified it as far as I could tell, but it didn't help me. -I want to cut everything exactly before ""Error Code"". -How do I do that?","Basically you asked to get everything before the ""Error Code"": -I want to cut everything exact before ""Error Code"" -A simple way is a lazy match up to that marker: find = re.search(r'(.+?)(?:\s*Error Code|$)', s), and find.group(1) will give you '[1,12:12] call add() Asdfjgg', which is what you wanted. -If, after you got that string, you want the list that you requested: -desired_list = find.group(1).replace('[','').replace(']','').replace(',',' ').split()
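-A fuller sketch that yields the list shapes asked for above (empty leading fields for the bare Error Code line); the pattern is an assumption based on the sample lines: -import re -def parse(line): -    m = re.match(r'\[(\d+),\s*([0-9:]+)\]\s*(.*?)\s*(?:Error Code.*)?$', line) -    if m: -        return [m.group(1), m.group(2), m.group(3)] -    return ['', '', '', line] -print(parse('[1,12:12] call add() Asdfjgg Error Code: 1046. No database selected'))  # ['1', '12:12', 'call add() Asdfjgg']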
",0.0,False,1,5868 -2018-12-18 23:09:13.550,install numpy on python 3.5 Mac OS High sierra,"I wanted to install the numpy package for Python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. -I have it on Python 2.7, but I would also like to install it for the next versions. -Currently, I have installed Python 2.7, Python 3.5, and Python 3.7. I tried to install numpy using: -brew install numpy --with-python3 (no error) -sudo port install py35-numpy@1.15.4 (no error) -sudo port install py37-numpy@1.15.4 (no error) -pip3.5 install numpy (gives ""Could not find a version that satisfies the requirement numpy (from versions: ) -No matching distribution found for numpy"") -I can tell that it is not installed because when I type python3 and then import numpy as np, I get ""ModuleNotFoundError: No module named 'numpy'"" -Any ideas on how to make it work? -Thanks in advance.","First, you need to activate the virtual environment for the version of Python you wish to run. After you have done that, just run ""pip install numpy"" or ""pip3 install numpy"". -If you used Anaconda to install Python, then, after activating your environment, type conda install numpy.",1.2,True,2,5869 -2018-12-18 23:09:13.550,install numpy on python 3.5 Mac OS High sierra,"I wanted to install the numpy package for Python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. -I have it on Python 2.7, but I would also like to install it for the next versions. -Currently, I have installed Python 2.7, Python 3.5, and Python 3.7. I tried to install numpy using: -brew install numpy --with-python3 (no error) -sudo port install py35-numpy@1.15.4 (no error) -sudo port install py37-numpy@1.15.4 (no error) -pip3.5 install numpy (gives ""Could not find a version that satisfies the requirement numpy (from versions: ) -No matching distribution found for numpy"") -I can tell that it is not installed because when I type python3 and then import numpy as np, I get ""ModuleNotFoundError: No module named 'numpy'"" -Any ideas on how to make it work? -Thanks in advance.","If running pip3.5 --version or pip3 --version works, what is the output when you run pip3 freeze? If there is no output, it indicates that there are no packages installed for the Python 3 environment and you should be able to install numpy with pip3 install numpy.",0.0,False,2,5869 -2018-12-19 15:33:16.960,Python Vscode extension - can't change remote jupyter notebook kernel,"I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. -If I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?","Run the following command in vscode: -Python: Select interpreter to start Jupyter server -It will allow you to choose the kernel that you want.",0.0,False,2,5870 -2018-12-19 15:33:16.960,Python Vscode extension - can't change remote jupyter notebook kernel,"I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. -If I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?","The command that worked for me in vscode: -Notebook: Select Notebook Kernel",0.0,False,2,5870 -2018-12-21 02:43:43.240,Backtesting a Universe of Stocks,"I would like to develop a trend-following strategy via back-testing a universe of stocks; let's just say all NYSE or S&P 500 equities. I am asking this question today because I am unsure how to handle the storage/organization of the massive amounts of historical price data. -After multiple hours of research I am here, asking for your experience and awareness. I would be extremely grateful for any information/awareness you can share on this topic. -Personal experience background: -I know how to code. I was an Electrical Engineering major, not a CS major. -I know how to pull in stock data for individual tickers into Excel. -I am familiar with using filtering and custom studies on ThinkOrSwim. -Applied context: -From 1995 to today, let's evaluate the best performing equities on a relative strength/momentum basis. We will look to compare many technical characteristics to develop a strategy.
The key to this is having data for a universe of stocks that we can run backtests on using Python, C#, R, or any other coding language. We can then determine possible strategies by assessing the returns, the omega ratio, median excess returns, and Jensen's alpha (measured weekly) of entries and exits that are technically driven. -Here's where I am having trouble figuring out what the next step is: -Loading data for all S&P 500 companies into a single Excel workbook is just not going to work; it is too much data for Excel to handle, I feel. Each ticker is going to have multiple MB of price data. -What is the best way to get and then store the price data for each ticker in the universe? Are we looking at something like SQL or Microsoft Access here? I don't know; I don't have enough awareness on the subject of handling lots of data like this. What are your thoughts? -I have used ToS to filter stocks based off of true/false parameters over a period of time in the past; however, the capabilities of ToS are limited. -I would like a more flexible backtesting engine, like code written in Python or C#. I am not sure if Rscript is of any use. Maybe there are libraries out there that I am not aware of that would make this all possible? If there are, let me know. -I am aware that Quantopia and other web-based quant platforms are around. Are these my best bets for backtesting? Any thoughts on them? -Am I making this too complicated? -Backtesting a strategy on a single equity or several equities isn't a problem in Excel, ToS, or even TradingView. But with lots of data I'm not sure what the best option is for storing that data and then using a Python script or something to perform the backtest. -Random final thought: ultimately I would like to explore some AI assistance with optimizing strategies that were created based off parameters. I know this is a thing, but I'm not sure where to learn more about it. If you do, please let me know. -Thank you guys. I hope this wasn't too much. If you can share any knowledge to increase my awareness on the topic, I would really appreciate it. -Twitter:@b_gumm","The amount of data is too much for EXCEL or CALC. Even if you want to screen only 500 stocks from the S&P 500, you will get 2.2 million rows (approx. 220 days/year * 20 years * 500 stocks). For this amount of data, you should use an SQL database like MySQL. It is performant enough to handle this amount of data. But you have to find a way of updating it. If you fetch the complete time series daily and store it in your database, this process can take approx. 1 hour. You could also use delta downloads, but be aware of corporate actions (e.g. splits). -I don't know Quantopia, but I know a similar backtesting service where I created a Python backtesting script last year. The outcome was quite different from what I expected. The research result was that the backtesting service was calculating wrong results because of wrong data. So be cautious about the results.
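-A minimal sketch of the storage side with pandas and SQLAlchemy (the connection string, file, and table names are illustrative): -import pandas as pd -from sqlalchemy import create_engine -engine = create_engine('mysql+pymysql://user:password@localhost/quotes') -df = pd.read_csv('AAPL.csv', parse_dates=['date'])  # columns: date, open, high, low, close, volume -df['ticker'] = 'AAPL' -df.to_sql('daily_bars', engine, if_exists='append', index=False) -# an index on (ticker, date) in MySQL keeps per-ticker lookups fast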
",0.0,False,1,5871 -2018-12-21 11:15:31.803,Date Range for Facebook Graph API request on posts level,"I am working on a tool for my company, created to get data from our Facebook publications. It has not been working for a while, so I have to get all the historical data from June to November 2018. -My two scripts (one that gets the title and type of publication, and the other that gets the number of link clicks) work well for data from recent pushes, but when I try to add a date range to my Graph API request, I have some issues: -the regular query is [page_id]/posts?fields=id,created_time,link,type,name -the query for historical data is [page_id]/posts?fields=id,created_time,link,type,name,since=1529280000&until=1529712000, as the API is supposed to work with unixtime -I get perfect results for regular use, but the results for historical data only show video publications in Graph API Explorer, with a debug message saying: -The since field does not exist on the PagePost object. -Same for the ""until"" field when not using ""since"". I tried to replace ""posts/"" with ""feed/"" but it returned the exact same result... -Do you have any idea how to get all the publications from a Page I own in a certain date range?","So it seems that it is not possible to request this kind of data, unfortunately; third-party services must be used...",0.0,False,1,5872 -2018-12-23 03:14:14.787,Pyautogui mouse click on different resolution,"I'm writing a script for automating some tasks at my job. However, I need to make my script portable and try it on different screen resolutions. -So far, I have tried to multiply my coordinates by the ratio between the old and new resolutions, but this doesn't work properly. -Do you know how I can convert my X, Y coordinates for mouse clicks so they work on different resolutions?","Quick question: Are you trying to get it to click on certain buttons? (i.e. buttons that look the same on every computer you plug it into) And by portable, do you mean on a thumb drive (USB)? -You may be able to take an image of the button (i.e. cropping a screenshot) and pass it on to the opencv module; one of the modules has an image-within-image searching ability. You can pass that image along with a screenshot (using pyautogui.screenshot()) and it will return the (x,y) coordinates of the button; pass that on to pyautogui.moveTo(x,y) and pyautogui.click(), and it might work. You might have to describe the action you are trying to get PyAutoGUI to do a little better.",0.3869120172231254,False,1,5873 -2018-12-24 13:58:52.250,extracting text just after a particular tag using beautifulsoup?,"I need to extract the text just after a strong tag from the HTML page given below. How can I do it using Beautiful Soup? It is causing me problems as it doesn't have any class or id, so the only way to select this tag is by using text. -{strong}Name:{/strong} Sam smith{br} -Required result -Sam smith","Thanks for all your answers, but I was able to do this as follows: -b_el = soup.find('strong',text='Name:') -print(b_el.next_sibling) -This works fine for me. This prints just the next sibling; how can I print the next 2 siblings, is there any way?
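-A short sketch for that follow-up, using the next_siblings generator (the HTML string mirrors the one in the question): -from itertools import islice -from bs4 import BeautifulSoup -soup = BeautifulSoup('<strong>Name:</strong> Sam smith<br>', 'html.parser') -b_el = soup.find('strong', text='Name:') -for sib in islice(b_el.next_siblings, 2):  # the first two siblings after the tag -    print(sib)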
",-0.3869120172231254,False,1,5874 -2018-12-25 10:26:24.547,How to train your own model in AWS Sagemaker,"I just started with AWS and I want to train my own model with my own dataset. I have my model as a Keras model with a TensorFlow backend in Python. I read some documentation and it says I need a Docker image to load my model. So, how do I convert a Keras model into a Docker image? I searched through the internet but found nothing that explained the process clearly. How do I make a Docker image of a Keras model, how do I load it into SageMaker, and also how do I load my data from an h5 file into an S3 bucket for training? Can anyone please give me a clear explanation?","You can convert your Keras model to a tf.estimator and train using the TensorFlow framework estimators in SageMaker. -This conversion is pretty basic though; I reimplemented my models in TensorFlow using the tf.keras API, which makes the model nearly identical, and trained with the SageMaker TF estimator in script mode. -My initial approach using pure Keras models was based on bring-your-own-algo containers, similar to the answer by Matthew Arthur.",0.0,False,1,5875 -2018-12-25 21:14:39.453,Installing Python Dependencies locally in project,"I am coming from NodeJS and learning Python, and I was wondering how to properly install the packages in requirements.txt locally in the project. -For Node, this is done by managing and installing the packages in package.json via npm install. However, the convention for Python projects seems to be to add packages to a directory called lib. When I do pip install -r requirements.txt, I think this does a global install on my computer, similar to Node's npm install -g global install. How can I install the dependencies of my requirements.txt file in a folder called lib?","Use this command: -pip install -r requirements.txt -t lib",1.2,True,1,5876 -2018-12-26 11:44:32.850,P4Python check if file is modified after check-out,I need to check in a file which is in the client workspace. Before check-in I need to verify whether the file has been changed. Please tell me how to check this.,"Use the p4 diff -sr command. This will do a diff of opened files and return the names of ones that are unchanged.
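-Since the question is about P4Python specifically, the same check from Python might look like this (a sketch, assuming a P4 connection configured via the usual environment): -from P4 import P4 -p4 = P4() -p4.connect() -unchanged = p4.run('diff', '-sr')  # opened files whose content is unchanged -p4.disconnect()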
",1.2,True,1,5877 -2018-12-26 21:26:16.360,How can I source two paths for the ROS environmental variable at the same time?,"I have a problem with using the rqt_image_view package in ROS. Each time I type rqt_image_view or rosrun rqt_image_view rqt_image_view in the terminal, it returns: -Traceback (most recent call last): -File ""/opt/ros/kinetic/bin/rqt_image_view"", line 16, in -plugin_argument_provider=add_arguments)) -File ""/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/main.py"", line 59, in main -return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH']))) -File ""/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/main.py"", line 338, in main -from python_qt_binding import QT_BINDING -ImportError: cannot import name QT_BINDING -In the /.bashrc file, I have these source lines: -source /opt/ros/kinetic/setup.bash -source /home/kelu/Dropbox/GET_Lab/leap_ws/devel/setup.bash --extend -source /eda/gazebo/setup.bash --extend -They are the default path of ROS, my own workspace, and the robot simulator of our university. I must use all of them. I have already finished many projects with this environment variable setting. However, when I want to use the rqt_image_view package today, it returns the above error info. -When I run echo $ROS_PACKAGE_PATH, I get the return: -/eda/gazebo/ros/kinetic/share:/home/kelu/Dropbox/GET_Lab/leap_ws/src:/opt/ros/kinetic/share -And echo $PATH -/usr/local/cuda/bin:/opt/ros/kinetic/bin:/usr/local/cuda/bin:/usr/local/cuda/bin:/home/kelu/bin:/home/kelu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin -Then, if I only source /opt/ros/kinetic/setup.bash, the rqt_image_view package runs!! -It seems that, if I want to use rqt_image_view, I cannot source both /opt/ros/kinetic/setup.bash and /home/kelu/Dropbox/GET_Lab/leap_ws/devel/setup.bash at the same time. -Could someone tell me how to fix this problem? I have already searched Google for 5 hours and haven't found a solution.","Different solutions to try: -It sounds like the first path, /eda/gazebo/ros/kinetic/share or /home/kelu/Dropbox/GET_Lab/leap_ws/src, has an rqt_image_view package that is being used. Try to remove that dependency. -Have you tried switching the order of the files being sourced? This depends on how the rqt_image_view package was built, such as from source or through a package manager. -Initially, it sounds like there is a problem with the paths being searched or the wrong package being run, since the package works with the default ROS environment setup.",0.0,False,1,5878 -2018-12-27 09:49:47.840,how to constrain scipy curve_fit in positive result,"I'm using scipy curve_fit to fit a curve to retention data; however, I found the fitted line may produce negative numbers. How can I add some constraint? -The 'bounds' argument only constrains the parameters, not the resulting y.","One of the simpler ways to keep the fitted values positive is to make a log transformation: get the best fit for log-transformed y, then apply the exponential transformation back for the actual fit error or for any new value prediction.
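-A minimal sketch of that log-space fit (it assumes arrays x and y of observations with positive y; the exponential-decay model and starting values are just an illustration): -import numpy as np -from scipy.optimize import curve_fit -def log_model(x, a, b): -    return np.log(a) + b * x  # the model y = a * exp(b * x), written in log space -popt, _ = curve_fit(log_model, x, np.log(y), p0=(1.0, -0.1)) -a, b = popt -y_pred = a * np.exp(b * x)  # predictions are always positive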
",0.0,False,1,5879 -2018-12-27 10:57:53.617,Vpython using Spyder : how to prevent browser tab from opening?,"I am using the vpython library in Spyder. After importing the library, when I call a simple function like print('x') or carry out any assignment operation and execute the program, a browser tab named localhost with a port address immediately opens up, and I get the output in the console (if I used the print function). -I would like to know if there is any option to prevent the tab from opening, and whether it is possible to make the tab open only when it is required. -PS: I am using Windows 10, Chrome as the browser, Python 3.5 and Spyder 3.1.4.",There is work in progress to prevent the opening of a browser tab when there are no 3D objects or graphs to display. I don't know when this will be released.,0.0,False,1,5880 -2018-12-27 16:54:21.267,ImportError: cannot import name 'AFAVSignature',"I get this error, after already having installed autofocus, when I try to run a .py file from the command line that contains the line: -from autofocus import Autofocus2D -Output: -ImportError: cannot import name 'AFAVSignature' -Is anyone familiar with this package and how to import it? -Thanks","It doesn't look like the library is supported for Python 3. I was getting the same error, but I removed that line from init.py and found that there was another error with something like 'print e' not working, so I put the line back in and imported with Python 2, and it worked.",0.0,False,1,5881 -2018-12-28 00:04:02.473,how can I find out which python virtual environment I am using?,I have several virtual environments on my computer and sometimes I am in doubt about which Python virtual environment I am using. Is there an easy way to find out which virtual environment I am connected to?,"Usually it's set to display in your prompt. You can also try typing which python or which pip in your terminal to see if it points to your venv location, and which one. (Use where instead of which on Windows.)",0.9974579674738372,False,2,5882 -2018-12-28 00:04:02.473,how can I find out which python virtual environment I am using?,I have several virtual environments on my computer and sometimes I am in doubt about which Python virtual environment I am using. Is there an easy way to find out which virtual environment I am connected to?,"From a shell prompt, you can just do echo $VIRTUAL_ENV (or in Windows cmd.exe, echo %VIRTUAL_ENV%). -From within Python, sys.prefix provides the root of your Python installation (the virtual environment if active), and sys.executable tells you which Python executable is running your script.",0.9903904942256808,False,2,5882 -2018-12-30 14:34:30.510,how to delete django relation and rebuild model,"I've made a mistake with my Django app and messed up my model. -I want to delete it and then recreate it - how do I do that? -I get this when I try to migrate - I just want to drop it: -relation ""netshock_todo"" already exists -Thanks in advance","Delete all of your migration files except __init__.py. -Then go to the database and find the migrations table; delete all of its rows. Then run the makemigrations and migrate commands.",1.2,True,1,5883 -2018-12-31 14:33:34.473,Scrapy shell doesn't crawl web page,"I am trying to use Scrapy shell to figure out the selectors for zone-h.org. I run scrapy shell 'webpage', and afterwards I tried to view the content to be sure that it is downloaded. But all I can see is a dash icon (-). It doesn't download the page. I tried to enter the website to check if my connection to the website was somehow blocked, but it was reachable. I tried setting the user agent to something more generic, like Chrome, but no luck there either. The website is blocking me somehow, but I don't know how I can bypass it. I dug through the website to see if they block crawling, and it doesn't say crawling it is forbidden. Can anyone help out?","Can you use scrapy shell ""webpage"" on another webpage that you know works/doesn't block scraping? -Have you tried using the view(response) command to open up what scrapy sees in a web browser? -When you go to the webpage using a normal browser, are you redirected to another, final homepage? -- if so, try using the final homepage's URL in your scrapy shell command -Do you have firewalls that could interfere with a Python/command-line app connecting to the internet?",0.0,False,1,5884 -2019-01-03 23:22:36.667,How to add to pythonpath in virtualenvironment,"On my Windows machine, I created a virtual environment in conda where I run Python 3.6. I want to permanently add a folder to the virtual Python path environment. If I append something to sys.path, it is lost on exiting Python. -Outside of my virtual environment, I can just add to user variables by going to advanced system settings. I have no idea how to do this within my virtual environment. -Any help is much appreciated.","If you are on Windows 10+, this should work: -1) Click on the Windows button on the screen or on the keyboard, both in the bottom left section. -2) Type ""Environment Variables"" (without the quotation marks, of course). -3) Click on the option that says something like ""Edit the System Environment Variables"" -4) Click on the ""Advanced"" tab, and then click ""Environment Variables"" (near the bottom) -5) Click ""Path"" in the top box - it should be the 3rd option - and then click ""Edit"" (the top one) -6) Click ""New"" at the top, and then add the path to the folder you want to include.
-7) Click ""OK"" at the bottom of all the pages that were opened as a result of the above-described actions, to save. -That should work; please let me know in the comments if it doesn't.",-0.2012947653214861,False,1,5885 -2019-01-04 08:03:05.297,Do Dash apps reload all data upon client log in?,"I'm wondering how a Dash app works in terms of loading data, parsing it and doing initial calculations when serving a client who logs onto the website. -For instance, my app initially loads a bunch of static local CSV data, parses a bunch of dates and loads them into a few pandas data frames. This data is then displayed on a map for the client. -Does the app have to reload/parse all of this data every time a client logs onto the website? Or does the Dash server load all the data only the first time it is instantiated and then just dish it out every time a client logs on? -If the data reloads every time, I would then use quick parsers like udatetime; but if not, I'd prefer to use a convenient parser like pendulum, which isn't as efficient (but that wouldn't matter if it only parses once). -I hope that question makes sense. Thanks in advance!","The only thing that is called on every page load is the function you can assign to app.layout. This is useful if you want to display dynamic content like the current date on your page. -Everything else is just executed once, when the app is starting. -This means if you load your data outside the app.layout (which I assume is the case), everything is loaded just once.",1.2,True,1,5886 -2019-01-05 23:50:56.660,How do i implement Logic to Django?,"So I have an assignment to build a web interface for a smart sensor. -I've already written the Python code to read the data from the sensor and write it into sqlite3, control the sensor, etc. -I've built the HTML/CSS template and implemented it in Django. -My goal is to run the sensor-reading script parallel to the Django interface on the same server, so the server will do all the communication with the sensor and the user will be able to read and configure the sensor from the web interface. (Same logic as modern routers: control and configure from a web interface.) -Q: Where do I put my sensor_ctl.py script in my Django project, and how do I make it run independently on the server (to read sensor data 24/7)? -Q: Where in my Django project do I use my classes and methods from sensor_ctl.py to write/read data to my Django database instead of the local sqlite3 database (that I've used to test sensor_ctl.py)?","Place your code in the app/appname/management/commands folder. Use the official guide for management commands. Then you will be able to use your custom command like this: -./manage.py getsensorinfo -So when you have this command registered, you can just put it in cron and it will be executed every minute. -Secondly, you need to rewrite your code to use Django ORM models, like this: -Stat.objects.create(temp1=60,temp2=70) instead of INSERT INTO....
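-A minimal sketch of such a command (in app/appname/management/commands/getsensorinfo.py; the sensor_ctl.read() helper is a hypothetical stand-in for whatever the existing script exposes): -from django.core.management.base import BaseCommand -from appname.models import Stat -import sensor_ctl -class Command(BaseCommand): -    help = 'Read the sensor once and store the values' -    def handle(self, *args, **options): -        temp1, temp2 = sensor_ctl.read()  # hypothetical helper from the question's script -        Stat.objects.create(temp1=temp1, temp2=temp2)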
Thank you.","You need hosting service that able to install Chrome, chromedriver and other dependencies. Find for Virtual Private hosting (VPS), or Dedicated Server or Cloud Hosting but not Shared hosting.",0.0,False,1,5888 -2019-01-06 10:28:46.997,How do I root in python (other than square root)?,"I'm trying to make a calculator in python, so when you type x (root) y it will give you the x root of y, e.g. 4 (root) 625 = 5. -I'm aware of how to do math.sqrt() but is there a way to do other roots?","If you want to 625^(1/4){which is the same as 4th root of 625} -then you type 625**(1/4) -** is the operator for exponents in python. -print(625**(1/4)) -Output: -5.0 -To generalize: -if you want to find the xth root of y, you do: -y**(1/x)",0.6730655149877884,False,1,5889 -2019-01-08 17:44:43.800,TF-IDF + Multiple Regression Prediction Problem,"I have a dataset of ~10,000 rows of vehicles sold on a portal similar to Craigslist. The columns include price, mileage, no. of previous owners, how soon the car gets sold (in days), and most importantly a body of text that describes the vehicle (e.g. ""accident free, serviced regularly""). -I would like to find out which keywords, when included, will result in the car getting sold sooner. However I understand how soon a car gets sold also depends on the other factors especially price and mileage. -Running a TfidfVectorizer in scikit-learn resulted in very poor prediction accuracy. Not sure if I should try including price, mileage, etc. in the regression model as well, as it seems pretty complicated. Currently am considering repeating the TF-IDF regression on a particular segment of the data that is sufficiently huge (perhaps Toyotas priced at $10k-$20k). -The last resort is to plot two histograms, one of vehicle listings containing a specific word/phrase and another for those that do not. The limitation here would be that the words that I choose to plot will be based on my subjective opinion. -Are there other ways to find out which keywords could potentially be important? Thanks in advance.","As you mentioned you could only so much with the body of text, which signifies the amount of influence of text on selling the cars. -Even though the model gives very poor prediction accuracy, you could ahead to see the feature importance, to understand what are the words that drive the sales. -Include phrases in your tfidf vectorizer by setting ngram_range parameter as (1,2) -This might gives you a small indication of what phrases influence the sales of a car. -If would also suggest you to set norm parameter of tfidf as None, to check if has influence. By default, it applies l2 norm. -The difference would come based the classification model, which you are using. Try changing the model also as a last option.",1.2,True,1,5890 -2019-01-09 15:12:08.163,"Linux Jupyter Notebook : ""The kernel appears to have died. It will restart automatically""","I am using the PYNQ Linux on Zedboard and when I tried to run a code on Jupyter Notebook to load a model.h5 I got an error message: -""The kernel appears to have died. It will restart automatically"" -I tried to upgrade keras and Jupyter but still have the same error -I don't know how to fix this problem ?",Model is too large to be loaded into memory so kernel has died.,0.0,False,1,5891 -2019-01-09 22:59:39.340,Difference between Python Interpreter and IDLE?,"For homework in my basic python class, we have to start python interpreter in interactive mode and type a statement. Then, we have to open IDLE and type a statement. 
I understand how to write statements in both, but I can't quite tell them apart. I see that there are two different desktop apps for python, one being the python 3.7 (32-bit), and the other being IDLE. Which one is the interpreter, and how do I get it in interactive mode? Also, when I do open IDLE, do I put my statement directly in IDLE, or do I open a 'new file' and do it like that? I'm just a bit confused about the differences between them all. But I do really want to learn this language! Please help!","Python, unlike some languages, can be written one line at a time, with you getting feedback after every line. This is called interactive mode. You will know you are in interactive mode if you see "">>>"" on the far left side of the window. This mode is really only useful for doing small tasks you don't think will come up again. -Most developers write a whole program at once, then save it with a name that ends in "".py"" and run it in an interpreter to get the results.",1.2,True,1,5892 -2019-01-10 15:30:10.413,How to handle SQL dump with Python,"I received a data dump of the SQL database. -The data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python. -Can anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. -TLDR; Received an .sql file and have no clue how to process/analyze the data that's in the file in Python. I need help with the necessary steps to make the .sql usable in Python.","Eventually I had to install MAMP to create a local mysql server. I imported the SQL dump with a program like SQLyog that lets you edit SQL databases. -This made it possible to import the SQL database in Python using SQLAlchemy, MySQLconnector and Pandas.",0.3869120172231254,False,2,5893 -2019-01-10 15:30:10.413,How to handle SQL dump with Python,"I received a data dump of the SQL database. -The data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python. -Can anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. -TLDR; Received an .sql file and have no clue how to process/analyze the data that's in the file in Python. I need help with the necessary steps to make the .sql usable in Python.","It would be an extraordinarily difficult process to try to construct any sort of Python program that would be capable of parsing the SQL syntax of such a dump-file and to try to do anything whatsoever useful with it. -""No. Absolutely not. Absolute nonsense."" (And I have over 30 years of experience, including senior management.) You need to go back to your team, and/or to your manager, and look for a credible way to achieve your business objective ... because, ""this isn't it."" -The only credible thing that you can do with this file is to load it into another mySQL database ... and, well, ""couldn't you have just accessed the database from which this dump came?"" Maybe so, maybe not, but ""one wonders."" -Anyhow – your team and its management need to ""circle the wagons"" and talk about your credible options. 
Because, the task that you've been given, in my professional opinion, ""isn't one."" Don't waste time – yours, or theirs.",0.2012947653214861,False,2,5893 -2019-01-10 18:42:54.360,Interfacing a QR code recognition to a django database,"I'm coming to you with the following issue: -I have a bunch of physical boxes onto which I stick QR codes generated using a python module named qrcode. In a nutshell, what I would like to do is every time someone wants to take the object contained in a box, he scans the qr code with his phone, then takes it and puts it back when he is done, not forgetting to scan the QR code again. -Pretty simple, isn't it? -I already have a django table containing all my objects. -Now my question is related to the design. I suspect the easiest way to achieve that is to have a POST request link in the QR code which will create a new entry in a table with the name of the object that has been picked or put back, and the time (I would like to store this information). -If that's the correct way to do it, how would you approach it? I'm not too sure I see how to make a POST request with a QR code. Would you have any idea? -Thanks. -PS: Another alternative I can think of would be to put a link in the QR code to a form with a dummy button the user would click on. Once clicked, the button would update the database. But I would find a solution without any button more convenient...","The question boils down to a few choices: (a) what data do you want to encode into the QR code; (b) what app will you use to scan the QR code; and (c) how do you want the app to use / respond to the encoded data. -If you want your users to use off-the-shelf QR code readers (like free smartphone apps), then encoding a full URL to the appropriate API on your backend makes sense. Whether this should be a GET or POST depends on the QR code reader. I'd expect most to use GET, but you should verify that for your choice of app. That should be functionally fine, if you don't have any concerns about who should be able to scan the code. -If you want more control, e.g. you'd like to keep track of who scanned the code or other info not available to the server side just from a static URL request, you need a different approach. Something like, store the item ID (not URL) in the QR code; create your own simple QR code scanner app (many good examples exist) and add a little extra logic to that client, like requiring the user to log in with an ID + password, and build the URL dynamically from the item ID and the user ID. Many security variations are possible (like a JWT token) -- how you do that won't be dictated by the contents of the QR code. You could do a lot of other things in that QR code scanner / client, like add GPS location, ask the user to indicate why or where they're taking the item, etc. -So you can choose between a simple way with no controls, and a more complex way that would allow you to layer in whatever other controls and extra data you need.",1.2,True,1,5894 -2019-01-11 08:09:37.980,How can I read a file having different columns for each row?,"my data looks like this. -0 199 1028 251 1449 847 1483 1314 23 1066 604 398 225 552 1512 1598 -1 1214 910 631 422 503 183 887 342 794 590 392 874 1223 314 276 1411 -2 1199 700 1717 450 1043 540 552 101 359 219 64 781 953 -10 1707 1019 463 827 675 874 470 943 667 237 1440 892 677 631 425 -How can I read this file structure in python? I want to extract a specific column from a row. For example, if I want to extract the value in the second row, second column, how can I do that? 
I've tried 'loadtxt' using data type string. But it requires string index slicing, so that I could not proceed because each column has different digits. Moreover, each row has a different number of columns. Can you guys help me? -Thanks in advance.","Use something like this to split it -split2=[] -split1=txt.split(""\n"") -for item in split1: - split2.append(item.split("" ""))",0.0,False,1,5895 -2019-01-11 11:02:30.650,How to align training and test set when using pandas `get_dummies` with `drop_first=True`?,"I have a data set from telecom company having lots of categorical features. I used the pandas.get_dummies method to convert them into one hot encoded format with drop_first=True option. Now how can I use the predict function, test input data needs to be encoded in the same way, as the drop_first=True option also dropped some columns, how can I ensure that encoding takes place in similar fashion. -Data set shape before encoding : (7043, 21) -Data set shape after encoding : (7043, 31)","When not using drop_first=True you have two options: - -Perform the one-hot encoding before splitting the data in training and test set. (Or combine the data sets, perform the one-hot encoding, and split the data sets again). -Align the data sets after one-hot encoding: an inner join removes the features that are not present in one of the sets (they would be useless anyway). train, test = train.align(test, join='inner', axis=1) - -You noted (correctly) that method 2 may not do what you expect because you are using drop_first=True. So you are left with method 1.",0.3869120172231254,False,1,5896 -2019-01-11 19:30:04.483,Python anytree application challenges with my jupyter notebook ​,"I am working in python 3.7.0 through a 5.6.0 jupyter notebook inside Anaconda Navigator 1.9.2 running in a windows 7 environment. It seems like I am assuming a lot of overhead, and from the jupyter notebook, python doesn’t see the anytree application module that I’ve installed. (Anytree is working fine with python from my command prompt.) -I would appreciate either 1) IDE recommendations or 2) advise as to how to make my Anaconda installation better integrated. -​","The core problem with my python IDE environment was that I could not utilize the functions in the anytree module. The anytree functions worked fine from the command prompt python, but I only saw error messages from any of the Anaconda IDE portals. -Solution: -1) From the windows start menu, I opened Anaconda Navigator, ""run as administrator."" -2) Select Environments. My application only has the single environment, “base”, -3.) Open selection “terminal”, and you then have a command terminal window in that environment. -4.) Execute [ conda install -c techtron anytree ] and the anytree module functions are now available. -5.) Execute [ conda update –n base –all ] and all the modules are updated to be current.",1.2,True,1,5897 -2019-01-12 03:01:39.153,How do I get VS Code to recognize modules in virtual environment?,"I set up a virtual environment in python 3.7.2 using ""python -m venv foldername"". I installed PIL in that folder. Importing PIL works from the terminal, but when I try to import it in VS code, I get an ImportError. Does anyone know how to get VS code to recognize that module? -I've tried switching interpreters, but the problem persists.","I ended up changing the python.venvpath setting to a different folder, and then moving the virtual env folder(The one with my project in it) to that folder. 
After restarting VS code, it worked.",0.0,False,1,5898 -2019-01-15 06:52:45.623,Good resources for video processing in Python?,"I am using the yolov3 model running on several surveillance cameras. Besides this I also run tensorflow models on these surveillance streams. I feel a little lost when it comes to using anything but opencv for rtsp streaming. -So far I haven't seen people use anything but opencv in python. Are there any places I should be looking into? Please feel free to chime in. -Sorry if the question is a bit vague, but I really don't know how to put this better. Feel free to edit, mods.",Of course there are alternatives to OpenCV in Python when it comes to video capture but in my experience none of them performed better,1.2,True,1,5899 -2019-01-15 06:54:00.607,Automate File loading from s3 to snowflake,"New JSON files are dumped into an s3 bucket daily. I have to create a solution which picks up the latest file when it arrives, parses the JSON and loads it into Snowflake Datawarehouse. May someone please share your thoughts on how we can achieve this?","There are some aspects to be considered, such as: is it batch or streaming data; do you want to retry loading the file in case the data or format is wrong; and do you want to make it a generic process able to handle different file formats/file types (csv/json) and stages. -In our case we have built a generic s3 to Snowflake load using Python and Luigi, and also implemented the same using SSIS but for csv/txt files only.",0.0,False,1,5900 -2019-01-15 20:16:34.613,pythonnet clr is not recognized in jupyter notebook,"I have installed pythonnet to use the clr package for a specific API, which only works with clr in python. Although it works without any issues in my python script (using the command line or regular .py files), in jupyter notebook import clr gives this error: ModuleNotFoundError: No module named 'clr'. Any idea how to address this issue?",Here is a simple suggestion: compare sys.path in both cases and see the differences. Your ipython kernel in jupyter is probably searching in different directories than your normal python process.,1.2,True,2,5901 -2019-01-15 20:16:34.613,pythonnet clr is not recognized in jupyter notebook,"I have installed pythonnet to use the clr package for a specific API, which only works with clr in python. Although it works without any issues in my python script (using the command line or regular .py files), in jupyter notebook import clr gives this error: ModuleNotFoundError: No module named 'clr'. Any idea how to address this issue?","Since you intend to use clr in jupyter, in a jupyter cell you could also run -!pip install pythonnet for the first time, and every later time if the vm is frequently nuked",0.0,False,2,5901 -2019-01-15 20:47:18.657,"Tried importing Java 8 JDK for PySpark, but PySpark still won't let me start a session","Ok here's my basic information before I go on: -MacBook Pro: OS X 10.14.2 -Python Version: 3.6.7 -Java JDK: V8.u201 -I'm trying to install the Apache Spark Python API (PySpark) on my computer. I did a conda installation: conda install -c conda-forge pyspark -It appeared that the module itself was properly downloaded because I can import it and call methods from it. However, opening the interactive shell with myuser$ pyspark gives the error: -No Java runtime present, requesting install. -Ok that's fine. I went to Java's download page to get the current JDK, in order to have it run, and downloaded it on Safari. 
Chrome apparently doesn't support certain plugins for it to work (although initially I did try to install it with Chrome). Still didn't work. -Ok, I just decided to start trying to use it. -from pyspark.sql import SparkSession It seemed to import the module correctly because it was auto recognizing SparkSession's methods. However, -spark = SparkSession.builder.getOrCreate() gave the error: -Exception: Java gateway process exited before sending its port number -Reinstalling the JDK doesn't seem to fix the issue, and now I'm stuck with a module that doesn't seem to work because of an issue with Java that I'm not seeing. Any ideas of how to fix this problem? Any and all help is appreciated.",This problem comes with Spark 2.4. Please try Spark 2.3.,0.0,False,1,5902 -2019-01-16 08:53:00.437,Install python packages offline on server,I want to install some packages on a server which does not have access to the internet. So I have to take the packages to the server myself. But I do not know how I can install them.,"Download the package from the website and extract the tarball. -Then run python setup.py install",-0.2012947653214861,False,1,5903 -2019-01-17 08:51:46.440,Dask: delayed vs futures and task graph generation,"I have a few basic questions on Dask: - -Is it correct that I have to use Futures when I want to use dask for distributed computations (i.e. on a cluster)? -In that case, i.e. when working with futures, are task graphs still the way to reason about computations? If yes, how do I create them? -How can I generally, i.e. no matter if working with a future or with a delayed, get the dictionary associated with a task graph? - -As an edit: -My application is that I want to parallelize a for loop either on my local machine or on a cluster (i.e. it should work on a cluster). -As a second edit: -I think I am also somewhat unclear regarding the relation between Futures and delayed computations. -Thx","1) Yup. If you're sending the data through a network, you have to have some way of asking the computer doing the computing for you how's that number-crunching coming along, and Futures represent more or less exactly that. -2) No. With Futures, you're executing the functions eagerly - spinning up the computations as soon as you can, then waiting for the results to come back (from another thread/process locally, or from some remote you've offloaded the job onto). The relevant abstraction here would be a Queue (a Priority Queue, specifically). -3) For a Delayed instance, for instance, you could do some_delayed.dask, or for an Array, Array.dask; optionally wrap the whole thing in either dict() or vars(). I don't know for sure if it's reliably set up this way for every single API, though (I would assume so, but you know what they say about what assuming makes of the two of us...). -4) The simplest analogy would probably be: Delayed is essentially a fancy Python yield wrapper over a function; Future is essentially a fancy async/await wrapper over a function.",1.2,True,1,5904 -2019-01-19 00:00:55.483,Python how to get labels of a generated adjacency matrix from networkx graph?,"I have a networkx graph built from a python dataframe, and I've generated the adjacency matrix from it. -So basically, how do I get the labels of that adjacency matrix?","Assuming you refer to nodes' labels, networkx only keeps the indices when extracting a graph's adjacency matrix. Networkx represents each node as an index, and you can add more attributes if you wish. All of a node's attributes except for the index are kept in a dictionary. 
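-As a rough illustration, here is a minimal sketch of keeping that index-to-label dictionary next to the matrix (the node names are made up for the example): -import networkx as nx -G = nx.Graph() -G.add_edge(""alice"", ""bob"") -G.add_edge(""bob"", ""carol"") -# fix an explicit node order so rows/columns of the matrix are predictable -order = list(G.nodes()) -A = nx.to_numpy_array(G, nodelist=order) -# map each row/column index of A back to its node label -labels = {i: node for i, node in enumerate(order)} 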
When generating a graph's adjacency matrix only the indices are kept, so if you only wish to keep a single string per node, consider indexing nodes by that string when generating your graph.",1.2,True,2,5905 -2019-01-19 00:00:55.483,Python how to get labels of a generated adjacency matrix from networkx graph?,"I have a networkx graph built from a python dataframe, and I've generated the adjacency matrix from it. -So basically, how do I get the labels of that adjacency matrix?","If the adjacency matrix is generated without passing a nodelist, then you can call G.nodes to obtain the default nodelist, which should correspond to the rows of the adjacency matrix.",-0.2012947653214861,False,2,5905 -2019-01-20 12:48:34.697,How to wait for some time between user inputs in tkinter?,"I am making a GUI program where the user can draw on a canvas in Tkinter. What I want to do is that I want the user to be able to draw on the canvas and when the user releases Mouse-1, the program should wait for 1 second and clear the canvas. If the user starts drawing within that 1 second, the canvas should stay as it is. -I am able to get the user input fine. The draw function in my program is bound to B1-Motion. -I have tried things like inducing a time delay but I don't know how to check whether the user has started to draw again. -How do I check whether the user has started to draw again?","You can bind the mouse click event to a function that sets a bool to True or False, then use after to call a function after 1 second which, depending on that bool, clears the screen.",1.2,True,1,5906 -2019-01-21 21:13:07.617,Persistent Machine Learning,"I have a super basic machine learning question. I've been working through various tutorials and online classes on machine learning and the various techniques to learning how to use it, but what I'm not seeing is the persistent application piece. -So, for example, I train a network to recognize what a garden gnome looks like, but, after I run the training set and validate with test data, how do I persist the network so that I can feed it an individual picture and have it tell me whether the picture is of a garden gnome or not? Every tutorial seems to have you run through the training/validation sets without any notion as to how to host the network in a meaningful way for future use. -Thanks!",Use the python pickle library to dump your trained model onto your hard drive and then load it back and test it for persistent results.,0.0,False,1,5907 -2019-01-21 23:31:10.607,Is it possible to extract an SSRS report embedded in the body of an email and export to csv?,"We currently are receiving reports via email (I believe they are SSRS reports) which are embedded in the email body rather than attached. The reports look like images or snapshots; however, when I copy and paste the ""image"" of a report into Excel, the column/row format is retained and it pastes into Excel perfectly, with the columns and rows getting pasted into distinct columns and rows accordingly. So it isn't truly an image, as there is a structure to the embedded report. -Right now, someone has to manually copy and paste each report into excel (step 1), then import the report into a table in SQL Server (step 2). There are 8 such reports every day, so the manual copy/pasting from the email into excel is very time consuming. -The question is: is there a way - any way - to automate step 1 so that we don't have to manually copy and paste each report into excel? 
Is there some way to use python or some other language to detect the format of the reports in the emails, and extract them into .csv or excel files? -I have no code to show as this is more of a question of - is this even possible? And if so, any hints as to how to accomplish it would be greatly appreciated.","The most efficient solution is to have the SSRS administrator (or you, if you have permissions) set the subscription to send as CSV. To change this in SSRS, right click the report and then click Manage. Select ""Subscriptions"" on the left and then click Edit next to the subscription you want to change. Scroll down to Delivery Options and select CSV in the Render Format dropdown. Voila, you receive your report in the correct format and don't have to do any weird extraction.",0.0,False,1,5908 -2019-01-22 05:44:57.673,How to install sympy package in python,"I am a beginner to python, and I wanted to do symbolic computations. I came to know that with a sympy installation on our PC we can do symbolic computation. I have installed python 3.6 and I am using anaconda navigator, through which I am using spyder as an editor. Now I want to install the symbolic package sympy; how do I do that? -I checked some posts which say to use 'conda install sympy', but where do I type this? I typed it in the spyder editor and I am getting a syntax error. Thank you","In anaconda navigator: - -Click Environments (on the left) -Choose your environment (if you have more than one) -In the middle pick ""All"" from the dropdown (""Installed"" by default) -Write sympy in the search-box on the right -Check the package that shows up -Click apply",0.1352210990936997,False,2,5909 -2019-01-22 05:44:57.673,How to install sympy package in python,"I am a beginner to python, and I wanted to do symbolic computations. I came to know that with a sympy installation on our PC we can do symbolic computation. I have installed python 3.6 and I am using anaconda navigator, through which I am using spyder as an editor. Now I want to install the symbolic package sympy; how do I do that? -I checked some posts which say to use 'conda install sympy', but where do I type this? I typed it in the spyder editor and I am getting a syntax error. Thank you","To use conda install, open the Anaconda Prompt and enter the conda install sympy command. -Alternatively, navigate to the scripts sub-directory in the Anaconda directory, and run pip install sympy.",0.0,False,2,5909 -2019-01-22 18:26:43.977,tkinter.root.destroy and cv2.imshow - X Windows system error,"I found this rather annoying bug and I couldn’t find anything other than an unanswered question on the opencv website; hopefully someone with more knowledge about the two libraries will be able to point me in the right direction. -I won’t provide code because that would be beside the point of learning what causes the crash. -If I draw a tkinter window and then root.destroy() it, trying to draw a cv2.imshow window will result in an X Window System error as soon as the cv2.waitKey delay is over. I’ve tried to replicate this in different ways and it always gets to the error (error_code 3 request_code 15 minor_code 0). -It is worth noting that a root.quit() command won’t cause the same issue (as it is my understanding this method will simply exit the main loop rather than destroying the widgets). Also, while any cv2.imshow call will fail, trying to draw a new tkinter window will work just fine. -What resources are being shared among the two libraries? What does root.destroy() cause in the X environment to prevent any cv2 window from being drawn? 
-Debian Jessie - Python 3.4 - OpenCV 3.2.0","When you destroy the root window, it destroys all child windows as well. If cv2 uses a tkinter window or a child window of the root window, it will fail if you destroy the root window.",0.0,False,1,5910 -2019-01-22 23:09:52.430,How do I use Pyinstaller to make a Mac file on Windows?,"I am on Windows and I am trying to figure out how to use Pyinstaller to make a file (on Windows) for a Mac. -I have no trouble with Windows; I am just not sure how I would make a file for another OS on it. -What I tried in cmd was: pyinstaller -F myfile.py and I am not sure what to change to make a Mac compatible file.",Not possible without using a virtual machine,0.0,False,1,5911 -2019-01-23 03:02:55.387,Parsing list of URLs with regex patterns,"I have a large text file of URLs (>1 million URLs). The URLs represent product pages across several different domains. -I'm trying to parse out the SKU and product name from each URL, such as: - -www.amazon.com/totes-Mens-Mike-Duck-Boot/dp/B01HQR3ODE/ - - -totes-Mens-Mike-Duck-Boot -B01HQR3ODE - -www.bestbuy.com/site/apple-airpods-white/5577872.p?skuId=5577872 - - -apple-airpods-white -5577872 - - -I already have the individual regex patterns figured out for parsing out the two components of the URL (product name and SKU) for all of the domains in my list. This is nearly 100 different patterns. -While I've figured out how to test this one URL/pattern at a time, I'm having trouble figuring out how to architect a script which will read in my entire list, then go through and parse each line based on the relevant regex pattern. Any suggestions how to best tackle this? -If my input is one column (URL), my desired output is 4 columns (URL, domain, product_name, SKU).","While it is possible to roll this all into one massive regex, that might not be the easiest approach. Instead, I would use a two-pass strategy. Make a dict of domain names to the regex pattern that works for that domain. In the first pass, detect the domain for the line using a single regex that works for all URLs. Then use the discovered domain to look up the appropriate regex in your dict to extract the fields for that domain.",0.2012947653214861,False,1,5912 -2019-01-24 09:30:09.097,Python Azure function processing blob storage,"I am trying to make a pipeline using Data Factory in MS Azure that processes data in blob storage, then runs a python processing code/algorithm on the data and then sends it to another source. -My question here is: how can I do the same in Azure function apps? Or is there a better way to do it? -Thanks in advance. -Shyam",I created a Flask API and called my python code through that. And then put it in Azure as a web app and called the blob.,0.0,False,1,5913 -2019-01-24 11:46:02.647,Django Admin Interface - Privileges On Development Server,"I have an old project running (Django 1.6.5, Python 2.7) live for several years. I have to make some changes and have set up a working development environment with all the right django and python requirements (packages, versions, etc.) -Everything is running fine, except when I am trying to make changes inside the admin panel. I can log on fine, and looking at the database (sqlite3) I see my user has superuser privileges. However django says ""You have no permissions to change anything"" and thus doesn't even display any of the models registered for the admin interface. -I am using the same database that is running on the live server. 
There I have no issues at all (Live server also running in development mode with DEBUG=True has no issues) -> I can only see the history (My Change Log) - Nothing else -I have also created a new superuser - but same problem here. -I'd appreciate any pointers (Maybe how to debug this?)","Finally, I found the issue: -admin.autodiscover() -was commented out in the project's urls.py for some reason. (I may have done that trying to get the project to work in a more recent version of django.) So admin.site.register was never called and the app_dict was never filled. The index.html template of django.contrib.admin then returns - -You don't have permission to edit anything. - -or its equivalent translation (which I find confusing, given that the permissions are correct; it's just that no models were added to the admin dictionary). -I hope this may help anyone running into a similar problem",0.0,False,1,5914 -2019-01-24 19:31:18.407,How to handle EULA pop-up window that appears only on first login?,"I am new to Selenium. The web interface of our product pops up a EULA agreement which the user has to scroll down and accept before proceeding. This happens ONLY on initial login using that browser for that user. -I looked at the Selenium API but I am unable to figure out which one to use and how to use it. -Would much appreciate any suggestions in this regard. -I have played around with the IDE for Chrome but even over there I don't see anything that I can use for this. I am aware there is an 'if' command but I don't know how to use it to do something like: -if EULA-pops-up: - Scroll down and click 'accept' -proceed with rest of test.","You may disable the EULA if that is an option for you; I am sure there is a way to do it in the registry as well. In C:\Program Files (x86)\Google\Chrome\Application there should be a file called master_preferences. -Open the file and set: -require_eula to false",0.0,False,1,5915 -2019-01-25 09:21:41.077,Predicting values using trained MNB Classifier,"I am trying to train a model for sentiment analysis and below is my trained Multinomial Naive Bayes Classifier returning an accuracy of 84%. -I have been unable to figure out how to use the trained model to predict the sentiment of a sentence. For example, I now want to use the trained model to predict the sentiment of the phrase ""I hate you"". -I am new to this area and any help is highly appreciated.","I don't know the dataset or what the semantics of the individual dictionaries are, but you are training your model on a dataset which has the following form: -[[{""word"":True, ""word2"": False}, 'neg'], [{""word"":True, ""word2"": False}, 'pos']] - -That means your input is in the form of a dictionary, and the output in the form of a 'neg' label. If you want to predict, you need to input a dictionary in the form: - -{""I"": True, ""Hate"": False, ""you"": True}. - -Then: - -MNB_classifier.classify({""love"": True}) ->> 'neg' -or -MNB_classifier.classify_many([{""love"": True}]) ->> ['neg']",1.2,True,1,5916 -2019-01-25 11:29:23.027,Deliver python external libraries with script,"I want to use my script that uses the pandas library on another linux machine where there is no internet access or pip installed. -Is there a way to deliver the script with all its dependencies? -Thanks",Or set the needed dependencies in the script manually by appending sys.modules and pack together all the needed files.,0.0,False,1,5917 -2019-01-26 14:14:21.693,importing an entire folder of .py files into google colab,"I have a folder of . 
py files (a package made by me) which I have uploaded to my google drive. -I have mounted my google drive in colab but I still can not import the folder in my notebook as I do on my PC. -I know how to upload a single .py file into google colab and import it into my code, but I have no idea how to upload a folder of .py files and import it in a notebook, and this is what I need to do. -This is the code I used to mount drive: - -from google.colab import drive -drive.mount('/content/drive') -!ls 'drive/My Drive'","I found out how to do it. -After uploading all modules and packages into the directory which my notebook file is in, I changed colab's directory from ""/content"" to this directory and then I simply imported the modules and packages (folder of .py files) into my code",1.2,True,1,5918 -2019-01-27 06:38:41.497,How to redirect -progress option output of ffmpeg to stderr?,"I'm writing my own wrapping for ffmpeg on Python 3.7.2 now and want to use its ""-progress"" option to read current progress, since it's highly machine-readable. The problem is that the ""-progress"" option of ffmpeg accepts only file names and urls as its parameter. But I don't want to create additional files, nor to set up a whole web-server for this purpose. -I've googled a lot about it, but all the ""progress bars for ffmpeg"" projects rely on the generic stderr output of ffmpeg only. Other answers here on Stackoverflow and on Superuser are satisfied with just ""-v quiet -stats"", since ""progress"" is not a very convenient parameter name to google for exactly these cases. -The best solution would be to force ffmpeg to write its ""-progress"" output to a separate pipe, since there is some useful data in stderr as well regarding the file being encoded, and I don't want to throw it away with ""-v quiet"". Though if there is a way to redirect ""-progress"" output to stderr, it would be cool as well! Any pipe would be ok actually, I just can't figure out how to make ffmpeg write its ""-progress"" output not to a file on Windows. I tried ""ffmpeg -progress stderr ..."", but it just creates a file with this name.","-progress pipe:1 will write out to stdout, pipe:2 to stderr. If you aren't streaming from ffmpeg, use stdout.",1.2,True,1,5919 -2019-01-28 14:38:40.990,How can I check how often all list elements from a list B occur in a list A?,"I have a python list A and a python list B with words as list elements. I need to check how often the list elements from list B are contained in list A. Is there a python method for this, or how can I implement it efficiently? -The python intersection method only tells me that a list element from list B occurs in list A, but not how often.","You could convert list B to a set, so that checking if an element is in B is faster. -Then create a dictionary to count the number of times each element is in A, if the element is also in the set of B. -As mentioned in the comments, collections.Counter does the ""heavy lifting"" for you",0.0,False,1,5920 -2019-01-29 07:42:00.640,Can't install packages via pip or npm,"I'm trying to install some packages globally on my Mac. But I'm not able to install them via npm or pip, because I'll always get the message that the packages do not exist. For Python, I solved this by always using a virtualenv. But now I'm trying to install the @vue/cli via npm, but I'm not able to access it. The commands are working fine, but I'm just not able to access it. I think it has something to do with my $PATH, but I don't know how to fix that. 
-If I look in my Finder, I can find the @vue folder in /users/.../node_modules/. Does someone know how I can access this folder with the vue command in Terminal?","If it's a PATH problem: -1) Open up Terminal. -2) Run the following command: -sudo nano /etc/paths -3) Enter your password when prompted. -4) Check if the correct paths exist in the file or not. -5) Fix them, if needed. -6) Hit Control-X to quit. -7) Enter “Y” to save the modified buffer. -Everything should work fine now. If it doesn't, try re-installing NPM/PIP.",1.2,True,1,5921 -2019-01-31 10:19:40.180,"How to get disk space total, used and free using Python 2.7 without PSUtil","Is there a way I can get the following disk statistics in Python without using PSUtil? - -Total disk space -Used disk space -Free disk space - -All the examples I have found seem to use PSUtil, which I am unable to use for this application. -My device is a Raspberry PI with a single SD card. I would like to get the total size of the storage, how much has been used and how much is remaining. -Please note I am using Python 2.7.",You can do this with the os.statvfs function.,0.2012947653214861,False,1,5922 -2019-02-01 14:09:13.800,How can the same entity function as a parameter as well as an object?,"In the below operation, we are using a as an object as well as an argument. -a = ""Hello, World!"" - -print(a.lower()) -> a as an object -print(len(a)) -> a as a parameter - -May I know how exactly each operation differs in the way it is accessing a?","Everything in python (everything that can go on the rhs of an assignment) is an object, so what you can pass as an argument to a function IS an object, always. Actually, those are totally orthogonal concepts: you don't ""use"" something ""as an object"" - it IS an object - but you can indeed ""use it"" (pass it) as an argument to a function / method / whatever callable. - -May I know how exactly each operation differs in the way it is accessing a? - -Not by much actually (except for the fact they do different things with a)... -a.lower() is only syntactic sugar for str.lower(a) (obj.method() is syntactic sugar for type(obj).method(obj)), so in both cases you are ""using a as an argument"".",0.3869120172231254,False,1,5923 -2019-02-02 02:41:43.413,Loading and using a trained TensorFlow model in Python,"I trained a model in TensorFlow using the tf.estimator API, more specifically using tf.estimator.train_and_evaluate. I have the output directory of the training. How do I load my model from this and then use it? -I have tried using the tf.train.Saver class by loading the most recent ckpt file and restoring the session. However, then to call sess.run() I need to know what the name of the output node of the graph is so I can pass this to the fetches argument. What is the name of/how can I access this output node? Is there a better way to load and use the trained model? -Note that I have already trained and saved the model in a ckpt file, so please do not suggest that I use the simple_save function.","(Answering my own question) I realized that the easiest way to do this was to use the tf.estimator API. By initializing an estimator that warm starts from the model directory, it's possible to just call estimator.predict, pass the correct args (a prediction input_fn), and get the predictions immediately. 
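-For reference, a rough sketch of that flow (my_model_fn and my_predict_input_fn are placeholders for your own definitions, and 'output_dir' stands in for your training output directory): -import tensorflow as tf -# re-create the estimator; predict() restores the latest ckpt found in model_dir -estimator = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='output_dir') -# the input_fn must yield features in the same format used during training -for prediction in estimator.predict(input_fn=my_predict_input_fn): - print(prediction) 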
It's not required to deal with the graph variables in any way.",0.0,False,1,5924 -2019-02-02 08:14:24.520,Best way to map words with multiple spellings to a list of key words?,"I have a pile of ngrams of variable spelling, and I want to map each ngram to its best match word out of a list of known desired outputs. -For example, ['mob', 'MOB', 'mobi', 'MOBIL', 'Mobile'] maps to a desired output of 'mobile'. -Each input from ['desk', 'Desk+Tab', 'Tab+Desk', 'Desktop', 'dsk'] maps to a desired output of 'desktop' -I have about 30 of these 'output' words, and a pile of about a few million ngrams (much fewer unique). -My current best idea was to get all unique ngrams, copy and paste that into Excel and manually build a mapping table, but that took too long and isn't extensible. -My second idea was something with fuzzy (fuzzy-wuzzy) matching, but it didn't match well. -I'm not experienced in Natural Language terminology or libraries at all so I can't find an answer to how this might be done better, faster and more extensibly when the number of unique ngrams increases or 'output' words change. -Any advice?","The classical approach would be to build a ""Feature Matrix"" for each ngram. Each word maps to an Output, which is a categorical value between 0 and 29 (one for each class). -Features can for example be the cosine similarity given by fuzzy wuzzy, but typically you need many more. Then you train a classification model based on the created features. This model can typically be anything, a neural network, a boosted tree, etc.",0.1352210990936997,False,1,5925 -2019-02-04 21:09:00.383,Use VRAM (graphics card memory) in pygame for images,"I'm programming a 2D game with Python and Pygame and now I want to use my internal graphics memory to load images to. -I have an Intel HD graphics card (2GB VRAM) and a Nvidia GeForce (4GB VRAM). -I want to use one of them to load images from the hard drive to it (to use the images from there). -I thought it might be a good idea as I (almost) don't need the VRAM otherwise. -Can you tell me if and how it is possible? I do not need GPU-Acceleration.","You have to create your window with the FULLSCREEN, DOUBLEBUF and HWSURFACE flags. -Then you can create and use a hardware surface by creating it with the HWSURFACE flag. -You'll also have to use pygame.display.flip() instead of pygame.display.update(). -But even pygame itself discourages using hardware surfaces, since they have a bunch of disadvantages, like -- no mouse cursor -- only working in fullscreen (at least that's what pygame's documentation says) -- you can't easily manipulate the surfaces -- they may not work on all platforms -(and I never got transparency to work with them). -And it's not even clear if you really get a notable performance boost. -Maybe they'll work better in a future pygame release when pygame switches to SDL 2 and uses SDL_TEXTURE instead of SDL_HWSURFACE, who knows....",1.2,True,1,5926 -2019-02-05 02:42:03.343,Installed Anaconda to macOS that has Python2.7 and 3.7. Pandas only importing to 2.7; how can I import to 3.7?,"New to coding; I just downloaded the full Anaconda package for Python 3.7 onto my Mac. However, I can't successfully import Pandas into my program on SublimeText when running my Python3.7 build. It DOES work though, when I change the build to Python 2.7. Any idea how I can get it to properly import when running 3.7 on SublimeText? I'd just like to be able to execute the code within Sublime. -Thanks!","Uninstall python 2.7. 
Unless you use it, it's better to uninstall it.",0.0,False,1,5927 -2019-02-05 12:40:24.703,How to check learning feasibility on a binary classification problem with Hoeffding's inequality/VC dimension with Python?,"I have a simple binary classification problem, and I want to assess the learning feasibility using Hoeffding's Inequality and also, if possible, the VC dimension. -I understand the theory, but I am still stuck on how to implement it in Python. -I understand that the In-sample Error (Ein) is the training error. The Out-of-sample Error (Eout) is the error on the test subsample, I guess. -But how do I plot the difference between these two errors with the Hoeffding bound?","Well, here is how I handled it: I generate multiple train/test samples, run the algorithm on them, calculate Ein as the train set error and Eout as estimated by the test set error, and calculate how many times their difference exceeds the value of epsilon (for a range of epsilons). Then I plot the curve of these rates of exceeding epsilon together with the curve of the right side of the Hoeffding/VC inequality, so I can see if the differences curve is always under the Hoeffding/VC bound curve; this informs me about the learning feasibility.",1.2,True,1,5928 -2019-02-06 20:20:54.933,python keeps saying that 'imput is undefined. how do I fix this?,"Please help me with this. I'd really appreciate it. I have tried a lot of things but nothing is working. Please suggest any ideas you have. -This is what it keeps saying: - name = imput('hello') -NameError: name 'imput' is not defined","You misspelled input as imput. imput() is not a function that python recognizes - thus, it assumes it's the name of some variable, searches for wherever that variable was declared, and finds nothing. So it says ""this is undefined"" and raises an error.",1.2,True,1,5929 -2019-02-07 02:36:18.047,Understanding each component of a web application architecture,"Here is a scenario for a system where I am trying to understand what is what: -I'm Joe, a novice programmer and I'm broke. I've got a Flask app and one physical machine. Since I'm broke, I cannot afford another machine for each piece of my system, thus the web server, application and database all live on my one machine. -I've never deployed an app before, but I know that a server can refer to a machine or software. From here on, let's call the physical machine the Rack. I've loaded an instance of MongoDB on my machine and I know that is the Database Server. In order to handle API requests, I need something on the rack that will handle HTTP/S requests, so I install and run an instance of NGINX on it and I know that this is the Web Server. However, my web server doesn't know how to run the app, so I do some research and learn about WSGI and come to find out I need another component. So I install and run an instance of Gunicorn and I know that this is the WSGI Server. -At this point I have a rack that is home to a web server to handle API calls (really just acts as a reverse proxy and pushes requests to the WSGI server), a WSGI server that serves up dynamic content from my app and a database server that stores client information used by the app. -I think I've got my head on straight, then my friend asks ""Where is your Application Server?"" -Is there an application server in this configuration? Do I need one?","Any basic server architecture has three layers. On one end is the web server, which fulfills requests from clients. The other end is the database server, where the data resides. 
-In between these two is the application server. It consists of the business logic required to interact with the web server to receive the request, and then with the database server to perform operations. -In your configuration, the WSGI server/Flask app is the application server. -Most application servers can double up as web servers.",0.0,False,1,5930 -2019-02-07 04:21:01.713,How keras model H5 works in theory,After training the trained model will be saved in H5 format. But I don't know how that H5 file can be used as a classifier to classify new data. How does an H5 model work in theory when classifying new data?,"When you save your model as an h5-file, you save the model structure, all its parameters and further information like the state of your optimizer and so on. It is just an efficient way to save huge amounts of information. You could use json or xml file formats to do this as well. -You can't classify anything using only this file (it is not executable). You have to rebuild the graph as a tensorflow graph from this file. To do so you simply use the load_model() function from keras, which returns a keras.models.Model object. Then you can use this object to classify new data, with the keras predict() function.",0.2012947653214861,False,1,5931 -2019-02-07 19:36:54.707,Using pyautogui with multiple monitors,"I'm trying to use the pyautogui module for python to automate mouse clicks and movements. However, it doesn't seem to be able to recognise any monitor other than my main one, which means I'm not able to input any actions on any of my other screens, and that is a huge problem for the project I am working on. -I've searched google for 2 hours but I can't find any straight answers on whether or not it's actually possible to work around. If anyone could either tell me that it is or isn't possible, tell me how to do it if it is, or suggest an equally effective alternative (for python), I would be extremely grateful.",Not sure if this is clear but I subtracted an extended monitor's horizontal resolution from 0 because my 2nd monitor is on the left of my primary display. That allowed me to avoid the out-of-bounds warning. My answer probably isn't the clearest but I figured I would chime in to let folks know it actually can work.,0.0,False,1,5932 -2019-02-07 21:14:35.190,How to encrypt(?) a document to prove it was made at a certain time?,"So, a bit of a strange question, but let's say that I have a document (jupyter notebook) and I want to be able to prove to someone that it was made before a certain date, or that it was created on a certain date - does anyone have any ideas as to how I'd achieve that? -It would need to be a solution that couldn't be technically re-engineered after the fact (faking the creation date). -Keen to hear your thoughts :) !","email it to yourself or a trusted party – dandavis -Good solution. -Thanks!",0.0,False,1,5933 -2019-02-08 03:38:25.450,How to reset Colab after the following CUDA error 'Cuda assert fails: device-side assert triggered'?,"I'm running my Jupyter Notebook using Pytorch on Google Colab. After I received the 'Cuda assert fails: device-side assert triggered' error, I am unable to run any other code that uses my pytorch module. Does anyone know how to reset my code so that my Pytorch functions that were working before can still run? -I've already tried implementing CUDA_LAUNCH_BLOCKING=1 but my code still doesn't work as the Assert is still triggered!","You need to reset the Colab notebook. 
To run existing Pytorch modules that used to work before, you have to do the following: - -Go to 'Runtime' in the toolbar -Click 'Restart and Run all' - -This will reset your CUDA assert and flush out the module so that you can have another shot at avoiding the error!",1.2,True,1,5934 -2019-02-08 07:38:41.967,How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode,"I use the toolchain from Kivy to compile a Python + Kivy project on MacOS, but by default the toolchain uses python2 recipes, and I need to change to python3. -I'm googling but I can't find how to do this. -Any idea? -Thanks","Your kivy installation is likely fine already. Your kivy-ios installation is not. Completely remove your kivy-ios folder on your computer, then do git clone git://github.com/kivy/kivy-ios to reinstall kivy-ios. Then try using toolchain.py to build python3 instead of python 2. -This solution worked for me. Thanks very much Erik.",1.2,True,2,5935 -2019-02-08 07:38:41.967,How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode,"I use the toolchain from Kivy to compile a Python + Kivy project on MacOS, but by default the toolchain uses python2 recipes, and I need to change to python3. -I'm googling but I can't find how to do this. -Any idea? -Thanks","For example, the dependency of the ""ios"" and ""pyobjc"" recipes is changed from depends = [""python""] to depends = [""python3""] (in __init__.py in each package in the recipes folder of the kivy-ios package). -These recipes are loaded from your request implicitly or explicitly. -Recipes with the problem described above effectively require hostpython2/python2, which then conflicts with python3. -The dependency of each recipe can be traced from the output of kivy-ios: ""hostpython"" or ""python"" in the console output correspond to hostpython2 or python2 (in the current version).",0.0,False,2,5935 -2019-02-09 15:50:20.647,How to reach streaming learning in Neural network?,"As per the title: I know there are some models supporting streaming learning, like classification models, and such a model has the function partial_fit(). -Now I'm studying regression models like SVR and RF regressor...etc in scikit. -But most regression models don't support partial_fit. -So I want to reach the same effect in a neural network. If in tensorflow, how do I do that? Is there any keyword?","There is no special function for it in TensorFlow. You make a single training pass over a new chunk of data. And then another training pass over another new chunk of data, etc till you reach the end of the data stream (which, hopefully, will never happen).",0.0,False,1,5936 -2019-02-10 09:38:54.947,How to pickle or save a WxPython FontData Object,"I've been coding a text editor, and it has the function to change the default font displayed in the wx.stc.StyledTextCtrl. -I would like to be able to save the font as a user preference, and I have so far been unable to save it. -The exact object type is . -Would anyone know how to pickle/save this?","Probably due to its nature, you cannot pickle a wx.Font. -Your remaining option is to store its constituent parts. -Personally, I store facename, point size, weight, slant, underline, text colour and background colour. -How you store them is your own decision. -I use 2 different options depending on the code. - -Store the entries in an sqlite3 database, which allows for multiple -indexed entries. 
-Store the entries in an .ini file using -configobj - -Both sqlite3 and configobj are available in the standard python libraries.",1.2,True,1,5937 -2019-02-10 09:51:41.193,how to decode gzip string in JS,"I have one Django app, and in its view I am using the gzip_str(str) method to compress data and send it back to the browser. Now I want to get the original string back in the browser. How can I decode the string in JS? -P.S. I have found a few questions here related to the javascript decoding of a gzip string, but I could not figure out how to use them. Please tell me how I can decode and get the original string.","Serve the string with an appropriate Content-Encoding, then the browser will decode it for you.",0.0,False,1,5938 -2019-02-10 15:03:18.307,How to remove unwanted python packages from the Base environment in Anaconda,"I am using Anaconda. I would like to know how to remove or uninstall unwanted packages from the base environment. I am using another environment for my coding purposes. -I tried to update my environment by using a yml file (not the base environment). Unexpectedly, some packages were installed by the yml into the base environment. So now it has 200 python packages which my other environment also has. I want to clear the unwanted packages in the base environment, as I am not using any packages there. Also, my memory is full because of this. -Please give me a solution to remove unwanted packages from the base environment in anaconda. -It is very hard to remove each package one by one, therefore I am looking for a better solution.","Please use the below code: -conda uninstall -n base ",0.0,False,1,5939 -2019-02-11 00:05:55.277,Pythonic way to split project into modules?,"Say, there is a module a which, among all other stuff, exposes some submodule a.b. -AFAICS, it is desired to maintain modules in such a fashion that one types import a, import a.b and then invokes something b-specific in the following way: a.b.b_specific_function() or a.a_specific_function(). -The question I'd like to ask is how to achieve such an effect? -There is directory a and there is source-code file a.py inside of it. It seems to be the logical choice, though it would look like import a.a then, rather than import a. The only way I see is to put a.py's code into the __init__.py in the a directory, though it feels definitely wrong... -So how do I keep my namespaces clean?",You can put the code into __init__.py. There is nothing wrong with this for a small subpackage. If the code grows large it is also common to have a submodule with a repeated name like a/a.py and then inside __init__.py import it using from .a import *.,1.2,True,1,5940 -2019-02-11 11:28:57.127,Fastest way in numpy to sum over upper triangular elements with the least memory,"I need to perform a summation of the kind i<j Configure IDLE => Settings => Highlights there is a highlight setting for builtin names (default purple), including a few non-functions like Ellipsis. There is another setting for the names in def (function) and class statements (default blue). You can make def (and class) names be purple also. -This will not make function names purple when used because the colorizer does not know what the name will be bound to when the code is run.",1.2,True,1,5949 -2019-02-17 13:30:30.583,Count number of Triggers in a given Span of Time,"I've been working for a while with some cheap PIR modules and a raspberry pi 3. My aim is to use 4 of these guys to understand if a room is empty, and turn off some lights in case. 
-Now, these lovely sensors aren't really precise. They false-trigger from time to time, and they don't trigger right after their status has changed, which makes things much harder. -I thought I could solve the problem by measuring a sort of ""density of triggers"", meaning how many triggers occurred during the last 60 seconds or so. -My question is how I could implement this solution effectively. I thought of building a sort of container and filling it with elements on a timer or something, but I'm not really sure this would do the trick. -Thank you!",How are you powering the PIR sensors? They should be powered with 5V. I had a similar problem with false triggers when I powered the PIR sensor with only 3.3V.,0.0,False,1,5950 -2019-02-18 02:33:10.543,"While debugging in pycharm, how to debug only through a certain iteration of the for loop?","I have a for loop in Python in the Pycharm IDE. I have 20 iterations of the for loop. However, the bug seems to be coming from the dataset looped over during the 18th iteration. Is it possible to skip the first 17 values of the for loop, and jump solely to debugging the 18th iteration? -Currently, I have been going through all 17 iterations to reach the 18th. The logic encompassed in the for loop is quite intricate and long. Hence, every debug cycle through each iteration takes a very long time. -Is there some way to skip to the desired iteration in Pycharm without in-depth debugging of the previous iterations?",You can set a breakpoint with a condition (i == 17; right click on the breakpoint to set it) at the start of the loop.,-0.1352210990936997,False,1,5951 -2019-02-18 17:11:25.750,How to evaluate the path to a python script to be executed within Jupyter Notebook,"Note: I am not simply asking how to execute a Python script within Jupyter, but how to evaluate a python variable which would then result in the full path of the Python script I want to execute. -In my particular scenario, some previous cell in my notebook generates a path based on some condition. -Example of two possible cases: - -script_path = /project_A/load.py -script_path = /project_B/load.py - -Then some time later, I have a cell where I just want to execute the script. Usually, I would just do: -%run -i /project_A/load.py -but I want to keep the cell's code generic by doing something like: -%run -i script_path -where script_path is a Python variable whose value is based on the conditions that are evaluated earlier in my Jupyter notebook. -The above does not work because Jupyter then complains that it cannot find script_path.py. -Any clues how I can have a Python variable passed to the %run magic?","One hacky way would be to change the directory via %cd path -and then run the script with %run -i file.py -Edit: I know that this is not exactly what you were asking but maybe it helps with your problem.",0.0,False,1,5952 -2019-02-19 09:11:19.870,How to use pretrained word2vec vectors in doc2vec model?,"I am trying to implement doc2vec, but I am not sure how the input for the model should look if I have pretrained word2vec vectors. -The problem is that I am not sure how to theoretically use pretrained word2vec vectors for doc2vec. I imagine that I could prefill the hidden layer with the vectors and fill the rest of the hidden layer with random numbers. -Another idea is to use the vector as input for a word instead of a one-hot encoding, but I am not sure if the output vectors for docs would make sense.
-Thank you for your answer!","You might think that Doc2Vec (aka the 'Paragraph Vector' algorithm of Mikolov/Le) requires word-vectors as a 1st step. That's a common belief, and perhaps somewhat intuitive, by analogy to how humans learn a new language: understand the smaller units before the larger, then compose the meaning of the larger from the smaller. -But that's a common misconception, and Doc2Vec doesn't do that. -One mode, pure PV-DBOW (dm=0 in gensim), doesn't use conventional per-word input vectors at all. And, this mode is often one of the fastest-training and best-performing options. -The other mode, PV-DM (dm=1 in gensim, the default) does make use of neighboring word-vectors, in combination with doc-vectors in a manner analogous to word2vec's CBOW mode – but any word-vectors it needs will be trained-up simultaneously with doc-vectors. They are not trained 1st in a separate step, so there's not an easy splice-in point where you could provide word-vectors from elsewhere. -(You can mix skip-gram word-training into the PV-DBOW, with dbow_words=1 in gensim, but that will train word-vectors from scratch in an interleaved, shared-model process.) -To the extent you could pre-seed a model with word-vectors from elsewhere, it wouldn't necessarily improve results: it could easily send their quality sideways or worse. It might in some lucky well-managed cases speed model convergence, or be a way to enforce vector-space-compatibility with an earlier vector-set, but not without extra gotchas and caveats that aren't a part of the original algorithms, or well-described practices.",1.2,True,1,5953 -2019-02-21 02:24:51.223,How to convert every other character in a string to ascii in Python?,"I know how to convert characters to ascii and stuff, and I'm making my first encryption algorithm just as a little fun project, nothing serious. I was wondering if there was a way to convert every other character in a string to ascii. I know this is similar to some other questions, but I don't think it's a duplicate. Also P.S. I'm fairly new to Python :)",Use the ord() function to get the ascii value of a character. You can then do a chr() of that value to get the character back.,0.0,False,1,5954 -2019-02-21 05:36:13.653,Run python script by PHP from another server,"I am making APIs. -I'm using CentOS for the web server, and another windows server 2016 for the API server. -I'm trying to make things work between the web server and the windows server. -My logic is like the following flow: -1) Fill in the data form and click the button on the web server -2) Send the data to the windows server -3) A Python script runs and makes more data -4) The new data must be sent back to the web server -5) The web server gets the new data -6) BAMM! The data appears on the browser! -I have made the python scripts, but I can't decide how to make the data go between the two servers. -Should I use ajax/Curl on the web server? -I was planning to send a POST request with Curl from the web server to the Windows server. -But I don't know how to receive that data on the windows server. -Please help! Thank you in advance.","First option: (Recommended) -You can create the python side as an API endpoint and, from the PHP server, call the python API (a minimal sketch follows).
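-For illustration, a minimal sketch of what that first option could look like on the Python side (Flask is just one possible framework here, and make_more_data stands in for your existing script's logic; all names are made up):
-
-# api.py -- runs on the Windows server
-from flask import Flask, request, jsonify
-
-app = Flask(__name__)
-
-@app.route(""/process"", methods=[""POST""])
-def process():
-    data = request.get_json()        # the form data POSTed by the web server
-    result = make_more_data(data)    # your existing processing step
-    return jsonify(result)           # JSON sent back to the PHP side
-
-if __name__ == ""__main__"":
-    app.run(host=""0.0.0.0"", port=5000)
-
-The PHP side then just POSTs JSON to http://the-windows-server:5000/process (with curl or similar) and reads the JSON response.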
-Second option: -You can create the python side just like a normal webpage and whenever you call that page from PHP server you pass the params along with HTTP request, and after receiving data in python you print the data in JSON format.",1.2,True,1,5955 -2019-02-21 11:00:17.487,Kivy Android App - Switching screens with a swipe,"Every example I've found thus-far for development with Kivy in regards to switching screens is always done using a button, Although the user experience doesn't feel very ""native"" or ""Smooth"" for the kind of app I would like to develop. -I was hoping to incorperate swiping the screen to change the active screen. -I can sort of imagine how to do this by tracking the users on_touch_down() and on_touch_up() cords (spos) and if the difference is great enough, switch over to the next screen in a list of screens, although I can't envision how this could be implemented within the kv language -perhaps some examples could help me wrap my head around this better? + +Regards, +Niklas","After talking to Simon on Slack we found the culprit: + +simon-mo: aha yes objects/strings are not zero copy. categorical or fixed length string works. for fixed length you can try convert them to np.array first + +Experimenting with this (categorical values, fixed length strings etc) allows me not quite get zero-copy but at least fairly low latency (~300ms or less) when using Ray Objects or Plasma store.",1.2,True,1,6816 +2020-06-10 15:08:46.340,linking web application's backend in python and frontend in flutter,I am making a CRM web application. I am planning to do its backend in python(because I only know that language better) and I have a friend who uses flutter for frontend. Is it possible to link these two things(flutter and python backend)? If yes how can it be done...and if no what are the alternatives I have?,I used $.ajax() method in HTML pages and then used request.POST['variable_name_used_in_ajax()'] in the views.py,1.2,True,2,6817 +2020-06-10 15:08:46.340,linking web application's backend in python and frontend in flutter,I am making a CRM web application. I am planning to do its backend in python(because I only know that language better) and I have a friend who uses flutter for frontend. Is it possible to link these two things(flutter and python backend)? If yes how can it be done...and if no what are the alternatives I have?,"Yes you both can access same Django rest framework Backend. Try searching for rest API using Django rest framework and you are good to go. +Other alternatives are Firebase or creating rest API with PHP. +You would need to define API endpoints for different functions of your app like login,register etc. +Django rest framework works well with Flutter. I have tried it. You could also host it in Heroku +Use http package in flutter to communicate with the Django server.",0.0,False,2,6817 +2020-06-10 16:24:39.130,Building Tensorflow 1.5,"I have an old Macbook Pro 3,1 running ubuntu 20.04 and python 3.8. The mac CPU doesn't have support for avx (Advanced Vector Extensions) which is needed for tensorflow 2.2 so whilst tensorflow installs, it fails to run with the error: + +illegal instruction (core dumped) + +I've surfed around and it seems that I need to use tensorflow 1.5 however there is no wheel for this for my configuration and I have the impression that I need to build one for myself. +So here's my question... how do I even start to do that? Does anyone have a URL to Building-Stuff-For-Dummies or something similar please? 
(Any other suggestions also welcome) +Thanks in advance for your help",Update: I installed python 3.6 alongside the default 3.8 and then installed tensorflow 1.5 and it looks like it works now (albeit with a few 'future warnings'.),0.0,False,2,6818 +2020-06-10 16:24:39.130,Building Tensorflow 1.5,"I have an old Macbook Pro 3,1 running ubuntu 20.04 and python 3.8. The mac CPU doesn't have support for avx (Advanced Vector Extensions) which is needed for tensorflow 2.2 so whilst tensorflow installs, it fails to run with the error: + +illegal instruction (core dumped) + +I've surfed around and it seems that I need to use tensorflow 1.5 however there is no wheel for this for my configuration and I have the impression that I need to build one for myself. +So here's my question... how do I even start to do that? Does anyone have a URL to Building-Stuff-For-Dummies or something similar please? (Any other suggestions also welcome) +Thanks in advance for your help",Usually there are instructions for building in the repository's README.md. Isn't there such for TensorFlow? It would be odd.,0.0,False,2,6818 +2020-06-10 17:22:00.360,xgboost how to copy model,"In the xgboost documentation they refer to a copy() method, but I can't figure out how to use it since if foo is my model, neither bar = foo.copy() nor bar=xgb.copy(foo) works (xgboost can't find a copy() attribute of either the module or the model). Any suggestions?","It turns out that copy() is a method of the Booster object, but a (say) XGBClassifier is not one, so if using the sklearn front end, you do +bar = foo.get_booster().copy()",0.2012947653214861,False,1,6819 +2020-06-11 02:21:57.910,Need help getting data using Selenium,"I'm trying to get Python and selenium to store the ""1292"" in the following html script and cant figure out why it won't work. I've tried using find_element_by_xpath as well as placing a wait before it and I keep getting this error ""Message: no such element: Unable to locate element:"" +Any ideas on how else I can accomplish this? Thanks + + 1292 + ","You can try: +driver.find_element_by_xpath(""//tspan[text()='1292']"").text +to obtain the string ""1292"".",0.0,False,1,6820 +2020-06-11 07:07:02.407,Alternatives for interaction between C# and Python application -- Pythonnet vs DLL vs shared memory vs messaging,"We have a big C# application, would like to include an application written in python and cython inside the C# +Operating system: Win 10 +Python: 2.7 +.NET: 4.5+ +I am looking at various options for implementation here. +(1) pythonnet - embed the python inside the C# application, if I have abc.py and inside the C#, while the abc.py has a line of ""import numpy"", does it know how to include all python's dependencies inside C#? +(2) Convert the python into .dll - Correct me if i am wrong, this seems to be an headache to include all python files and libraries inside clr.CompileModules. Is there any automatically solution? (and clr seems to be the only solution i have found so far for building dll from python. +(3) Convert .exe to .dll for C# - I do not know if i can do that, all i have is the abc.exe constructed by pyinstaller +(4) shared memory seems to be another option, but the setup will be more complicated and more unstable? (because one more component needs to be taken care of?) +(5) Messaging - zeromq may be a candidate for that. 
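+To make option (5) concrete, this is roughly the kind of minimal Python-side sketch I have in mind (pyzmq; the port and message shape are only illustrative, and the C# side would presumably use something like NetMQ):
+
+import zmq  # pyzmq
+
+context = zmq.Context()
+socket = context.socket(zmq.REP)  # reply socket; the C# application would hold the matching REQ socket
+socket.bind(""tcp://*:5555"")
+
+while True:
+    msg = socket.recv_json()                          # request coming from the C# side
+    socket.send_json({""status"": ""ok"", ""echo"": msg})   # reply, hopefully within the milliseconds budget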
+Requirements: +Both C# and python have a lot of classes and objects and they need to be persistent +The C# application needs to interact with the Python application +They run in real-time, so performance for communication does matter, in the milliseconds space. +I believe someone should have been through a similar situation and I am looking for advice to find the best suitable solution, as well as pros and cons for the above solutions. +Stability comes first, then the less complex the solution the better it is.",For variant 1: in my TensorFlow binding I simply add the content of a conda environment to a NuGet package. Then you just have to point Python.NET to use that environment instead of the system Python installation.,0.0,False,1,6821 +2020-06-11 15:48:31.190,Test interaction between flask apps,"I have a flask app that is intended to be hosted on multiple hosts. That is, the same app is running on different hosts. Each host can then send a request to the other hosts to take some action on their respective systems. +For example, assume that there are systems A and B, both running this flask app. A knows the IP address of B and the port number that the app is hosted on at B. A gets a request via a POST intended for B. A then needs to forward this request to B. +I have the forwarding being done in a route that simply checks the JSON attached to the POST to see if it is the intended host. If not, it uses python's requests library to make a POST request to the other host. +My issue is how do I simulate this environment (two different instances of the same app with different ports) in a python unittest so I can confirm that the forwarding is done correctly? +Right now I am using app.test_client() to test most of the routes, but as far as I can tell the app.test_client() does not contain a port number or IP address associated with it. So having the app POST to another app.test_client() seems unlikely. +I tried hosting the apps in different threads, but there does not seem to be a clean and easy way to kill the thread once app.run() starts; I can't join, as app.run() never exits. In addition, the internal state of the app (app.config) would be hidden. This makes verifying that A does not do the request and B does hard. +Is there any way to run two flask apps simultaneously on different port numbers and still get access to both apps' app.config? Or am I stuck using the threads and finding some other way to make sure A does not execute the request and B does? +Note: these apps do not have any forms, so there is no CSRF.","I ended up doing two things. One, I started using the patch decorator from the mock library to fake the response from system B. More specifically I use @patch('requests.post'), then in my code I set the return value to ""< Response [200]>"". However this only makes sure that requests.post is called, not that the second system processed it correctly. The second thing I did was write a separate test that makes the request that should have been sent by A and sends it to the system to check if it processes it correctly. In this manner systems A and B are never running at the same time. Instead the tests just fake their responses/requests. +In summary, I needed to use @patch('requests.post') to fake the reply from B saying it got the request. Then, in a different test, I set up B and made a request to it.",0.0,False,1,6822 +2020-06-11 23:53:40.030,How do I perform cross-correlation between two time series and what transformations should I perform in python?,"I have two time series datasets, i.e.
errors received and bookings received on a daily basis for three years (a few million rows). I wish to find out if there is any relationship between them. As of now, I think that cross-correlation between these two series might help. In order to do so, should I perform any transformations like stationarity, detrending, deseasonality, etc.? If this is correct, I'm thinking of using ""scipy.signal.correlate"" but I really want to know how to interpret the result.","scipy.signal.correlate is for the correlation of time series. For series y1 and y2, correlate(y1, y2) returns a vector that represents the time-dependent correlation: the k-th value represents the correlation with a time lag of ""k - N + 1"", so that the (N+1)-th element is the similarity of the time series without time lag: close to one if y1 and y2 have similar trends (for normalized data), close to zero if the series are independent. +numpy.corrcoef takes two arrays and aggregates the correlation in a single value (the ""time 0"" of the other routine), the Pearson correlation, and does so for N rows, returning an NxN array of correlations. corrcoef normalizes the data (divides the results by their rms value), so that the diagonal is supposed to be 1 (average self correlation). +The questions about stationarity, detrending, and deseasonality depend on your specific problem. The routines above consider ""plain"" data without regard for its meaning.",1.2,True,1,6823 +2020-06-12 19:17:12.797,How to remove superuser on the system in Django?,"I was doing a project using django and I realized that I forgot to activate the virtualenv. I had already made some changes and applied them outside the venv, and created a superuser on the system. + +How do I find any changes made on the system? +How do I remove the superuser that I made on the system, +and what are the cmd commands for that?","If you haven't set up an additional database for your project and you have used django-admin startproject, you'll just have a standard django setup, and you will be using sqlite. With this setup, your database is stored in a file in your root directory (for the project) called db.sqlite3. +This is where the super-user you have created will be stored. So it does not matter if the virtualenv was activated or not. Your superuser will have been created in the right place. +TLDR: No need to worry, the superuser you created will most likely be in the right place.",1.2,True,1,6824 +2020-06-12 19:21:05.707,How to get python to search for whole numbers in a string-not just digits,"Okay, please do not close this and send me to a similar question, because I have been looking for hours at similar questions with no luck. +Python can search for digits using re.search([0-9]) +However, I want to search for any whole number. It could be 547 or 2 or 16589425. I don't know how many digits there are going to be in each whole number. +Furthermore, I need it to specifically find and match numbers that are going to take a form similar to this: 1005.2.15 or 100.25.1 or 5.5.72 or 1102.170.24 etc. +It may be that there isn't a way to do this using re.search, but any info on what identifier I could use would be amazing.","Assuming that you're looking for whole numbers only, try re.search(r""[0-9]+"")",0.0,False,1,6825 +2020-06-12 20:05:50.403,Dynamic Select Statement In Python,"I'm using Python with cx_Oracle, and I'm trying to do an INSERT....SELECT. Some of the items in the SELECT portion are variable values. I'm not quite sure how to accomplish this.
Do I bind those variables in the SELECT part, or just concatenate a string? + + v_insert = (""""""\ + INSERT INTO editor_trades + SELECT "" + v_sequence + "", "" + issuer_id, UPPER("" + p_name + ""), "" + p_quarter + "", "" + p_year + + "", date_traded, action, action_xref, SYSDATE + FROM "" + p_broker.lower() + ""_tmp"") """""") + +Many thanks!","With Oracle DB, binding only works for data, not for SQL statement text (like column names), so you have to do concatenation. Make sure to allow-list or filter the variables (v_sequence etc.) so there is no possibility of SQL injection security attacks. You probably don't need to use lower() on the table name, but that's not 100% clear to me since your quoting currently isn't valid.",0.0,False,1,6826 +2020-06-14 05:35:32.993,Heroku won't run latest python file,"I use Heroku to host my discord.py bot, and since I've started using sublime merge to push to GitHub (I use Heroku GitHub for it), Heroku hasn't been running the latest file. The newest release is on GitHub, but Heroku runs an older version. I don't think it's anything to do with sublime merge, but it might be. I've already tried making a new application, but I have the same problem. Does anyone know how to fix this? +Edit: I also tried running Heroku bash and running the python file again","1) Try to deploy the branch again (maybe another branch) +2) Enable automatic deploys",0.3869120172231254,False,1,6827 +2020-06-14 09:54:51.873,Is it faster and more memory efficient to manipulate data in Python or PostgreSQL?,"Say I had a PostgreSQL table with 5-6 columns and a few hundred rows. Would it be more effective to use psycopg2 to load the entire table into my Python program and use Python to select the rows I want and order the rows as I desire? Or would it be more effective to use SQL to select the required rows, order them, and only load those specific rows into my Python program? +By 'effective' I mean in terms of: +Memory Usage. +Speed. +Additionally, how would these factors start to vary as the size of the table increases? Say, the table now has a few million rows?","Actually, if you are comparing data that is already loaded into memory to data being retrieved from a database, then the in-memory operations are often going to be faster. Databases have overhead: +They are in separate processes on the same server or on a different server, so data and commands need to move between them. +Queries need to be parsed and optimized. +Databases support multiple users, so other work may be going on using up resources. +Databases maintain ACID properties and data integrity, which can add additional overhead. +The first two of these in particular add overhead compared to equivalent in-memory operations for every query. +That doesn't mean that databases do not have advantages, particularly for complex queries: +They implement multiple different algorithms and have an optimizer to choose the best one. +They can take advantage of more resources -- particularly by running in parallel. +They can (sometimes) cache results, saving lots of time. +The advantage of databases is not that they provide the best performance all the time. The advantage is that they provide good performance across a very wide range of requests with a simple interface (even if you don't like SQL, I think you need to admit that it is simpler, more concise, and more flexible than writing code in a 3rd generation language).
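+As a rough illustration of that trade-off (a hedged sketch using psycopg2 -- the table, column names and connection string here are invented):
+
+import psycopg2
+
+conn = psycopg2.connect(""dbname=mydb"")
+cur = conn.cursor()
+
+# Let the database filter and order -- it can use indexes and its optimizer:
+cur.execute(""SELECT id, name, score FROM items WHERE score > %s ORDER BY score DESC"", (10,))
+top_rows = cur.fetchall()
+
+# versus pulling everything across and doing the same work in Python:
+cur.execute(""SELECT id, name, score FROM items"")
+rows = [r for r in cur.fetchall() if r[2] > 10]
+rows.sort(key=lambda r: r[2], reverse=True)
+
+For a few hundred rows the difference is negligible; at a few million rows the first form avoids moving most of the data out of the database at all.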
+In addition, databases protect data, via ACID properties and other mechanisms to support data integrity.",1.2,True,1,6828 +2020-06-15 04:21:10.657,Creating a stop in a While loop - Python,"I am working on code that is supposed to use a while loop to determine if the number inputted by the user is the same as the variable secret_number = 777. +The criteria are as follows: +it will ask the user to enter an integer number; +it will use a while loop; +it will check whether the number entered by the user is the same as the number picked by the magician. If the number chosen by the user is different from the magician's secret number, the user should see the message ""Ha ha! You're stuck in my loop!"" and be prompted to enter a number again. +If the number entered by the user matches the number picked by the magician, the number should be printed to the screen, and the magician should say the following words: ""Well done, muggle! You are free now."" +If you also have any tips on how to use the while loop, that would be really helpful. Thank you!","You can use while True: to create a while loop. +Inside it, use an if/else to compare the input value and secret_number. If they match, print(""Well done, muggle! You are free now."") and break. Otherwise, print(""Ha ha! You're stuck in my loop!"") and continue.",0.0,False,1,6829 +2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","Thanks for reading this. I am using macOS High Sierra. I am not very familiar with the terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution for resolving this, either by re-pathing my IDLE or deleting it entirely. +I do have python, python launcher, and anaconda installed (I'm not very sure how anaconda works; I installed it a few years back and didn't touch it). I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it in IDLE, it says module not found. Though if I run python3 in the terminal and type my code in, it works (so pandas has indeed been installed). +When I run which python in the terminal, it returns +/Users/myname/anaconda3/bin/python +(when I enter this directory from the terminal, it shows that in the bin folder I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) +When I run which idle in the terminal, it returns +/usr/bin/idle (I'm not even sure how to find this directory from the terminal) +When I run import os; print(os.path) in IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' +I would really appreciate some help to figure out how to ensure that when I install modules from the terminal, they are installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work in VSCode instead of IDLE. I can't seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","First of all, a quick description of anaconda: +Anaconda is meant to help you manage multiple python ""environments"", each one potentially having its own python version and installed packages (with their own respective versions).
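+(As a concrete illustration -- these are standard conda commands, shown here only as an example:
+conda create -n myenv python=3.7
+conda activate myenv
+conda install pandas
+This creates a fresh environment, switches into it, and installs a package only there.)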
This is really useful in cases where you would like multiple python versions for different tasks or when there is some conflict in the versions of packages required by other ones. By default, anaconda creates a ""base"" environment with a specific python version, IDLE and pip. Also, anaconda provides an improved way (with respect to pip) of installing and managing packages via the command conda install . +For the rest, I will be using the word ""vanilla"" to refer to the python installation that you manually set up, independent of anaconda. +Explanation of the problem: +Now, the problem arises since you also installed python independently. The details of the problem depend on how exactly you set up both python and anaconda, so I cannot tell you exactly what went wrong. Also, I am not an OSX user, so I have no idea how python is installed there and what it downloads/sets up alongside. +From your description however, it seems that the ""vanilla"" python installation did not overwrite either your anaconda python or anaconda's pip, but it did install IDLE and set it up to use this new python. +So right now, when you are downloading something via pip, only the python from anaconda is able to see that, and not IDLE's python. +Possible solutions: +1. Quick fix: +Just run IDLE via /Users/myname/anaconda3/bin/idle3 every time. This one uses anaconda's python and should be able to see all packages installed via conda install or pip install (*). I get that this is tiresome, but you don't have to delete anything. You can also set an ""alias"" in your ~/.bashrc file to make the command idle point specifically there. Let me know with a comment if you would like me to explain how to do that, as this answer will get too long and redundant. +2. Remove conda altogether (not recommended) +You can search google for how to uninstall anaconda along with everything that it has installed. What I do not know at this point is whether your ""vanilla"" python will become the default, whether you will need to also manually install pip again, and whether there is a need to reinstall python in order for everything to work properly. +3. Remove your ""vanilla"" python installation and only use anaconda +Again, I do not know how python installation works on OSX, but it should be reasonably straightforward to uninstall it. The problem now is that you will probably not have a launcher for IDLE (since I am guessing anaconda doesn't provide one on OSX), but you will be able to use it via the terminal as described in 1. +4. Last resort: +If everything fails, simply uninstall both your vanilla python (which I presume will also uninstall IDLE) and anaconda, which will uninstall its own python, pip and idle versions. The relevant documentation should not be difficult to follow. Then, reinstall whichever you want anew. +Finally: +When you solve your problems, any IDE you choose, be it VSCode (I haven't used that either), pycharm or something else, will probably be able to integrate with your installed python. There is no need to install a new python ""bundle"" with every IDE. + +(*): Since you said that after typing pip install pandas your anaconda's python can import pandas while IDLE cannot, I am assuming in my answer that pip is also the one that comes with anaconda. You can make sure this is the case by typing which pip, which should point to an anaconda directory, probably /Users/myname/anaconda3/bin/pip",1.2,True,3,6830 +2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this.
I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. +I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found. Though if i run python3 on terminal and type my code in, it works (so pandas has indeed been installed). +When i run which python on terminal, it returns +/Users/myname/anaconda3/bin/python +(when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) +When i run which idle on terminal, it returns +/usr/bin/idle (im not even sure how to find this directory from the terminal) +When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' +Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. Also, I would like to know whether it is possible for me to work on VSCode instead of IDLE. I cant seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","First: This would be a comment if I had enough reputation. +Second: I would just delete python. Everything. And reinstall it.",0.1352210990936997,False,3,6830 +2020-06-15 16:39:14.833,"IDLE and python is different, not able to install modules properly","thanks for reading this. I am using macOS High Sierra. I am not very familiar with terminal or environment variables, but am trying to learn more. From reading other threads and google, it seems like I either have multiple pythons installed, or have pythons running from different paths. However I am not able to find a solution to resolving this, either by re-pathing my IDLE or deleting it entirely. +I do have python, python launcher, and anaconda (not very sure how anaconda works, have it installed a few years back and didn't touch it) installed. I am trying to install pandas (pip install pandas), which tells me that I have it installed, but when I run it on IDLE, it says module not found. Though if i run python3 on terminal and type my code in, it works (so pandas has indeed been installed). +When i run which python on terminal, it returns +/Users/myname/anaconda3/bin/python +(when i enter into this directory from terminal, it shows that in the bin folder, I have python, python.app, python3, python3-config, python3.7, python3.7-config, python3.7m, python3.7m-config) +When i run which idle on terminal, it returns +/usr/bin/idle (im not even sure how to find this directory from the terminal) +When i run import os; print(os.path) on IDLE, it returns module 'posixpath' from '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/posixpath.py' +Would really appreciate some help to figure out how to ensure that when i install modules from terminal, it would be installed into the same python as the one IDLE is using. 
Also, I would like to know whether it is possible for me to work in VSCode instead of IDLE. I can't seem to find suitable extensions for data science and its related modules (like statsmodels, pandas etc). Thanks a lot!","To repeat and summarize what has been said in various other question answers: +1a. 3rd party packages are installed for a particular python(3).exe binary. +1b. To install multiple packages to multiple binaries, see the options from python -m pip -h. + +To find out which python binary is running, execute import sys; print(sys.executable). + +3a. For a 3rd party package xyz usually installed in some_python/Lib/site-packages, IDLE itself has nothing to do with whether import xyz works. It only matters whether xyz is installed for 'somepython' (see 1a). +3b. To run IDLE with 'somepython', run somepython -m idlelib in a terminal or console. +somepython can be a name recognized by the OS or a path to a python executable.",0.0,False,3,6830 +2020-06-15 16:46:12.930,Why does os.system('cls') print 0,"Hello, before I say anything I would like to let you know that I tried searching for the answer but I found nothing. +Whenever I use os.system('cls') it clears the screen, but it prints out a zero. +Is this normal, and if not, how do I stop it from doing that?","I guess you are running it inside an interpreter. +os.system will return: +a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero) +So it just prints the value it got, the return value of the command cls on the command line, which is 0 because the command ran successfully.",0.2012947653214861,False,1,6831 +2020-06-15 22:24:45.037,VS Code - pylint is not running,"I have a workspace set up in VS Code where I do python development. I have linting enabled, pylint enabled as the provider, and lint on save enabled, but I continue to see no errors in the Problems panel. When I run pylint via the command line in the virtual environment I see a bunch of issues - so I know pylint works. I am also using black formatting (on save) which works without issue. I have tried using both the default pylint path as well as updating it manually to the exact location, and still no results. When I look at the Output panel for python it looks like pylint is never even running (i.e. I see the commands for black running there but nothing for pylint). +My pylint version is 2.4.4 and my VS Code version is 1.46 +Any idea how to get this working?","Uninstall the Python Extension +Reinstall the Python Extension +And with that there will be one more extension alongside the ""Python Extension"" named ""PYLANCE""; don't forget to install that too. +Reload VS Code + +DONE !!",0.0,False,1,6832 +2020-06-16 06:08:53.517,Saving a File in an Atom Text Editor Folder,"This is my first time on stack overflow. I am a beginner python coder and I use the Atom text editor. I am currently learning from a book called PythonCrashCourse by Eric Matthes (second edition) and am developing a practice project called Alien Invasion. I am currently stuck on saving a file of a spaceship image into a folder named ""images"" within my text editor. I have an ASUS chromebook. The file I am trying to save is called ship.bmp and the book instructions say ""Make a folder called images inside your main alien_invasion project folder. Save the file ship.bmp in the images folder."" I have the ship.bmp file saved, but I just don't know how to move it into the ""images"" folder within my text editor.
I have been stuck on this for quite a while and I would really appreciate it if someone could give me some advice. Thanks!","First of all you need to have the ship.bmp file downloaded somewhere on your computer. You then would need to move it into your project folder. I think that the easiest way for you to navigate through the files you have is to go to your ""Files"" app in the Chromebook. You should look through your Downloads folder for the ship.bmp after you download it and manually move it into the project folder that you are working on. You should be able to open your project folder and place the ship.bmp file inside the ""images"" folder.",0.0,False,1,6833 +2020-06-16 10:20:24.730,How does Python compare two lists of unequal length?,"I am aware of the following: + +[1,2,3]<[1,2,4] is True because Python does an element-wise comparison from left to right and 3 < 4 +[1,2,3]<[1,3,4] is True because 2 < 3 so Python never even bothers to compare 3 and 4 + +My question is how does Python's behavior change when I compare two lists of unequal length? + +[1,2,3]<[1,2,3,0] is True +[1,2,3]<[1,2,3,4] is True + +This led me to believe that the longer list is always greater than the shorter list. But then: + +[1,2,3]<[0,0,0,0] is False + +Can someone please explain how these comparisons are being done by Python? +My hunch is that element-wise comparisons are first attempted and only if the first n elements are the same in both lists (where n is the number of elements in the shorter list) does Python consider the longer list to be greater. If someone could kindly confirm this or shed some light on the reason for this behavior, I'd be grateful.","The standard comparisons (<, <=, >, >=, ==, !=, in , not in ) work exactly the same among lists, tuples and strings. +The lists are compared element by element. +If they are of variable length, it happens till the last element of the shorter list +If they are same from start to the length of the smaller one, the length is compared i.e. shorter is smaller",1.2,True,1,6834 +2020-06-16 18:37:38.617,Cannot install older versions of tensorflow: No matching distribution found for tensorflow==1.9.0,"I need to install older versions of tensorflow to get the deepface library to work properly, however whenever I run pip install tensorflow==1.9.0, I get: ERROR: Could not find a version that satisfies the requirement tensorflow==1.9.0 (from versions: 2.2.0rc1, 2.2.0rc2, 2.2.0rc3, 2.2.0rc4, 2.2.0) +Anyone else run into this issue/know how to fix it? Thanks!",You can install TensorFlow 1.9.0 with the following Python versions: 2.7 and 3.4 to 3.6.,0.6730655149877884,False,1,6835 +2020-06-17 20:14:07.813,Remove character '\xa0' while reading CSV file in python,I want to remove the non-ASCII Character '\xa0' while reading my CSV file using read_csv into a dataframe with python. Can someone tell me how to achieve this?,"You can use x = txt.replace(u'\xa0', u'') for text you're reading.",1.2,True,1,6836 +2020-06-17 21:33:19.697,"How to scrape over 50,000 data points from dynamically loading webpage in under 24 hours?","I am using selenium python and was wondering how one effectively scrapes over 50,000 data points in under 24 hours. For example, when I search for products on the webpage 'insight.com' it takes about 3.5 seconds for the scraper to search for the product and grab its price, meaning that with large amounts of data it takes the scraper several days. 
Apart from using threads to simultaneously look up several products at the same time, how else can I speed up this process? +I only have one laptop and will have to simultaneously scrape six other similar websites, so I do not want too many threads, or the speed at which the computer operates will slow down significantly. +How do people manage to scrape large amounts of data in such short periods of time?","If you stop using the selenium module, and rather work with a much more sleek and elegant module, like requests, you could get the job done in a matter of mere minutes. +If you manage to reverse engineer the requests being handled, and send them yourself, you could pair this with threading to scrape at some 50 'data points' per second, more or less (depending on some factors, like processing and internet connection speed).",0.3869120172231254,False,2,6837 +2020-06-17 21:33:19.697,"How to scrape over 50,000 data points from dynamically loading webpage in under 24 hours?","I am using selenium python and was wondering how one effectively scrapes over 50,000 data points in under 24 hours. For example, when I search for products on the webpage 'insight.com' it takes about 3.5 seconds for the scraper to search for the product and grab its price, meaning that with large amounts of data it takes the scraper several days. Apart from using threads to simultaneously look up several products at the same time, how else can I speed up this process? +I only have one laptop and will have to simultaneously scrape six other similar websites, so I do not want too many threads, or the speed at which the computer operates will slow down significantly. +How do people manage to scrape large amounts of data in such short periods of time?","Find an API and use that!!! The goal of both web scraping and APIs is to access web data. +Web scraping allows you to extract data from any website through the use of web scraping software. On the other hand, APIs give you direct access to the data you’d want. +As a result, you might find yourself in a scenario where there might not be an API to access the data you want, or the access to the API might be too limited or expensive. +In these scenarios, web scraping would allow you to access the data as long as it is available on a website. +For example, you could use a web scraper to extract product information from Amazon, since they do not provide an API for you to access this data. However, if you had access to an API, you could grab all the data you want, super, super, super fast!!! It's analogous to doing a query in a database on prem, which is very fast and very efficient, vs. refreshing a webpage, waiting for ALL elements to load, and not being able to use the data until all elements have been loaded, and then.....doing what you need to do.",0.2012947653214861,False,2,6837 +2020-06-18 02:49:23.653,How to efficiently query a large database on an hourly basis?,"Background: +I have multiple asset tables stored in a redshift database for each city, 8 cities in total. These asset tables display status updates on an hourly basis. 8 SQL tables and about 500 mil rows of data in a year. +(I also have access to the server that updates this data every minute.) + +Example: One market can have 20k assets displaying 480k (20k*24 hrs) status updates a day. + +These status updates are in a raw format and need to undergo a transformation process that is currently written in a SQL view. The end state is going into our BI tool (Tableau) for external stakeholders to look at.
+Problem: +The current way the data is processed is slow and inefficient, and it is probably not realistic to run this job on an hourly basis in Tableau. The status transformation requires that I look back at 30 days of data, so I do need to look back at the history throughout the query. +Possible Solutions: +Here are some solutions that I think might work; I would like to get feedback on what makes the most sense in my situation. +Run a python script as a cron job that looks at the most recent update, queries the last 30 days of the large history table, and sends the result to a table in the redshift database. +Materialize the SQL view and run an incremental refresh every hour. +Put the view in Tableau as a datasource and run an incremental refresh every hour. +Please let me know how you would approach this problem. My knowledge is in SQL, limited Data Engineering experience, Tableau (Prep & Desktop) and scripting in Python or R.","So first things first - you say that the data processing is ""slow and inefficient"" and ask how to efficiently query a large database. First I'd look at how to improve this process. You indicate that the process is based on the past 30 days of data - are the large tables time sorted, vacuumed and analyzed? It is important to take maximum advantage of metadata when working with large tables. Make sure your where clauses are effective at eliminating fact table blocks - don't rely on dimension table where clauses to select the date range. +Next look at your distribution keys and how these are impacting the need for your critical query to move large amounts of data across the network. The internode network has the lowest bandwidth in a Redshift cluster, and needlessly pushing lots of data across it will make things slow and inefficient. Using EVEN distribution can be a performance killer depending on your query pattern. +Now let me get to your question, and let me paraphrase - ""is it better to use summary tables, materialized views, or external storage (tableau datasource) to store summary data updated hourly?"" All 3 work, and each has its own pros and cons. +Summary tables are good because you can select the distribution of the data storage, and if this data needs to be combined with other database tables it can be done most efficiently. However, there is more data management to be performed to keep this data up to date and in sync. +Materialized views are nice as there is a lot less management action to worry about - when the data changes, just refresh the view. The data is still in the database so it is easy to combine with other data tables, but since you don't have control over storage of the data these actions may not be the most efficient. +External storage is good in that the data is in your BI tool, so if you need to refetch the results during the hour the data is local. However, it is now locked into your BI tool and far less efficient to combine with other database tables. +Summary data usually isn't that large, so how it is stored isn't a huge concern, and I'm a bit lazy, so I'd go with a materialized view. Like I said at the beginning, I'd first look at the ""slow and inefficient"" queries being run every hour. +Hope this helps",1.2,True,1,6838 +2020-06-18 03:50:06.060,How to send a HTML file as a table through outlook?,"I now have an HTML file and I want to send it as a table, not as an attachment, by using outlook. The code that I found online only sends the file as an attachment.
Can anyone give me ideas on how to do it properly?",You can use the HTMLBody property of the MailItem class to set up the message body.,1.2,True,1,6839 +2020-06-18 03:51:03.357,Python idle to python.exe,"So I've made a script in python idle and want to run it with python.exe, but whenever I do this, the python window pops up briefly for a second before closing. I want to run my code using python instead of idle; how can I do this?","Since I can't comment yet: +go to the command line, open the file's directory, and type: python filename.py",1.2,True,1,6840 +2020-06-18 04:35:58.813,Using Selenium without using any browser,"I have been trying to do web automation using selenium. Is there any way to use a browser like chrome or firefox without actually installing them, like using some alternate options, or having portable versions of them? If I can use portable versions, how do I tell selenium to use them?","If you run pip install selenium, +it comes with the portable chrome browser; no need to install any browser for this. +The chrome browser will show the tag ""chrome is controlled by automated test software"" near the search bar.",0.0,False,1,6841 +2020-06-18 06:04:06.180,Tkinter: How do I handle menus with many items?,"If I have a menu with too many items to fit on the screen, how do I get one of those 'more' buttons with a downward arrow at the bottom of the menu? Is that supported?","I solved my problem with cascading menus. I already had some, but I didn't want to use more for these particular menu items—but after closer inspection, I think it's better this way. +I'm still interested in other solutions, for scenarios where cascading menus are not a practical option, however (like if the screen is too narrow to cascade that far or something). So, I don't plan to mark this as the accepted answer anytime soon (even though in most circumstances, it's probably the best solution).",-0.2012947653214861,False,1,6842 +2020-06-18 10:18:20.563,How to check if a QThread is alive or killed and restart it if it is killed in PyQt5?,"I have a PyQt5 application that updates database collections one by one using a QThread and sends an update signal to the main thread as each collection gets updated, to reflect it on the GUI. It runs continuously 24X7. But somehow the data stops getting updated and the GUI also stops getting signals. However the application is still running, as other parts are accessible and functioning properly. Also, no errors are found in the log file. +Mostly the application runs fine, but after some random period this problem arises (first time after approximately a month, then after 2 weeks and now after 23 days). However, restarting the application solves the problem. +I tried using the isRunning() method and the isFinished() method, but no change was found. +Can anyone tell what the problem is?? Thank you in advance. +Also, please tell how to check whether the QThread is stuck or killed?","If any exception occurs in the thread, the thread can finish early. +So you should set a timeout when calling any third party library (for the data update) in the thread. +That will solve your problem.",0.0,False,1,6843 +2020-06-18 12:22:55.510,Ngrok hostname SSL Certificate,"I am running a Flask API application, and I have an SSL Certificate. +When I run the flask server on localhost, the certificate is applied by Flask successfully. +But when I use Ngrok to deploy the localhost app on a custom domain, the certificate is changed to *.ngrok.com; how can I change that to my certificate?
+EDIT #1: +I already have a certificate for the new hostname and I have already applied it in Flask, but ngrok is changing it.","You expose your service through the URL *.ngrok.com. A browser or other client will make a request to *.ngrok.com. The certificate presented there must be valid for *.ngrok.com. If *.ngrok.com presents a certificate for example.com, any valid HTTPS client would reject it because the names do not match, which by definition makes it an invalid certificate and is a flag for a potential security problem, exactly what HTTPS is designed to mitigate. +If you want to present your certificate for example.com to the client, you need to actually host your site at example.com",0.0,False,1,6844 +2020-06-18 14:48:27.287,Record sound without blocking Pygame UI,"I am making a simple Python utility that shows the tempo (BPM) of a song that is playing. I record short fragments of a few seconds to calculate the tempo over. The problem is that now I want to show this on a display using a Pygame UI, but when I'm recording sound, the UI does not respond. I want to make it so that the UI will stay responsive during the recording of the sound, and then update the value on the screen once the tempo over a new fragment has been calculated. How can I implement this? +I have looked at threading, but I'm not sure this is the appropriate solution for this.","I'd use the python threading library. +Use the pygame module in the main thread (just the normal python shell, effectively) and create a separate thread for the function that determines the BPM. +This BPM can then be saved to a global variable that can be accessed by PyGame for displaying.",1.2,True,1,6845 +2020-06-18 18:48:51.503,Text classification using Word2Vec,"I am having trouble understanding Word2Vec. I need to do a help desk text classification, based on what users complain about in the help desk system. Each sentence has its own class. +I've seen some pre-trained word2vec files on the internet, but I don't know if that is the best way to work, since my problem is very specific. And my dataset is in Portuguese. +I'm considering that I will have to create my own model, and I am in doubt about how to do that. Do I have to do it with the same words as the dataset I have, with my sentences and classes? +In the first line are the column titles. Below the first line, I have the sentence and the class. Could anyone help me? I saw that Gensim can create vector models, and it sounds good to me. But I am completely lost. + +: chamado,classe 'Prezados não estou conseguindo gerar uma nota fiscal + do módulo de estoque e custos.','ERP GESTÃO', 'Não consigo acessar o + ERP com meu usuário e senha.','ERP GESTÃO', 'Médico não consegue gerar + receituário no módulo de Medicina e segurança do trabalho.','ERP + GESTÃO', 'O produto 4589658 tinta holográfica não está disponível no + EIC e não consigo gerar a PO.','ERP GESTÃO',","Your inquiry is very general, and normally StackOverflow will be more able to help when you've tried specific things, and hit specific problems - so that you can provide exact code, errors, or shortfalls to ask about. +But in general: +You might not need word2vec at all: there are many text-classification approaches that, with sufficient training data, may assign your texts to helpful classes without using word-vectors. You will likely want to try those first, then consider word-vectors as a later improvement. +For word-vectors to be helpful, they need to be based on your actual language, and also ideally your particular domain-of-concern.
Generic word-vectors from news articles or even Wikipedia may not include the important lingo, and word-senses, for your problem. But it's not too hard to train your own word-vectors – you just need a lot of varied, relevant texts that use the words in realistic, relevant contexts. So yes, you'd ideally train your word-vectors on the same texts you eventually want to classify. +But mostly, if you're ""totally lost"", start with simpler text-classification examples. As you're using Python, examples based on scikit-learn may be most relevant. Adapt those to your data & goals, to familiarize yourself with all the steps & the ways of evaluating whether your changes are improving your end results or not. Then investigate techniques like word-vectors.",0.0,False,1,6846 +2020-06-19 18:05:21.573,Pyqt5 widget style similar to tkinter style,"I want to create a QWidget with a raised/sunken/groove/ridge relief similar to tkinter. I know how to do this in tkinter, but I don't know the style sheet option in Pyqt5 for each one. Please find the tkinter option below: +Widget = Tkinter.Button(top, text =""FLAT"", relief=raised ). I hope you can help me translate this to Pyqt5","You can do this with QFrame: you can set QFrame.setFrameShadow(QFrame.Sunken). But I couldn't find an equivalent for a plain QWidget.",0.0,False,1,6847 +2020-06-20 13:21:05.770,How to program NVIDIA's tensor cores in RTX GPU with python and numba?,"I am interested in using the tensor cores from NVIDIA RTX GPUs in python to benefit from their speed-up in some scientific computations. Numba is a great library that allows programming kernels for cuda, but I have not found how to use the tensor cores. Can it be done with Numba? If not, what should I use?",".... I have not found how to use the tensor cores. Can it be done with Numba? + +No. Numba presently doesn't have half precision support or tensor core intrinsic functions available in device code. + +If not, what should I use? + +I think you are going to be stuck with writing kernel code in the native C++ dialect and then using something like PyCUDA to run device code compiled from that C++ dialect.",1.2,True,1,6848 +2020-06-20 19:06:57.617,is it possible to run multiple http servers on one machine?,"Can I run multiple python http servers on one machine to receive http post requests from a webpage? +Currently I am running an http server on port 80, and on the web page there is an HTML form which sends the http post request to the python server. In the HTML form I am using my server's address like this: ""http://123.123.123.123"", and I am receiving the requests. +But I want to run multiple servers on the same machine, with different ports for each server. +If I run 2 more servers on ports 21200 and 21300, how do I send the post request from the HTML form to a specified port, so that the post request is received and processed by the correct server? +Do I need to define the server address like this: ""http://123.123.123.123:21200"" and ""http://123.123.123.123:21300""?","Yes, you can run multiple webservers on one machine. +Use the following command to run a server on a different port: +python3 -m http.server 4000 +4000 is the port number; you can replace it with any port number here.",1.2,True,2,6849 +2020-06-21 01:52:26.577,How to change API level when using buildozer?,"I just finished my app and made a release version with buildozer and signed it, but when I tried to upload my apk file to Google Play Console...It said that the API level of the app was 27 and it should be level 28. So how can I do this?
+Thanks in advance",Find the line that says android.api = 27 in your buildozer.spec file and change it to 28.,0.0,False,2,6850 +2020-06-21 01:52:26.577,How to change API level when using buildozer?,"I just finished my app and made a release version with buildozer and signed it but when I tried to upload my apk file to Google Play Console...It said that the API level of the app was 27 and it should be level 28. So how can I do this? +Thanks in advance","It should be edited in buildozer.spec file. +If you scroll down it's default to 27, change it to specification",1.2,True,2,6850 +2020-06-21 11:00:41.357,Is there a plugin similar to gitlens for pycharm or other products?,"My question is very simple , as you read the title I want plugin similar to GitLens that I found in vscode. As you know with GitLens you can easily see the difference between two or multiple commits. I searched it up and I found GitToolBox but I don't know how to install it as well and I don't think that's like GitLens...","Open Settings on jetbrains IDE. +Go to plugins and look for git toolbox. +Install it and boom, its done!",0.0,False,1,6851 +2020-06-21 14:24:22.560,Sending Information from one Python file to another,"I would like to know how to perform the below mentioned task +I want to upload a CSV file to a python script 1, then send file's path to another python script in file same folder which will perform the task and send the results to python script 1. +A working code will be very helpful or any suggestion is also helpful.","You can import the script editing the CSV to the python file and then do some sort of loop that edits the CSV file with your script 1 then does whatever else you want to do with script 2. +This is an advantage of OOP, makes these sorts of tasks very easy as you have functions set in a module python file and can create a main python file and run a bunch of functions editing CSV files this way.",0.0,False,1,6852 +2020-06-21 14:56:48.953,I'm trying to figure out how to install this lib on python (time),"im new to python and i was trying to install ""time"" library on python, i typed +pip install time +but the compiler said this +C:\Users\Giuseppe\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Python 3.6>pip install time ERROR: Could not find a version that satisfies the requirement time (from versions: none) ERROR: No matching distribution found for time +i dont know how to resolve, can anyone help me? please be the more simple u can cause im not too good in py, as i said im new, thx to everyone! P.S. -I want to keep as much UI code within the kv language file as possible to prevent my project from producing a speghetti-code sort of feel to it. I'm also rather new to Kivy development altogether so I appologize if this question has an official answer somewhere and I just missed it.","You might want to use a Carousel instead of ScreenManager, but if you want that logic while using the ScreenManager, you'll certainly have to write some python code to manage that in a subclass of it, then use it in kv as a normal ScreenManager. Using previous and next properties to get the right screen to switch to depending on the action. This kind of logic is better done in python, and that doesn't prevent using the widgets in kv after.",1.2,True,1,5956 -2019-02-21 14:37:29.177,is it possible to code in python inside android studio?,"is it possible to code in python inside android studio? -how can I do it. -I have an android app that I am try to develop. and I want to code some part in python. 
-2019-02-21 14:37:29.177,is it possible to code in python inside android studio?,"Is it possible to code in Python inside Android Studio?
-How can I do it? I have an Android app that I am trying to develop, and I want to code some part of it in Python.
-Thanks for the help","If you mean coding part of your Android application in Python (and another part in, for example, Java), that is not possible for now. However, you can write a Python script and include it in your project, then write the part of your application that will invoke it somehow. Also, you can use Android Studio as a text editor for Python scripts. To develop apps for Android in Python you have to use a proper library for it.",1.2,True,1,5957
-2019-02-22 09:08:55.793,"How to create .cpython-37 file, within __pycache__","I'm working on a project with a few scripts in the same directory. A __pycache__ folder has been created within that directory; it contains compiled versions of two of my scripts. This happened by accident and I do not know how I did it. One thing I do know is that I have imported functions between the two scripts that got compiled.
-I would like a third compiled Python script for a separate file; however, I do not want to import any modules (if that is even what triggers it). Does anyone know how I can manually create a .cpython-37 file? Any help is appreciated.","There is really no reason to worry about __pycache__ or *.pyc files - these are created and managed by the Python interpreter when it needs them and you cannot / should not worry about manually creating them. They contain a cached version of the compiled Python bytecode. Creating them manually makes no sense (and I am not aware of a way to do that), and you should probably let the interpreter decide when it makes sense to cache the bytecode and when it doesn't.
-In Python 3.x, __pycache__ directories are created for modules when they are imported by a different module. AFAIK Python will not create a __pycache__ entry when a script is run directly (e.g. a ""main"" script), only when it is imported as a module.",1.2,True,1,5958
-2019-02-22 10:05:07.620,Install python packages in windows server 2016 which has no internet connection,"I need to install Python packages in a Windows Server 2016 sandbox, which has no internet connection, for running a developed Python model in production.
-My laptop runs Windows 10; the model currently runs on my machine and I need to push it to the server.
-My question is how I can install all the required packages on my server, which has no internet connection.
-Thanks
-Mithun","A simple way is to install the same Python version on another machine that has internet access, and use pip normally on that machine. This will download a bunch of files and install them cleanly under Lib\site-packages of your Python installation.
-You can then copy that folder to the server's Python installation. If you want to be able to add packages later, you should keep both installations in sync: do not add or remove any package on the laptop without syncing with the server.",0.0,False,1,5959
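A variant of that answer which avoids copying site-packages by hand: pip itself can collect installable files on the connected machine and install them offline on the server (folder and file names here are illustrative):

    # on the laptop (with internet), using the same Python version as the server
    pip download -d wheelhouse -r requirements.txt

    # after copying the wheelhouse folder to the server, install offline there
    pip install --no-index --find-links wheelhouse -r requirements.txt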
-2019-02-22 18:47:07.843,How to write unit tests for text parser?,"For background, I am somewhat of a self-taught Python developer, with only some formal training from a few CS courses in school.
-In my job right now, I am working on a Python program that will automatically parse information from a very large text file (thousands of lines) that is the output of a simulation package. I would like to be doing test-driven development (TDD), but I am having a hard time understanding how to write proper unit tests.
-My trouble is that the outputs of some of my functions (units) are massive data structures that are parsed versions of the text file. I could go through and create those outputs manually and then test, but it would take a lot of time. The whole point of a parser is to save time and create structured outputs. The only testing I've done so far is manual trial and error, which is also cumbersome.
-So my question is: are there more intuitive ways to create tests for parsers?
-Thank you in advance for any help!","Usually parsers are tested using a regression testing system. You create sample input sets and verify that the output is correct. Then you put the input and output in libraries. Each time you modify the code, you run the regression test system over the library to see if anything changes.",0.6730655149877884,False,1,5960
-2019-02-22 20:17:16.640,Specific reasons to favor pip vs. conda when installing Python packages,"I use miniconda as my default python installation. What is the current (2019) wisdom regarding when to install something with conda vs. pip?
-My usual behavior is to install everything with pip, and only use conda if a package is not available through pip or the pip version doesn't work correctly.
-Are there advantages to always favoring conda install? Are there issues associated with mixing the two installers? What factors should I be considering?
-OBJECTIVITY: This is not an opinion-based question! My question is: when I have the option to install a python package with pip or conda, how do I make an informed decision? Not ""tell me which is better"", but ""why would I use one over the other, and will oscillating back & forth cause problems / inefficiencies?""","This is what I do:
-Activate your conda virtual env
-Use pip to install into your virtual env
-If you face any compatibility issues, use conda
-I recently ran into this when numpy / matplotlib freaked out and I used the conda build to resolve the issue.",0.3275988531455109,False,1,5961
-2019-02-24 14:21:54.997,how can I use python 3.6 if I have python 3.7?,"I'm trying to make a Discord bot, and I read that I need an older version of Python for my code to work. I've tried using ""import discord"" in IDLE, but an error message keeps coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","Just install it in a different folder (e.g. if the current one is in C:\Users\noob\AppData\Local\Programs\Python\Python37, install 3.6 to C:\Users\noob\AppData\Local\Programs\Python\Python36).
-Now, when you want to run a script, right-click the file, and under ""edit with IDLE"" there will be multiple versions to choose from. Works on my machine :)",0.0,False,2,5962
-2019-02-24 14:21:54.997,how can I use python 3.6 if I have python 3.7?,"I'm trying to make a Discord bot, and I read that I need an older version of Python for my code to work. I've tried using ""import discord"" in IDLE, but an error message keeps coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","Install it in a different folder than your old Python 3.6, then update the path.
-Use Virtualenv and/or Pyenv
-Use Docker
-Hope it helps!",0.0,False,2,5962
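One more option worth sketching alongside those two answers: on Windows, the py launcher installed by the python.org installers picks the interpreter version per invocation, with no folder juggling (discord.py being the actual PyPI name of the discord module):

    py -3.6 bot.py                       # run the script with Python 3.6
    py -3.7 bot.py                       # or with Python 3.7
    py -3.6 -m pip install discord.py    # install the package into the 3.6 installation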
-2019-02-25 15:00:24.023,"Is a Pyramid ""model"" also a Pyramid ""resource""?","I'm currently in the process of learning how to use the Python Pyramid web framework, and have found the documentation to be quite excellent.
-I have, however, hit a stumbling block when it comes to distinguishing the idea of a ""model"" (i.e. a class defined under SQLAlchemy's declarative system) from the idea of a ""resource"" (i.e. a means of defining access control lists on views for use with Pyramid's auth system).
-I understand the above statements seem to show that I already understand the difference, but I'm having trouble understanding whether I should be making models resources (by adding the __acl__ attribute directly in the model class) or creating a separate resource class (which has the proper __parent__ and __name__ attributes) which represents the access to a view which uses the model.
-Any guidance is appreciated.","I'm having trouble understanding whether I should be making models resources (by adding the acl attribute directly in the model class) or creating a separate resource class
-The answer depends on what level of coupling you want to have. For a simple app, I would recommend making models resources, just for simplicity's sake. But for a complex app with a high level of cohesion and a low level of coupling, it would be better to have models separated from resources.",0.2012947653214861,False,1,5963
-2019-02-25 22:42:31.903,Python Gtk3 - Custom Statusbar w/ Progressbar,"Currently I am working to learn how to use Gtk3 with Python 3.6. So far I have been able to use a combination of resources to piece together a project I am working on: some old 2.0 references, some shallow 3.0 reference guides, and the python3 interpreter's help function.
-However, I am stuck on how I could customise the statusbar to display a progressbar. Would I have to modify the contents of the statusbar to add it at the end (so it shows up on the right side), or is it better to build my own statusbar?
-Also, how could I modify the progressbar's color? Nothing in the materials lists a method/property for it.","GtkStatusbar is a subclass of GtkBox. You can use any GtkBox method, including pack_start and pack_end, or even add, which is a method of GtkContainer.
-Thus you can simply add your progressbar to the statusbar.",1.2,True,1,5964
-2019-02-26 04:59:25.937,Can a consumer read records from a partition that stores data of particular key value?,"Instead of creating many topics, I'm creating a partition for each consumer and storing data using a key. So, is there a way to make a consumer in a consumer group read from the partition that stores data for a specific key? If so, can you suggest how it can be done using kafka-python (or any other library)?","Instead of using the subscription and the related consumer group logic, you can use the ""assign"" logic (it's provided by the Kafka consumer Java client, for example).
-While with subscription to a topic and being part of a consumer group the partitions are automatically assigned to consumers and re-balanced when a new consumer joins or leaves, it's different using assign.
-With assign, the consumer asks to be assigned to a specific partition. It's not part of any consumer group. It also means that you are in charge of handling rebalancing: for example, if consumer 1 gets assigned partition 1 but at some point it crashes, partition 1 won't be reassigned automatically to another consumer. It's up to you to write and handle the logic for restarting that consumer (or another one) to get messages from partition 1.",0.0,False,1,5965
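A minimal kafka-python sketch of the assign-based reading that answer describes; broker address, topic name, and partition number are placeholders. Note there is no group_id, so no group rebalancing happens:

    from kafka import KafkaConsumer, TopicPartition

    consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
    consumer.assign([TopicPartition("my-topic", 2)])  # read only partition 2

    for message in consumer:
        print(message.partition, message.key, message.value)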
-2019-02-26 08:57:02.207,how to increase fps for raspberry pi for object detection,"I'm having low fps for real-time object detection on my Raspberry Pi.
-I trained the yolo-darkflow object detector on my own data set using my Windows 10 laptop. When I tested the model for real-time detection on my laptop with a webcam, it worked fine with high fps.
-However, when trying to test it on my Raspberry Pi, which runs Raspbian OS, it gives a very low fps rate of about 0.3, but when I only use the webcam without YOLO it works fine with fast frames. Also, when I use the TensorFlow API for object detection with the webcam on the Pi, it works fine with high fps.
-Can someone suggest something, please? Is the reason related to the YOLO models, OpenCV, or Python? How can I make the fps rate higher for object detection with the webcam?",The Raspberry Pi does not have the GPU processing power and because of that it is very hard for it to do image recognition at a high fps.,0.0,False,2,5966
-2019-02-26 08:57:02.207,how to increase fps for raspberry pi for object detection,"I'm having low fps for real-time object detection on my Raspberry Pi.
-I trained the yolo-darkflow object detector on my own data set using my Windows 10 laptop. When I tested the model for real-time detection on my laptop with a webcam, it worked fine with high fps.
-However, when trying to test it on my Raspberry Pi, which runs Raspbian OS, it gives a very low fps rate of about 0.3, but when I only use the webcam without YOLO it works fine with fast frames. Also, when I use the TensorFlow API for object detection with the webcam on the Pi, it works fine with high fps.
-Can someone suggest something, please? Is the reason related to the YOLO models, OpenCV, or Python? How can I make the fps rate higher for object detection with the webcam?","My detector on a Raspberry Pi without any accelerator can reach 5 FPS.
-I used SSD MobileNet and quantized it after training.
-TensorFlow Lite supplies an object detection demo that can reach about 8 FPS on a Raspberry Pi 4.",0.0,False,2,5966
-2019-02-26 10:41:52.910,"Python3: FileNotFoundError: [Errno 2] No such file or directory: 'train.txt', even with complete path","I'm currently working with Python 3 in a Jupyter Notebook. I try to load a text file which is in the exact same directory as my notebook, but it still doesn't find it. My line of code is:
-text_data = prepare_text('train.txt')
-and the error is a typical
-FileNotFoundError: [Errno 2] No such file or directory: 'train.txt'
-I've already tried entering the full path to my text file, but I still get the same error.
-Does anyone know how to solve this?","I found the answer. Windows put a second .txt at the end of the file name, so I should have used train.txt.txt instead.",0.2012947653214861,False,1,5967
-2019-02-26 16:48:18.673,Write own stemmer for stemming,"I have a dataset of 27 files, each containing opcodes. I want to use stemming to map all versions of similar opcodes onto the same opcode. For example: push, pusha, pushb, etc. would all be mapped to push.
-My dictionary contains 27 keys, and each key has a list of opcodes as its value. Since the values contain opcodes and not normal English words, I cannot use the regular stemmer module. I need to write my own stemmer code. Also, I cannot hard-code a custom dictionary that maps different versions of the opcodes to the root opcode, because I have a huge dataset.
-I think a regex would be a good idea, but I do not know how to use it.
Can anyone help me with this, or with any other idea for writing my own stemmer code?","I would recommend looking at the Levenshtein distance metric - it measures the distance between two words in terms of character insertions, deletions, and replacements (so push and pusha would be distance 1 apart if you do the ~most normal thing of weighing insertions = deletions = replacements = 1 each). Based on the example you wrote, you could try just setting up categories that are all distance 1 from each other. However, I don't know if all of your equivalent opcodes will be so similar - if they're not, Levenshtein might not work.",0.0,False,1,5968
-2019-02-26 18:30:41.820,Elementree Fromstring and iterparse in Python 3.x,"I am able to parse from a file using this method:
-for event, elem in ET.iterparse(file_path, events=(""start"", ""end"")):
-But how can I do the same with the fromstring function? The XML content is stored in a variable now instead of a file, but I still want to have the events as before.","From the documentation for the iterparse method:
-...Parses an XML section into an element tree incrementally, and reports what's going on to the user. source is a filename or file object containing XML data...
-I've never used the etree python module, but ""or file object"" says to me that this method accepts an open file-like object as well as a file name. It's an easy thing to construct a file-like object around a string to pass as input to a method like this.
-Take a look at the StringIO module.",0.0,False,1,5969
-2019-02-26 21:57:40.743,Why should I use tf.data?,"I'm learning TensorFlow, and the tf.data API confuses me. It is apparently better when dealing with large datasets, but when using the dataset, it has to be converted back into a tensor. But why not just use a tensor in the first place? Why and when should we use tf.data?
-Why isn't it possible to have tf.data return the entire dataset, instead of processing it through a for loop? When just minimizing a function of the dataset (using something like tf.losses.mean_squared_error), I usually input the data through a tensor or a numpy array, and I don't know how to input data through a for loop. How would I do this?","The tf.data module has specific tools which help in building an input pipeline for your ML model. An input pipeline takes in the raw data, processes it, and then feeds it to the model.
-When should I use the tf.data module?
-The tf.data module is useful when you have a large dataset in the form of a file such as .csv or .tfrecord. tf.data.Dataset can perform shuffling and batching of samples efficiently. It is useful for large datasets as well as small ones, and it can combine train and test datasets.
-How can I create batches and iterate through them for training?
-I think you can efficiently do this with NumPy and the np.reshape method. Pandas can read data files for you. Then you just need a for ... in ... loop to get each batch and pass it to your model.
-How can I feed NumPy data to a TensorFlow model?
-There are two options: use tf.placeholder() or tf.data.Dataset.
-The tf.data.Dataset is a much easier implementation. I recommend using it. It also has a good set of methods.
-The tf.placeholder creates a placeholder tensor which feeds the data to a TensorFlow graph. This process would consume more time feeding in the data.",1.2,True,1,5970
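A short tf.data sketch for the batching part of that answer, assuming the data already sits in NumPy arrays; in TensorFlow 2.x the dataset is directly iterable:

    import numpy as np
    import tensorflow as tf

    features = np.random.rand(1000, 8).astype(np.float32)   # placeholder data
    labels = np.random.randint(0, 2, size=1000)

    dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
               .shuffle(1000)
               .batch(32))

    for batch_x, batch_y in dataset:   # one epoch, batch by batch
        pass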
-2019-02-27 00:06:57.810,Pipenv: Multiple Environments,"Right now I'm using virtualenv and am just switching over to Pipenv. Today in virtualenv I load different environment variables and settings depending on whether I'm in development, production, or testing, by setting DJANGO_SETTINGS_MODULE to myproject.settings.development, myproject.settings.production, or myproject.settings.testing.
-I'm aware that I can set an .env file, but how can I have multiple versions of that .env file?","You should create different .env files with different prefixes depending on the environment, such as production.env or testing.env. With pipenv, you can use the PIPENV_DONT_LOAD_ENV=1 environment variable to prevent pipenv shell from automatically exporting the .env file, and combine this with export $(cat .env | xargs).
-export $(cat production.env | xargs) && PIPENV_DONT_LOAD_ENV=1 pipenv shell would configure your environment variables for production and then start a shell in the virtual environment.",1.2,True,1,5971
-2019-02-27 05:59:35.137,How to architect a GUI application with UART comms which stays responsive to the user,"I'm writing an application in PyQt5 which will be used for calibration and testing of a product. The important details:
-The product under test uses an old-school UART/serial communication link at 9600 baud.
-...and the test / calibration operation involves communicating with another device which has a UART/serial communication link at 300 baud(!)
-In both cases, the communication protocol is ASCII text with messages terminated by a newline \r\n.
-During the test/calibration cycle the GUI needs to communicate with the devices, take readings, and log those readings to various boxes on the screen. The trouble is, with the slow UART communications (and the long time-outs if there is a comms drop-out), how do I keep the GUI responsive?
-The Minimally Acceptable solution (already working) is to create a GUI which communicates over the serial port, but the user interface becomes decidedly sluggish and herky-jerky while the GUI is waiting for calls to serial.read() to either complete or time out.
-The Desired solution is a GUI which has a nice smooth responsive feel to it, even while it is transmitting and receiving serial data.
-The Stretch Goal solution is a GUI which will log every single character of the serial communications to a text display used for debugging, while still providing some nice ""message-level"" abstraction for the actual logic of the application.
-My present ""minimally acceptable"" implementation uses a state machine where I run a series of short functions, typically including the serial.write() and serial.read() commands, with pauses to allow the GUI to update. But the state machine makes the GUI logic somewhat tricky to follow; the code would be much easier to understand if the program flow for communicating with the device were written in a simple linear fashion.
-I'm really hesitant to sprinkle a bunch of processEvents() calls throughout the code. And even those don't help when waiting for serial.read(). So the correct solution probably involves threading, signals, and slots, but I'm guessing that ""threading"" has the same two Golden Rules as ""optimization"": Rule 1: Don't do it. Rule 2 (experts only): Don't do it yet.
-Are there any existing architectures or design patterns to use as a starting point for this type of application?","Okay, for the past few days I've been digging and figured out how to do this. Since there haven't been any responses, and I do think this question could apply to others, I'll go ahead and post my solution.
Briefly:
-Yes, the best way to solve this is with PyQt Threads, using Signals and Slots to communicate between the threads.
-For basic function (the ""Desired"" solution above) just follow the existing basic design pattern for PyQt multithreaded GUI applications:
-A GUI thread whose only job is to display data and relay user inputs / commands, and,
-A worker thread that does everything else (in this case, including the serial comms).
-One stumbling point along the way: I'd have loved to write the worker thread as one linear flow of code, but unfortunately that's not possible because the worker thread needs to get info from the GUI at times.
-The only way to get data back and forth between the two threads is via Signals and Slots, and the Slots (i.e. the receiving end) must be callables, so there was no way for me to implement some type of getdata() operation in the middle of a function. Instead, the worker thread had to be constructed as a bunch of individual functions, each one of which gets kicked off after it receives the appropriate Signal from the GUI.
-Getting the serial data monitoring function (the ""Stretch Goal"" above) was actually pretty easy - just have the low-level serial transmit and receive routines already in my code emit Signals for that data, and the GUI thread receives and logs those Signals.
-All in all it ended up being a pretty straightforward application of existing principles, but I'm writing it down so hopefully the next guy doesn't have to go down as many blind alleys as I did along the way.",0.0,False,1,5972
-2019-02-27 13:33:17.083,how to register users of different kinds using different tables in django?,"I'm new to Django. I want to register users using different tables for different kinds of users: students, teaching staff, and non-teaching staff, so 3 tables.
-How can I do that instead of using the default auth_users table for registration?","In Django authentication, there is a Group model available which has a many-to-many relationship with the User model. You can add students, teaching staff, and non-teaching staff to the Group model to separate users by their type.",0.0,False,2,5973
-2019-02-27 13:33:17.083,how to register users of different kinds using different tables in django?,"I'm new to Django. I want to register users using different tables for different kinds of users: students, teaching staff, and non-teaching staff, so 3 tables.
-How can I do that instead of using the default auth_users table for registration?","cf Sam's answer for the proper solutions from a technical POV. From a design POV, ""student"", ""teaching staff"" etc. are not entities but different roles a user can have. One curious thing about living persons and real-life things in general is that they tend to evolve over time without any respect for our well-defined specifications and classifications - for example, it's not uncommon for a student to also have teaching duties at some point, for a teacher to also be studying some other topic, or for a teacher to stop teaching and switch to more administrative tasks. If you design your model with distinct entities instead of one single entity and distinct roles, it won't properly accommodate those kinds of situations (and no, having one account as student and one as teacher is not a proper solution either).
-That's why the default user model in Django is based on one single entity (the User model) and features allowing role definitions (groups and permissions) in such a way that one user can have many roles, whether at the same time or in succession.",0.0,False,2,5973
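A brief sketch of the Group-based role check from the first answer (usernames and group names are illustrative):

    from django.contrib.auth.models import Group, User

    students, _ = Group.objects.get_or_create(name="students")
    user = User.objects.create_user("alice", password="...")
    user.groups.add(students)

    print(user.groups.filter(name="students").exists())  # True: a role, not a separate table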
-2019-02-28 01:29:15.947,How do I know if a file has finished copying?,"I've been given a simple file-conversion task: whenever an MP4 file is in a certain directory, I do some magic to it and move it to a different directory. Nice and straightforward, and easy to automate.
-However, if a user is copying some huge file into the directory, I worry that my script might catch it mid-copy and only have half of the file to work with.
-Is there a way, using Python 3 on Windows, to check whether a file is done copying (in other words, no process is currently writing to it)?
-EDIT: To clarify, I have no idea how the files are getting there: my script just needs to watch a shared network folder and process files that are put there. They might be copied from a local folder I don't have access to, or placed through SCP, or downloaded from the web; all I know is the destination.","You could try comparing the size of the file over time: watch for new files in the folder, capture the name of a new file, and check whether its size still increases within some time window. If you have a script, you could show the code.",0.0,False,1,5974
-2019-02-28 03:04:27.143,Viewing Graph from saved .pbtxt file on Tensorboard,"I just have a graph.pbtxt file. I want to view the graph in TensorBoard, but I am not aware of how to do that. Do I have to write a Python script, or can I do it from the terminal itself? Kindly help me understand the steps involved.","Open TensorBoard and use the ""Upload"" button on the left to upload the pbtxt file; this will directly open the graph in TensorBoard.",0.9866142981514304,False,1,5975
-2019-02-28 16:24:27.333,Intersection of interpol1d objects,"I have 2 cumulative distributions that I want to find the intersection of. To get an underlying function, I used the scipy interp1d function. What I'm trying to figure out now is how to calculate their intersection. I'm not sure how to do it. I tried fsolve, but I can't find how to restrict the range in which to search for a solution (the domain is limited).","Use scipy.optimize.brentq for bracketed root-finding:
-brentq(lambda x: interp1d(xx, yy)(x) - interp1d(xxx, yyy)(x), -1, 1)",0.0,False,1,5976
-2019-02-28 18:54:51.120,How to make depth of nii images equal?,"I have some nii images, each having the same height and width but a different depth. I need to make the depth of each image equal; how can I do that? Also, I didn't find any Python code which can help me.","Once you have defined the depth you want for all volumes, let it be D, you can instantiate an image (called a volume when D > 1) of dimensions W x H x D for every volume you have.
-Then you can fill every such volume, pixel by pixel, by mapping the pixel position onto the original volume and retrieving the value of the pixel by interpolating the values of neighboring pixels.
-For example, a pixel (i_x, i_y, i_z) in the new volume will be mapped to a point (i_x, i_y, i_z') in the old volume. One of the simplest interpolation methods is linear interpolation: the value at (i_x, i_y, i_z) is a weighted average of the values at (i_x, i_y, floor(i_z')) and (i_x, i_y, floor(i_z') + 1).",0.0,False,1,5977
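A NumPy sketch of that linear interpolation along the depth axis, for volumes stored as (H, W, D) arrays:

    import numpy as np

    def resample_depth(vol, new_d):
        # resample a (H, W, D) volume to (H, W, new_d) along the last axis
        d = vol.shape[2]
        z = np.linspace(0, d - 1, new_d)   # the i_z' for every new slice index
        z0 = np.floor(z).astype(int)
        z1 = np.minimum(z0 + 1, d - 1)
        w = z - z0                          # interpolation weights
        return vol[:, :, z0] * (1 - w) + vol[:, :, z1] * w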
-2019-02-28 21:02:20.790,Tensorflow data pipeline: Slow with caching to disk - how to improve evaluation performance?,"I've built a data pipeline. Pseudo code is as follows:
-dataset ->
-dataset = augment(dataset)
-dataset = dataset.batch(35).prefetch(1)
-dataset = set_from_generator(to_feed_dict(dataset)) # expensive op
-dataset = Cache('/tmp', dataset)
-dataset = dataset.unbatch()
-dataset = dataset.shuffle(64).batch(256).prefetch(1)
-to_feed_dict(dataset)
-Steps 1 to 5 are required to generate the pretrained model outputs. I cache them, as they do not change throughout epochs (the pretrained model weights are not updated). Steps 5 to 8 prepare the dataset for training.
-Different batch sizes have to be used, as the pretrained model inputs have a much higher dimensionality than the outputs.
-The first epoch is slow, as it has to evaluate the pretrained model on every input item to generate templates and save them to disk. Later epochs are faster, yet they're still quite slow - I suspect the bottleneck is reading the disk cache.
-What could be improved in this data pipeline to reduce the issue?
-Thank you!","prefetch(1) means that only one element will be prefetched; I think you may want to make it as big as the batch size or larger.
-After the first cache you may try to cache a second time, but without providing a path, so it would cache some of it in memory.
-Maybe your HDD is just slow? ;)
-Another idea: you could manually write a compressed TFRecord after steps 1-4 and then read it with another dataset. A compressed file has lower I/O but causes higher CPU usage.",0.0,False,1,5978
-2019-03-01 11:32:59.497,Get data from an .asp file,"My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The URL is a .asp file.
-She has to put the data in an Excel sheet. The entries are student names, numbers, ID card numbers, telephone, etc. There are thousands of entries. HR students alone have 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste.
-I can handle Python openpyxl reasonably, and I have heard of web scraping, which I believe Python can do.
-I don't know what .asp is.
-Could you please give me some tips or pointers about how to get the data with Python?
-Can I automate this task?
-Is this a case for MySQL? (About which I know nothing.)","Try using the tool called Octoparse.
-Disclaimer: I've never used it myself, but only came close to using it. So, from my knowledge of its features, I think it would be useful for your need.",0.2012947653214861,False,2,5979
-2019-03-01 11:32:59.497,Get data from an .asp file,"My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The URL is a .asp file.
-She has to put the data in an Excel sheet. The entries are student names, numbers, ID card numbers, telephone, etc. There are thousands of entries. HR students alone have 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste.
-I can handle Python openpyxl reasonably, and I have heard of web scraping, which I believe Python can do.
-I don't know what .asp is.
-Could you please give me some tips or pointers about how to get the data with Python?
-Can I automate this task?
-Is this a case for MySQL? (About which I know nothing.)","This is a really broad question and not really in the style of Stack Overflow. To give you some pointers anyway: in the end, .asp files, as far as I know, behave like normal websites. Normal websites are interpreted in the browser as HTML, CSS, etc. This can be parsed with Python. There are two approaches to this that I have used in the past that work. One is to use a library like requests to get the HTML of a page and then read it using the BeautifulSoup library. This gets more complex if you need to visit authenticated pages. The other option is to use Selenium for Python. This module is more a tool to automate browsing itself. You can use it to automate visiting the website and entering login credentials, and then read content on the page. There are probably more options, which is why this question is too broad. Good luck with your project though!
-EDIT: You do not need MySQL for this. Especially not if the required output is an Excel file, which I would generate as a CSV instead, because standard Python works better with CSV files than with Excel.",0.2012947653214861,False,2,5979
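A rough requests + BeautifulSoup sketch of the first approach in that answer; the URLs and form field names are invented and would have to be read from the real login page:

    import requests
    from bs4 import BeautifulSoup

    session = requests.Session()   # keeps the login cookie across requests
    session.post("https://centre.example/login.asp",
                 data={"username": "...", "password": "..."})

    rows = []
    for page in range(1, 71):      # e.g. the 70 result pages mentioned above
        html = session.get(f"https://centre.example/students.asp?page={page}").text
        soup = BeautifulSoup(html, "html.parser")
        for tr in soup.select("table tr"):
            rows.append([td.get_text(strip=True) for td in tr.find_all("td")])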
-2019-03-01 22:45:49.617,Pygame/Python/Terminal/Mac related,"I'm a beginner; I have really hit a brick wall and would greatly appreciate any advice someone more advanced can offer.
-I have been having a number of extremely frustrating issues the past few days, which I have gone round and round Google trying to solve, and tried all sorts of things to no avail.
-Problem 1)
-I can't import pygame in IDLE, with the error:
-ModuleNotFoundError: No module named 'pygame' - even though it is definitely installed, because in the terminal, if I ask pip3 to install pygame it says:
-Requirement already satisfied: pygame in /usr/local/lib/python3.7/site-packages (1.9.4)
-I think there may be a problem with several conflicting versions of Python on my computer, as when I type sys.path in IDLE (which by the way displays Python 3.7.2), the following are listed:
-'/Users/myname/Documents', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload', '/Users/myname/Library/Python/3.7/lib/python/site-packages', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages'
-So am I right in thinking pygame is in the python3.7/site-packages version, and this is why IDLE won't import it? I don't know; I'm just trying to make sense of this. I have absolutely no clue how to solve this, ""re-set the path"" or whatever. I don't even know how to find all of these versions of Python, as only one appears in my Applications folder; where are the rest?
-Problem 2)
-Apparently there should be a Python 2.7 system version installed on every Mac, which is vital to the running of Python regardless of the development environment you use. Yet all of my versions of Python seem to be in the Library/downloaded versions. Does this mean my system version of Python is gone? I put the computer in recovery mode today and did a reinstall of the macOS Mojave system, so shouldn't any possibly lost version of Python 2.7 be back on the system now?
-Problem 3)
-When I go to the terminal, frequently every command I type is 'not found'.
-I have sometimes found a temporary solution is typing:
-export PATH=""/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin""
-but the problems always return!
-As I say, I also did a system reinstall today, but that has not helped at all!
-Can anybody please help me with these queries? I am really at the end of my tether and quite lost; please forgive my programming ignorance. Many thanks.","You should actually add the export PATH=""/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"" to your .bash_profile (if you are using bash). Do this by opening your terminal and verifying that it says ""bash"" at the top. If it doesn't, you may have a .zprofile instead. Type ls -al and it will list all the invisible files. If you have .bash_profile listed, use that one. If you have .zprofile, use that.
-Type nano .bash_profile to open and edit the profile and add the command to the end of it. This will permanently add the path to your profile after you restart the terminal.
-Use ^X to exit nano and type Y to save your changes. Then you can check that it works when you try to run the program from IDLE.",0.0,False,1,5980
-2019-03-03 16:50:01.227,Force screen session to use specific version of python,"I am using a screen session on my server. When I ask which python inside the screen, I see it is using the default /opt/anaconda2/bin/python on my server, but outside the screen, when I ask which python, I get ~/anaconda2/bin/python. I want to use the same Python inside the screen, but I don't know how to set it. Both paths are available in $PATH.","You could do either one of the following:
-Use a virtual environment (install virtualenv). You can specify the version of Python you want to use when creating the virtual environment with -p /opt/anaconda2/bin/python.
-Use an alias: alias python=/opt/anaconda2/bin/python.",0.3869120172231254,False,1,5981
-2019-03-04 17:51:31.130,How can i remove an object in python?,"I'm trying to create a chess simulator.
-Consider this scenario:
-There is a black rook (an instance of the Rook class) on square 2B, called rook1.
-There is a white rook on square 2C, called rook2.
-When the player moves rook1 to square 2C, I should remove the rook2 object from memory completely. How can I do that?
-P.S. I've already tried del rook2, but I don't know why it doesn't work.","Trying to remove objects from memory is the wrong way to go. Python offers no option to do that manually, and it would be the wrong operation to perform anyway.
-You need to alter whatever data structure represents your chess board so that it represents a game state where there is a black rook at c2 and no piece at b2, rather than a game state where there is a black rook at b2 and a white rook at c2. In a reasonable Python beginner-project implementation of a chess board, this probably means assigning to cells in a list of lists. No objects need to be manually removed from memory to do this.
-Having rook1 and rook2 variables referring to your rooks is unnecessary and probably counterproductive.",0.999329299739067,False,1,5982
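A tiny sketch of that board-centric advice, using a dictionary keyed by square name for brevity; the captured piece simply stops being referenced, and Python reclaims it on its own:

    board = {"b2": "black rook", "c2": "white rook"}

    def move(board, src, dst):
        board[dst] = board.pop(src)   # overwriting dst discards the captured piece

    move(board, "b2", "c2")
    print(board)                      # {'c2': 'black rook'} - the white rook is gone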
-2019-03-04 22:00:24.150,Text classification beyond the keyword dependency and inferring the actual meaning,"I am trying to develop a text classifier that will classify a piece of text as Private or Public. Take medical or health information as an example domain. A typical classifier that I can think of considers keywords as the main distinguisher, right? What about a scenario like the one below? What if both pieces of text contain similar keywords but carry different meanings?
-The following piece of text reveals someone's private (health) situation (the patient has cancer):
-I've been to two clinics and my pcp. I've had an ultrasound only to be told it's a resolving cyst or a hematoma, but it's getting larger and starting to make my leg ache. The PCP said it can't be a cyst because it started out way too big and I swear I have NEVER injured my leg, not even a bump. I am now scared and afraid of cancer. I noticed a slightly uncomfortable sensation only when squatting down about 9 months ago. 3 months ago I went to squat down to put away laundry and it kinda hurt. The pain prompted me to examine my leg and that is when I noticed a lump at the bottom of my calf muscle and flexing only made it more noticeable. Eventually after four clinic visits, an ultrasound and one pcp the result seems to be positive and the mass is getting larger.
-[Private] (Correct Classification)
-The following piece of text is a comment from a doctor which is definitely not revealing his own health situation. It exposes the weaknesses of a typical classifier model:
-Don't be scared and do not assume anything bad as cancer. I have gone through several cases in my clinic and it seems familiar to me. As you mentioned it might be a cyst or a hematoma and it's getting larger, it must need some additional diagnosis such as biopsy. Having an ache in that area or the size of the lump does not really tells anything bad. You should visit specialized clinics few more times and go under some specific tests such as biopsy, CT scan, pcp and ultrasound before that lump become more larger.
-[Private] (Which is the wrong classification. It should be [Public])
-The second paragraph was classified as private by all of my current classifiers, for obvious reasons. Similar keywords, valid word sequences, and the presence of subjects seemed to make the classifiers very confused. Even both of the texts contain subjects like I, you (nouns, pronouns), etc. I thought about everything from Word2Vec to Doc2Vec, from inferring meaning to semantic embeddings, but I can't think of a solution approach that best suits this problem.
-Any idea which way I should handle the classification problem? Thanks in advance.
-Progress so far:
-I collected the data from a public source where patients/victims usually post their own situations and doctors/well-wishers reply to those. I assumed while crawling that posts belong to my private class and comments belong to the public class. Altogether I started with 5K+5K posts/comments and got around 60% accuracy with a naive Bayes classifier without any major preprocessing. I will try a neural network soon. But before feeding the data into any classifier, I just want to know how I can preprocess it better to put reasonable weights on either class for better distinction.","(1) Bayes is indeed a weak classifier - I'd try SVM. If you see improvement, then further improvement can be achieved using a neural network (and perhaps deep learning).
-(2) Feature engineering - use TF-IDF, and try other things (many people suggest Word2Vec, although I personally tried it and it did not improve). Also, you can remove stop words.
-One thing to consider, because you give two anecdotes, is to measure objectively the level of agreement between human beings on the task. It is sometimes overlooked that two people given the same text can disagree on labels (someone might say that a specific document is private although it is public). Just a point to notice - because if e.g. the level of agreement is 65%, then it will be very difficult to build an algorithm that is more accurate.",-0.2655860252697744,False,1,5983
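A compact scikit-learn sketch of the TF-IDF-plus-SVM baseline recommended in points (1) and (2); the two texts stand in for the 5K+5K labelled posts and comments:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    texts = ["I am now scared and afraid of cancer...",
             "Don't be scared and do not assume anything bad..."]
    labels = ["private", "public"]

    clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
    clf.fit(texts, labels)
    print(clf.predict(["a new help-seeking post to classify"]))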
-2019-03-05 03:08:47.917,How do you profile a Python script from Windows command line using PyPy and vmprof?,"I have a Python script that I want to profile using vmprof to figure out what parts of the code are slow. Since PyPy is generally faster, I also want to profile the script while it is using the PyPy JIT. If the script is named myscript.py, how do you structure the command on the command line to do this?
-I have already installed vmprof using
-pip install vmprof","I would be surprised if it works, but the command is pypy -m vmprof myscript.py. I would expect it to crash, saying vmprof is not supported on Windows.",0.0,False,1,5984
-2019-03-06 00:43:24.310,How to update python 3.6 to 3.7 using Mac terminal,"OK, I was afraid to use the terminal, so I installed the python-3.7.2-macosx10.9 package downloaded from python.org.
-I ran the certificate and shell profile scripts, and everything seems fine. Now ""which python3"" shows the path has changed from 3.6 to the new 3.7.2.
-So everything seems fine, correct?
-My first question (of 2) is: what's going on with the old Python 3.6 folder still in the Applications folder? Can you just delete it safely? Why, when you install a new version, does it not at least ask whether you want to update or install and keep both versions?
-Second question: how would you do this from the terminal?
-I see the first step is to sudo to root. I've forgotten the rest. But from the terminal, would this simply add the new version and leave the older one, like the package installer? It's pretty simple to use the package installer and then delete a folder.
-So, thanks in advance. I'm new to Python and don't have much confidence using the terminal and all the powerful shell commands.
-And yeah, I see all the Brew enthusiasts. I DON'T want to use Brew for the moment. The Python snakes' nest of pathways is confusing enough for now, and I don't want to get lost in a zillion more pathways from Brew. I love Brew, leave me alone.","Each version of the Python installation is independent of the others. So it is safe to delete the version you don't want, but be cautious, because it can lead to broken dependencies :-).
-You can run any version by adding the specific version, i.e. $ python3.6 or $ python3.7.
-The best approach is to use virtual environments for your projects to enhance consistency. See pipenv.",0.0,False,1,5985
-2019-03-07 02:42:18.347,How do I figure out what dependencies to install when I copy my Django app from one system to another?,"I'm using Django and Python 3.7. I want to write a script to help me easily migrate my application from my local machine (a Mac, High Sierra) to a CentOS Linux instance. I'm using a virtual environment in both places. There are many things that need to be done here, but to keep the question specific: how do I determine, on my remote machine (where I'm deploying my project to), what dependencies are lacking? I'm using rsync to copy the files (minus the virtual environment).","On the source system execute pip freeze > requirements.txt, then copy the requirements.txt to the target system, and then on the target system install all the dependencies with pip install -r requirements.txt. Of course, you will need to activate the virtual environments on both systems before executing the pip commands.
-If you are using a source code management system like git, it is a good idea to keep the requirements.txt up to date in your source code repository.",1.2,True,1,5986
-2019-03-07 10:03:42.277,Does angular server and flask server have both to be running at the same?,"I'm new to both Angular and the Flask framework, so please be patient with me.
-I'm trying to build a web app with Flask as the backend server and Angular for the frontend (I haven't started it yet), and while gathering info and looking at tutorials and some documentation (a little bit), I'm wondering:
-Do the Angular server and the Flask server both need to be running at the same time, or will Flask alone be enough? Note that I want to send data from the server to the frontend for display, and collect data from users and send it to the backend.
-I noticed some people building the Angular app and using the dist files, but I don't know exactly how that works.
-So can you suggest what I should do or how to proceed with this?
-Thank you ^^","Angular does not need a server. It's a client-side framework, so it can be served by any server, such as Flask. It's just that in most tutorials the backend is served by nodejs, not Flask.",1.2,True,1,5987
-2019-03-08 19:25:09.250,Change color of single word in Tk label widget,"I would like to change the font color of a single word in a Tkinter label widget.
-I understand that something similar to what I would like to do can be achieved with a Text widget, for example making the word ""YELLOW"" show in yellow:
-self.text.tag_config(""tag_yel"", fg=clr_yellow)
-self.text.highligh_pattern(""YELLOW"", ""tag_yel"")
-But my text is static, and all I want is to have the word ""YELLOW"" shown in a yellow font and ""RED"" in a red font, and I cannot seem to figure out how to change the text color without changing it all with label.config(fg=clr).
-Any help would be appreciated.","You cannot do what you want. A label supports only a single foreground color and a single background color. The solution is to use a text or canvas widget, or to use two separate labels.",1.2,True,1,5988
-2019-03-11 12:10:11.213,Running python directly in terminal,"Is it possible to execute short Python expressions in one line in the terminal, without passing a file?
-e.g. (borrowing from how I would write an awk expression)
-python 'print(""hello world"")'","python3 -c ""print('Hello')""
-Use the -c flag as above.",1.2,True,2,5989
-2019-03-11 12:10:11.213,Running python directly in terminal,"Is it possible to execute short Python expressions in one line in the terminal, without passing a file?
-e.g. (borrowing from how I would write an awk expression)
-python 'print(""hello world"")'","For completeness, I found you can also feed a here-string to python.
-python <<< 'print(""hello world"")'",0.0,False,2,5989
-2019-03-11 13:21:12.590,How to save and load my neural network model after training along with weights in python?,"I have trained a single-layer neural network model in Python (a simple model without Keras or TensorFlow).
-How can I save it after training, along with its weights, and how do I load it later?","So you write it down yourself. You need some simple steps:
-In your code for the neural network, store the weights in a variable. It can be done simply by using self.weights. The weights are numpy ndarrays. For example, if the weights are between a layer with 10 neurons and a layer with 100 neurons, they form a 10 * 100 (or 100 * 10) ndarray.
-Use numpy.save to save the ndarray.
-For the next use of your network, use numpy.load to load the weights.
-In the first initialization of your network, use the weights you've loaded.
-Don't forget: if your network is trained, the weights should be frozen. That can be done by zeroing the learning rate.",0.1352210990936997,False,1,5990
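A small NumPy sketch of that save/load cycle, bundling every weight array into a single file (the shapes follow the 10-to-100-neuron example in the answer):

    import numpy as np

    weights = {"w1": np.random.randn(10, 100), "b1": np.zeros(100)}
    np.savez("model.npz", **weights)        # save each array under its name

    loaded = np.load("model.npz")
    restored = {name: loaded[name] for name in loaded.files}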
-2019-03-12 12:23:21.577,tf.gradient acting like tfp.math.diag_jacobian,"I try to calculate noise for input data using the gradient of the loss function with respect to the input data:
-my_grad = tf.gradients(loss, input)
-loss is an array of size (n x 1), where n is the number of datasets; input is an array of size (n x m), where m is the size of a single dataset.
-I need my_grad to be of size (n x m), so that for each dataset the gradient is calculated. But by definition the gradients where i != j are zero - yet tf.gradients allocates a huge amount of memory and runs pretty much forever...
-A version which calculates the gradients only where i = j would be great - any idea how to get there?","I suppose I have found a solution:
-my_grad = tf.gradients(tf.reduce_sum(loss), input)
-ensures that the cross dependencies i != j are ignored - that works really nicely and fast.",0.0,False,1,5991
-2019-03-12 14:50:25.703,Lost my python.exe in Pycharm with Anaconda3,"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in PyCharm.
-It was located in C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere!
-Yet all the packages are here (in the site-packages folder), and only C:\users\my_name\Anaconda3\pythonw.exe is available.
-With the latter, however, some packages I installed on top of those available in Anaconda3 won't be recognized.
-Therefore, how do I get back the python.exe file?","The answer repeats the comment to the question.
-I had the same issue once after an Anaconda update - python.exe was missing. It was Anaconda 3 installed to the Program Files folder by MS Visual Studio (Python 3.6 on Windows 10 x64).
-To solve the problem, I manually copied the python.exe file from the most recent Python package available (the pkgs folder, then a folder like python-3.6.8-h9f7ef89_7).",1.2,True,3,5992
-2019-03-12 14:50:25.703,Lost my python.exe in Pycharm with Anaconda3,"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in PyCharm.
-It was located in C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere!
-Yet all the packages are here (in the site-packages folder), and only C:\users\my_name\Anaconda3\pythonw.exe is available.
-With the latter, however, some packages I installed on top of those available in Anaconda3 won't be recognized.
-Therefore, how do I get back the python.exe file?","My python.exe was missing today in my existing environment in Anaconda, so I cloned my environment with Anaconda to recreate python.exe and use it again in Spyder.",0.0,False,3,5992
-2019-03-12 14:50:25.703,Lost my python.exe in Pycharm with Anaconda3,"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in PyCharm.
-It was located in C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere!
-Yet all the packages are here (in the site-packages folder), and only C:\users\my_name\Anaconda3\pythonw.exe is available.
-With the latter, however, some packages I installed on top of those available in Anaconda3 won't be recognized.
-Therefore, how to get back the python.exe file?","I just had the same issue and found out that Avast removed it because it thought it was a threat. I found it in Avast -> Protection -> Virus Chest. And from there, you have the option to restore it.",0.3869120172231254,False,3,5992 -2019-03-12 18:13:12.880,trouble with appending scores in python,"the code is supposed to give 3 questions with 2 attempts. if the answer is correct the first try, 3 points. second try gives 1 point. if second try is incorrect, the game will end. -however, the scores are not adding up to create a final score after the 3 rounds. how do i make it so that it does that?",First move import random to the top of the script because you're importing it every time in the loop and the score is calculated just in the last spin of the program since you empty scoreList[] every time,0.6730655149877884,False,1,5993 -2019-03-13 05:05:14.420,Accessing Luigi visualizer on AWS,"I’ve been using the Luigi visualizer for pipelining my python code. -Now I’ve started using an aws instance, and want to access the visualizer from my own machine. -Any ideas on how I could do that?","We had the very same problem today on GCP, and solved with the following steps: - -setting firewall rules for incoming TCP connections on port used by the service (which by default is 8082); -installing apache2 server on the instance with a site.conf configuration that resolve incoming requests on ip-of-instance:8082. - -That's it. Hope this can help.",0.2012947653214861,False,1,5994 -2019-03-13 09:24:24.310,"Async, multithreaded scraping in Python with limited threads","We have to refactor scraping algorithm. To speed it up we came up to conclusion to multi-thread processes (and limit them to max 3). Generally speaking scraping consists of following aspects: - -Scraping (async request, takes approx 2 sec) -Image processing (async per image, approx 500ms per image) -Changing source item in DB (async request, approx 2 sec) - -What I am aiming to do is to create batch of scraping requests and while looping through them, create a stack of consequent async operations: Process images and as soon as images are processed -> change source item. -In other words - scraping goes. but image processing and changing source items must be run in separate limited async threads. -Only think I don't know how to stack the batch and limit threads. -Has anyone came across the same task and what approach have you used?","What you're looking for is consumer-producer pattern. Just create 3 different queues and when you process the item in one of them, queue new work in another. Then you can 3 different threads each of them processing one queue.",1.2,True,1,5995 -2019-03-13 20:16:42.690,Pymongo inserts _id in original array after insert_many .how to avoid insertion of _id?,"Pymongo inserts _id in original array after insert_many .how to avoid insertion of _id ? And why original array is updated with _id? Please explain with example, if anybody knows? Thanks in advance.",Pymongo driver explicitly inserts _id of type ObjectId into the original array and hence original array gets updated before inserting into mongo. This is the expected behaviour of pymongo for insertmany query as per my previous experiences. 
Hope this answers your question.,1.2,True,1,5996 -2019-03-13 21:29:05.987,how can i prevent the user from closing my cmd window in a python script on windows,"Is there any way to prevent the user from closing the cmd window of a Python script on Windows, or maybe just disable the (X) close button? I have looked for answers already, but I couldn't find anything that would help me.","I don't think it's possible. What you can do instead is not display the cmd window at all (run the script as a background worker) and make it a hidden process with system rights, so that it can't be shut down until it finishes.",0.0,False,1,5997 -2019-03-14 00:37:45.023,regex python multiline,"How can I search for patterns in texts that cover multiple lines and have fixed positions relative to each other - for example, a pattern consisting of 3 letters x directly below each other, which I want to find at any position in the line, not just at the beginning. -Thank you in advance for the answer!","I believe the problem you are asking about is ""Find patterns that appear at the same offset in a series of lines."" -I do not think this describes a regular language, so you would need to draw on Python's extended regex features to have a chance at a regex-based solution. But I do not believe Python supports sufficiently extended features to accomplish this task [1]. -If it is acceptable that they occur at a particular offset (rather than ""any offset, so long as the offset is consistent""), then something like this should work: -/^.{OFFSET}PATTERN.*\n^.{OFFSET}PATTERN.*\n^.{OFFSET}PATTERN/, using the MULTILINE flag so that ^ matches the beginning of a series of lines instead of just the beginning of the entire text. -[1] In particular, you could use a backreference to capture the text preceding the desired pattern on one line, but I do not think you can query the length of the captured content ""inline"". You could search for the same leading text again on the next line, but that does not sound like what you want.",0.0,False,1,5998 -2019-03-14 15:52:38.597,Faster pytorch dataset file,"I have the following problem: I have many files of 3D volumes that I open to extract a bunch of numpy arrays. -I want to get those arrays randomly, i.e. in the worst case I open as many 3D volumes as numpy arrays I want to get, if all those arrays are in separate files. -The IO here isn't great; I open a big file only to get a small numpy array from it. -Any idea how I can store all these arrays so that the IO is better? -I can't pre-read all the arrays and save them all in one file, because then that file would be too big to open in RAM. -I looked up LMDB, but it all seems to be about Caffe. -Any idea how I can achieve this?","One trivial solution can be pre-processing your dataset and saving multiple smaller crops of the original 3D volumes separately. This way you sacrifice some disk space for more efficient IO. -Note that you can make a trade-off with the crop size here: saving bigger crops than you need for input allows you to still do random crop augmentation on the fly. If you save overlapping crops in the pre-processing step, then you can ensure that still all possible random crops of the original dataset can be produced. -Alternatively you may try using a custom data loader that retains the full volumes for a few batches. Be careful, this might create some correlation between batches. Since many machine learning algorithms rely on i.i.d samples (e.g.
Stochastic Gradient Descent), correlated batches can easily cause some serious mess.",0.0,False,1,5999 -2019-03-14 19:33:03.197,How does multiplexing in Django sockets work?,"I am new to this part of web development and was trying to figure out a way of creating a web app with basic specifications like the example below: - -User1 opens a page with a textbox (something where he can add text or so), and it will be modified as he decides. - -If user1 has problems, he can invite another user2 to help with the typing. - - -User2 (when logged in to the Channel/Socket) will be able to modify that field, and the modifications made will be shown to user1 in real time, and vice versa. - - -Or another example is a room on CodeAcademy: - -Imagine that I am learning a new coding language; however, in the middle of it I get stuck and have to ask for help. - -So I go ahead and ask another user for help. This user accesses the page through a WebSocket (or something related to that). - - -The user helps me by changing my code and adding some comments to it in real time, and I will also be able to ask questions through it (real-time communication). - - -My question is: will I be able to develop such an app using Django Channels 2 and multiplexing? Or is it better to move to NodeJS or something related to that? -Obs: I do have more experience working with python/django, so it would be more productive for me right now if I could find a way to work with this combo.","This is definitely possible. There are lots of possibilities, but I would recommend the following (a minimal consumer sketch follows below). - -Have a page with the code on it. The page has some websocket JS code that can connect to a Channels Consumer. -The JS does 2 simple things. When the code on the screen is updated, send a message to the Consumer with the new text (you can optimize this later). When the socket receives a message, replace the code on screen with the new code. -In your consumer, add your consumer to a channel group when connecting (the group will contain all of the consumers that are accessing the page). -When a message is received, use group_send to send it to all the other consumers. -When your consumer callback function gets called, send a message to your websocket.",0.3869120172231254,False,1,6000 -2019-03-14 20:28:27.727,Operating system does not meet the minimum requirements of the language server,"I installed Python 3.7.2 and VSCode 1.32.1 on Mac OS X 10.10.2. In VSCode I installed the Python extension and got a message saying: -""Operating system does not meet the minimum requirements of the language server. Reverting to alternative, Jedi"". -When clicking the ""More"" option under the message, I got information indicating that I need OS X 10.12 at least. -I tried to install an older version of the extension, did some reading here and asked Google, but I'm having a hard time since I don't really know what vocabulary to use. -My questions are: -Will the extension work despite the error message? -Do I need to solve this, and how do I do that?","The extension will work without the language server, but some things won't work quite as well (e.g. auto-complete and some refactoring options). Basically, if you remove the ""python.jediEnabled"" setting -- or set it to false -- and the extension works fine for you, then that's the important thing. :)",1.2,True,1,6001 -2019-03-17 20:51:04.933,What is the preferred way to add a citation suggestion to python packages?,"How should developers indicate how users should cite the package, other than on the documentation?
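Returning to the Django Channels answer above, a minimal consumer sketch of the group_add/group_send flow it describes (the group name and event type are invented):

    from channels.generic.websocket import AsyncWebsocketConsumer

    class CodeConsumer(AsyncWebsocketConsumer):
        async def connect(self):
            # every consumer on this page joins the same (hypothetical) group
            await self.channel_layer.group_add("code_room", self.channel_name)
            await self.accept()

        async def receive(self, text_data):
            # fan the new text out to all consumers in the group
            await self.channel_layer.group_send(
                "code_room", {"type": "code.update", "text": text_data})

        async def code_update(self, event):
            # handler for "code.update" events; push the text down the socket
            await self.send(text_data=event["text"])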
-R packages return the preferred citation using citation(""pkg""). -I can think of pkg.CITATION, pkg.citation and pkg.__citation__. Are there others? If there is no preferred way (which seems to be the case to me, as I did not find anything on python.org), what are the pros and cons of each?","Finally I opted for the dunder option. Only the dunder option (__citation__) makes it clear that this is not a normal variable needed at runtime. -Yes, dunder names should not be used inflationarily, because Python might use them at a later time. But if Python is going to use __citation__, then it will be for a similar purpose. Also, I deem the relative costs higher with the other options.",1.2,True,1,6002 -2019-03-18 14:05:53.610,How to see the full previous command in Pycharm Python console using a shortcut,"I was wondering how I could see the history in the Pycharm Python console using a shortcut. I can see the history using the up arrow key, but if I want to go further back in history I have to go through each individual line if more lines were run at the time. Is it possible that each time I press a button the full previous commands that were run are shown? -I don't want to search in history; I want to go back in history similar to using the arrow up key, but each time I press arrow up I want to see the previous full code that was run.","Go to Preferences -> Appearance & Behavior -> Keymap. You can search for ""Browse Console History"" and add a keyboard shortcut with right click -> Add Keyboard Shortcut.",0.0,False,1,6003 -2019-03-18 17:28:51.877,Python how to make a set of rules for each class in a game,"in C# we have get/set to make rules, but I don't know how to do this in Python. -Example: -Orcs can only equip Axes, other weapons are not eligible. -Humans can only equip swords, other weapons are not eligible. -How can I tell Python that an Orc cannot do something like in the example above? -Thanks for answers in advance; hope this made any sense to you guys.","The Python language doesn't have an effective mechanism for restricting access to an instance or method. There is a convention, though, to prefix the name of a field/method with an underscore to simulate ""protected"" or ""private"" behavior. -But, all members in a Python class are public by default.",0.0,False,1,6004 -2019-03-18 18:58:53.487,"Regex to get key words, all digits and periods","My input text looks like this: - -Put in 3 extenders but by the 4th floor it is weak on signal these don't piggy back of each other. ST -99. 5G DL 624.26 UP 168.20 4g DL 2 - Up .44 - -I am having difficulty writing a regex that will match any instances of 4G/5G/4g/5g and give me all the corresponding measurements after the instances of these codes, which are numbers with decimals. -The output should be: - -5G 624.26 168.20 4g 2 .44 - -Any thoughts how to achieve this? I am trying to do this analysis in Python.","I would separate it into different capture groups like this: -(?i)(?P<g1>5?4?G)\sDL\s(?P<g2>[^\s]*)\sUP\s(?P<g3>[^\s]*) -(?i) makes the whole regex case insensitive. -(?P<g1>5?4?G) is the first group, matching either 4g, 5g, 4G or 5G. -(?P<g2>[^\s]*) and (?P<g3>[^\s]*) are the second and third groups, matching everything that is not a space.
-Then in Python you can do: -match = re.match('(?i)(?P<g1>5?4?G)\sDL\s(?P<g2>[^\s]*)\sUP\s(?P<g3>[^\s]*)', input) -And access each group like so: -match.group('g1') etc.",0.1352210990936997,False,1,6005 -2019-03-19 03:35:02.983,"In Zapier, how do I get the inputs to my Python ""Run Code"" action to be passed in as lists and not joined strings?","In Zapier, I have a ""Run Python"" action triggered by a ""Twitter"" event. One of the fields passed to me by the Twitter event is called ""Entities URLs Display URL"". It's the list of anchor texts of all of the links in the tweet being processed. -Zapier is passing this value into my Python code as a single comma-separated string. I know I can use .split(',') to get a list, but this results in ambiguity if the original strings contained commas. -Is there some way to get Zapier to pass this sequence of strings into my code as a sequence of strings rather than as a single joined-together string?","David here, from the Zapier Platform team. -At this time, all inputs to a code step are coerced into strings due to the way data is passed between zap steps. This is a great request though, and I'll make a note of it internally.",0.6730655149877884,False,1,6006 -2019-03-19 07:09:31.563,"Where is the tesseract executable file located on MacOS, and how to define it in Python?","I have written some code using pytesseract, and whenever I run it, I get this error: -TesseractNotFoundError: tesseract is not installed or it's not in your path -I have installed tesseract using Homebrew and have also pip installed it.","If installed with Homebrew, it will be located in /usr/local/bin/tesseract by default. To verify this, run which tesseract in the terminal, as Dmitrrii Z. mentioned. -If it's there, you can set it up in your python environment by adding the following line to your python script, after importing the library: -pytesseract.pytesseract.tesseract_cmd = r'/usr/local/bin/tesseract'",0.6730655149877884,False,1,6007 -2019-03-19 09:10:22.167,Call function from file that has already imported the current file,"If I have the files frame.py and bindings.py, both with the classes Frame and Bindings respectively inside of them, I import the bindings.py file into frame.py by using from bindings import Bindings, but how do I go about importing the frame.py file into my bindings.py file? If I use import frame or from frame import Frame I get the error ImportError: cannot import name 'Bindings' from 'bindings'. Is there any way around this without restructuring my code?",Instead of using from bindings import Bindings try import bindings.,0.0,False,1,6008 -2019-03-20 10:03:02.943,How to only enter a date that is a weekday in Python,"I'm creating a web application in Python and I only want the user to be able to enter a weekday that is later than today's date. I've had a look at isoweekday(), for example, but don't know how to integrate it into a flask form. The form currently looks like this: -appointment_date = DateField('Appointment Date', format='%Y-%m-%d', validators=[DataRequired()]) -Thanks","If you just want a weekday, you should put a select or a textbox, not a date picker. -If you put a select, you can disable the days before today so you don't even need validation.",0.0,False,1,6009 -2019-03-20 23:43:33.130,Speed up access to python programs from Golang's exec package,"I need suggestions on how to speed up access to python programs when called from Golang. I really need fast access time (very low latency). -Approach 1: -func main() { -... -...
cmd = exec.Command(""python"", ""test.py"") - o, err = cmd.CombinedOutput() -... -} -If my test.py file is a basic print ""HelloWorld"" program, the execution time is over 50ms. I assume most of the time is for loading the shell and python into memory. -Approach 2: -The above approach can be sped up substantially by having python start an HTTP server and then having the Go code POST an HTTP request and get the response from the HTTP server (python). This speeds up response times to less than 5ms. -I guess the main reason for this is probably that the python interpreter is already loaded and warm in memory. -Are there other approaches I can use, similar to approach 2 (shared memory, etc.), which could speed up the response from my python code? Our application requires extremely low latency, and the 50 ms I am currently seeing from using Golang's exec package is not going to cut it. -thanks,","Approach 1: Simple HTTP server and client -Approach 2: Local socket or pipe -Approach 3: Shared memory -Approach 4: GRPC server and client -In fact, I prefer the GRPC method in streaming mode; it will hold the connection (because of HTTP/2), and it's easy, fast and secure. And it's easy to move the python node to another machine.",0.0,False,1,6010 -2019-03-21 20:01:04.153,Python: Iterate through every pixel in an image for image recognition,"I'm a newbie in image processing and python in general. For an image recognition project, I want to compare every pixel with one another. For that, I need to create a program that iterates through every pixel, takes its value (for example ""[28, 78, 72]"") and creates some kind of values through comparing it to every other pixel. I did manage to access one single number in an array element/pixel (output: 28) through a bunch of for loops, but I just couldn't figure out how to access every number in every pixel, in every row. Does anyone know a good algorithm to solve my problem? I use OpenCV for reading in the image, by the way.","Comparing every pixel with a ""pattern"" can be done with convolution. You should take a look at the Haar cascade algorithm.",0.0,False,1,6011 -2019-03-21 20:38:04.357,numpy.savetxt() rounding values,"I'm using numpy.savetxt() to save an array, but it's rounding my values to the first decimal point, which is a problem. Does anyone have any clue how to change this?","You can set the precision by changing the fmt parameter. For example np.savetxt('tmp.txt', a, fmt='%1.3f') would leave you with output with a precision of three decimal places.",0.3869120172231254,False,1,6012 -2019-03-22 03:06:43.583,Training SVM in Python with pictures,"I have basic knowledge of SVM, but now I am working with images. I have images in 5 folders; each folder, for example, has images for the letters a, b, c, d, e. The folder 'a' has images of handwritten letters for 'a', folder 'b' has images of handwritten letters for 'b', and so on. -Now how can I use the images as my training data for an SVM in Python?","As far as I understood, you want to train your SVM to classify these images into the classes named a, b, c, d.
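For the pixel-iteration question above, a small OpenCV/numpy sketch (the file name is hypothetical); explicit loops work, but a vectorized comparison over the whole array is much faster:

    import cv2
    import numpy as np

    img = cv2.imread("input.png")          # hypothetical file; shape (rows, cols, 3), BGR
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            b, g, r = img[y, x]            # the three channel values of one pixel
    # vectorized alternative: distance of every pixel to one reference pixel at once
    diff = np.linalg.norm(img.astype(np.float32) - img[0, 0].astype(np.float32), axis=2)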
For that, you can use any of the good image processing techniques to extract features from your images (such as HOG, which is nicely implemented in OpenCV), and then use these features together with the label as the input to your SVM training (the corresponding label for those would be the name of the folder, i.e. a, b, c, d). You can train your SVM using the features only, and at inference time you can simply calculate the HOG feature of the image, feed it to your SVM, and it will give you the desired output.",0.0,False,1,6013 -2019-03-22 12:32:50.940,How to execute script from container within another container?,"I have a containerized flask app with an external db that logs users in on another site using selenium. Everything works perfectly on localhost. I want to deploy this app using containers, and found that a selenium container with google chrome inside could do the job. And my question is: how do I execute scripts/methods from the flask container in the selenium container? I tried to find some helpful info, but I didn't find anything. -Should I make an API call from the selenium container to the flask container? Is that the way, or maybe something different?","As far as I understood, you are trying to take your local implementation, which runs on your pc, and put it into two different docker containers. Then you want to make a call from the selenium container to your container containing the flask script which connects to your database. -In this case, you can think of your containers like two different computers. You can tell docker to create an internal network between these two containers and send the request via an API call, like you suggested. But you are not limited to this approach; you can use any technique that works for two computers to exchange commands.",1.2,True,1,6014 -2019-03-22 21:15:34.407,Visual Studio doesn't work with Anaconda environment,"I downloaded the VS2019 preview to try how it works with Python. -I use Anaconda, and VS2019 sees the Anaconda virtual environment; the terminal opens and works, but when I try to run 'import numpy', for example, I receive this: - -An internal error has occurred in the Interactive window. Please - restart Visual Studio. Intel MKL FATAL ERROR: Cannot load - mkl_intel_thread.dll. The interactive Python process has exited. - -Does anyone know how to fix it?","I had the same issue; this worked for me: -Try to add conda-env-root/Library/bin to the path in the run environment.",0.0,False,1,6015 -2019-03-24 17:23:41.657,Automatically filled field in model,"I have a model with a date field and a CharField with the choices New or Done, and I want to show some message for this model's objects in my API views if 2 conditions are met: the date is past and the status is NEW. But I really don't know how I should resolve this. -I was thinking that maybe there is an option to make some field in the model that has choices and to set the suitable choice if the conditions are fulfilled, but I didn't find any information on whether something like this is possible, so maybe someone has an idea how to resolve this?","You need to override the save method of your model. The overridden method must check the condition and set the message (a short sketch follows below). -Alternatively, you may set a signal receiver on the post_save signal that does the same.",0.0,False,1,6016 -2019-03-25 03:15:40.920,how to drop multiple (~5000) columns in the pandas dataframe?,"I have a dataframe with 5632 columns, and I only want to keep 500 of them. I have the column names (that I want to keep) in a dataframe as well, with the names as the row index.
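A sketch of the save() override suggested in the model-field answer above (the model, field and message values are made up):

    from django.db import models
    from django.utils import timezone

    class Item(models.Model):                       # hypothetical model
        date = models.DateField()
        status = models.CharField(max_length=4,
                                  choices=[("NEW", "New"), ("DONE", "Done")])
        message = models.CharField(max_length=64, blank=True)

        def save(self, *args, **kwargs):
            # the condition from the question: date is past and status is still NEW
            if self.status == "NEW" and self.date < timezone.now().date():
                self.message = "Overdue"
            super().save(*args, **kwargs)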
Is there any way to do this?","Let us assume your DataFrame is named df and you have a list cols of the column indices you want to retain. Then you should use: -df1 = df.iloc[:, cols] -This statement will drop all the columns other than the ones whose indices have been specified in cols. Use df1 as your new DataFrame (a name-based variant is sketched further down).",0.0,False,1,6017 -2019-03-26 17:26:01.377,How to configure PuLP to call GLPK solver,"I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK? -I have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver, for that matter) so that it finds my GLPK installation. I have already installed GLPK separately (but I didn't add it to my PATH environment variable). -I ran the command pulp.pulpTestAll() -and got: -Solver unavailable -I know that I should be getting a ""passed"" instead of an ""unavailable"" to be able to use it.","After reading the code in more detail and testing out some things, I finally found out how to use GLPK with PuLP, without changing anything in the PuLP package itself. -You need to pass the path as an argument to GLPK_CMD in solve as follows (replace it with your glpsol path): -lp_prob.solve(GLPK_CMD(path = 'C:\\Users\\username\\glpk-4.65\\w64\\glpsol.exe')) -You can also pass options that way, e.g. -lp_prob.solve(GLPK_CMD(path = 'C:\\Users\\username\\glpk-4.65\\w64\\glpsol.exe', options = [""--mipgap"", ""0.01"", ""--tmlim"", ""1000""]))",1.2,True,2,6018 -2019-03-26 17:26:01.377,How to configure PuLP to call GLPK solver,"I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK? -I have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver, for that matter) so that it finds my GLPK installation. I have already installed GLPK separately (but I didn't add it to my PATH environment variable). -I ran the command pulp.pulpTestAll() -and got: -Solver unavailable -I know that I should be getting a ""passed"" instead of an ""unavailable"" to be able to use it.","I had the same problem, but it is not related to the glpk installation; it is with the solution file creation, and the message is confusing. My problem was that I used numeric names for my variables, such as '0238' or '1342'. I added an 'x' before them; then they looked like 'x0238'.",0.2012947653214861,False,2,6018 -2019-03-26 23:03:52.333,Tower of colored cubes,"Consider a set of n cubes with colored facets (each one with a specific color out of 4 possible ones - red, blue, green and yellow). Form the highest possible tower of k cubes ( k ≤ n ) properly rotated (12 positions of a cube), so that the lateral faces of the tower will have the same color, using an evolutionary algorithm. -What I did so far: -I thought that the following representation would be suitable: an Individual could be an array of n integers, each number having a value between 1 and 12, indicating the current position of the cube (an input file contains n lines, each line shows information about the color of each face of the cube).
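A small addendum to the column-keeping answer above: since that question holds column names rather than integer positions, label-based selection may fit better (names_df is a hypothetical frame holding the names as its row index):

    keep = list(names_df.index)   # the ~500 column names stored as the other frame's index
    df1 = df.loc[:, keep]         # .loc selects by label; .iloc expects integer positions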
-Then, the Population consists of multiple Individuals. -The Crossover method should create a new child (Individual) containing information from its parents (approximately half from each parent). -Now, my biggest issue is related to the Mutate and Fitness methods. -In the Mutate method, if the probability of mutation (say 0.01) is met, I should change the position of a random cube to another random position (for example, the third cube can have its position (rotation) changed from 5 to 12). -In the Fitness method, I thought that I could compare, two by two, the cubes from an Individual, to see if they have common faces. If they have a common face, a ""count"" variable will be incremented with the number of common faces, and if all 4 lateral faces are the same for these 2 cubes, the count will increase by another number of points. After comparing all the adjacent cubes, the count variable is returned. Our goal is to obtain as many adjacent cubes having the same lateral faces as we can, i.e. to maximize the Fitness method. -My question is the following: -How can a rotation be implemented? I mean, if a cube changes its position (rotation) from 3 to 10, how do we know the new arrangement of the faces? Or, if I perform a mutation on a cube, what is the process of rotating this cube if a random rotation number is selected? -I think that I should create a vector of 6 elements (the colors of each face) for each cube, but when the rotation value of a cube is modified, I don't know in what manner the elements of its vector of faces should be rearranged. -Shuffling them is not correct, because by doing this, two opposite faces could become adjacent, meaning that the vector doesn't represent that particular cube anymore (obviously, two opposite faces cannot be adjacent).","First, I'm not sure how you get 12 rotations; I get 24: 4 orientations with each of the 6 faces on the bottom. Use a standard D6 (6-sided die) and see how many different layouts you get. -Apparently, the first thing you need to build is something (a class?) that accurately represents a cube in any of the available orientations. I suggest that you use a simple structure that can return the four faces in order -- say, front-right-back-left -- given a cube and the rotation number (a tiny sketch of this follows below). -I think you can effectively represent a cube as three pairs of opposing sides. Once you've represented that opposition, the remaining organization is arbitrary numbering: any valid choice is isomorphic to any other. Each rotation will produce an interleaved sequence of two opposing pairs. For instance, a standard D6 has opposing pairs [(1, 6), (2, 5), (3, 4)]. The first 8 rotations would put 1 and 6 on the hidden faces (top and bottom), giving you the sequence 2354 in each of its 4 rotations and their reverses. -That class is one large subsystem of your problem; the other, the genetic algorithm, you seem to have well in hand. Stack all of your cubes randomly; ""fitness"" is a count of the most prevalent 4-show (sequence of 4 sides) in the stack. At the start, this will generally be 1, as nothing will match. -From there, you seem to have an appropriate handle on mutation. You might give a higher chance of mutating a non-matching cube, or perhaps see if some cube is a half-match: two opposite faces match the ""best fit"" 4-show, so you merely rotate it along that axis, preserving those two faces, and swapping the other pair for the top-bottom pair (note: two directions to do that).
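A tiny sketch of the opposing-pairs representation proposed above (pair values follow the D6 example; only the four plain rotations of one hidden axis are shown):

    def four_show(pairs, hidden, rotation):
        # pairs: three opposing pairs, e.g. [(1, 6), (2, 5), (3, 4)]
        # hidden: which pair is on top/bottom (0-2); rotation: 0-3
        a, b = [p for i, p in enumerate(pairs) if i != hidden]
        ring = [a[0], b[0], a[1], b[1]]          # front-right-back-left interleaving
        return ring[rotation:] + ring[:rotation]

    print(four_show([(1, 6), (2, 5), (3, 4)], 0, 0))   # [2, 3, 5, 4] -> the "2354" layout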
-Does that get you moving?",0.0,False,1,6019 -2019-03-27 20:20:56.763,Airflow: How to download file from Linux to Windows via smbclient,"I have a DAG that imports data from a source to a server. From there, I am looking to download that file from the server to the Windows network. I would like to keep this part in Airflow for automation purposes. Does anyone know how to do this in Airflow? I am not sure whether to use the os package, the shutil package, or maybe there is a different approach.","I think you're saying you're looking for a way to get files from a cloud server to a windows shared drive or onto a computer in the windows network. These are some options I've seen used (a BashOperator sketch for the SCP option follows after this list): - -Use a service like google drive, dropbox, box, or s3 to simulate a synced folder on the cloud machine and a machine in the windows network. -Call a bash command to SCP the files to the windows server or a worker in the network. This could work in the opposite direction too. -Add the files to a git repository and have a worker in the windows network sync the repository to a shared location. This option is only good in very specific cases. It has the benefit that you can track changes and restore old states (if the data is in CSV or another text format), but it's not great for large files or binary files. -Use rsync to transfer the files to a worker in the windows network which has the shared location mounted, and move the files to the synced dir with python or bash. -Mount the network drive to the server and use python or bash to move the files there. - -All of these should be possible with Airflow, either by using python (shutil) or a bash script to transfer the files to the right directory for some other process to pick up, or by calling a bash sub-process to perform the direct transfer by SCP or commit the data via git. You will have to find out what's possible with your firewall and network settings. Some of these would require coordinating tasks on the windows side (the git option, for example, would require some kind of cron job or task scheduler to pull the repository to keep the files up to date).",0.0,False,1,6020 -2019-03-29 18:04:09.080,Python GTK+ 3: Is it possible to make background window invisible?,"Basically, I have this window with a bunch of buttons, but I want the background of the window to be invisible/transparent so the buttons are essentially floating. However, GTK seems to be pretty limited with CSS and I haven't found a way to do it yet. I've tried making the main window opacity 0, but that doesn't seem to work. Is this even possible, and if so how can I do it? Thanks. -Edit: Also, I'm using X11 forwarding.",For transparency Xorg requires a composite manager running on the X11 server. The compmgr program from Xorg is a minimal composite manager.,0.0,False,1,6021 -2019-03-30 18:02:10.470,Matplotlib with Pydroid 3 on Android: how to see graph?,"I'm currently using an Android device (from Samsung) with Pydroid 3. -I tried to view some graphs, but it doesn't work. -When I run the code, it just shows me a black, blank screen temporarily and then goes back to the source code editing window. -(This means that I can't even see the terminal screen, which always showed me [Program Finished].) -Well, even the basic sample code which Pydroid gives me doesn't show me the graph :( -I've seen many tutorials which successfully showed graphs, but well, mine can't do those things. -Unfortunately, I cannot grab any errors. -I am using the same code that worked on Windows, so I don't think the code has a problem.
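Returning to the Airflow answer above, the SCP option from that list could look roughly like this as a task (hosts, paths and schedule are made up; the import path assumes Airflow 1.x):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator

    dag = DAG("copy_export", start_date=datetime(2019, 4, 1), schedule_interval=None)

    copy_to_share = BashOperator(
        task_id="copy_to_windows_share",
        bash_command="scp /data/export.csv user@winhost:/shared/drop/",  # invented paths
        dag=dag,
    )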
-Of course, matplotlib is installed, and numpy is also installed. -If there are any possible problems, please let me know.","I also had this problem a while back, and managed to fix it by using plt.show() -at the end of your code, with matplotlib.pyplot imported as plt (a complete minimal example follows below).",0.1016881243684853,False,3,6022 -2019-03-30 18:02:10.470,Matplotlib with Pydroid 3 on Android: how to see graph?,"I'm currently using an Android device (from Samsung) with Pydroid 3. -I tried to view some graphs, but it doesn't work. -When I run the code, it just shows me a black, blank screen temporarily and then goes back to the source code editing window. -(This means that I can't even see the terminal screen, which always showed me [Program Finished].) -Well, even the basic sample code which Pydroid gives me doesn't show me the graph :( -I've seen many tutorials which successfully showed graphs, but well, mine can't do those things. -Unfortunately, I cannot grab any errors. -I am using the same code that worked on Windows, so I don't think the code has a problem. -Of course, matplotlib is installed, and numpy is also installed. -If there are any possible problems, please let me know.","After reinstalling, it worked. -The problem was that I had forced Pydroid to update matplotlib via the Terminal, not the official PIP tab. -The version of matplotlib was too high for Pydroid.",1.2,True,3,6022 -2019-03-30 18:02:10.470,Matplotlib with Pydroid 3 on Android: how to see graph?,"I'm currently using an Android device (from Samsung) with Pydroid 3. -I tried to view some graphs, but it doesn't work. -When I run the code, it just shows me a black, blank screen temporarily and then goes back to the source code editing window. -(This means that I can't even see the terminal screen, which always showed me [Program Finished].) -Well, even the basic sample code which Pydroid gives me doesn't show me the graph :( -I've seen many tutorials which successfully showed graphs, but well, mine can't do those things. -Unfortunately, I cannot grab any errors. -I am using the same code that worked on Windows, so I don't think the code has a problem. -Of course, matplotlib is installed, and numpy is also installed. -If there are any possible problems, please let me know.","You just need to add the line -plt.show() -Then it will work. You can also save the figure before showing it: -plt.savefig(""*imageName*.png"")",0.0,False,3,6022 -2019-03-31 02:36:13.693,"Accidentally used homebrew to change my default python to 3.7, how do I change it back to 2.7?","I was trying to install python 3 because I wanted to work on a project using python 3. The instructions I'd found were not working, so I boldly ran brew install python. Wrong move. Now when I run python -V I get ""Python 3.7.3"", and when I try to enter a virtualenv I get -bash: /Users/elliot/Library/Python/2.7/bin/virtualenv: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory -My ~/.bash_profile reads -export PATH=""/Users/elliot/Library/Python/2.7/bin:/usr/local/opt/python/libexec/bin:/Library/PostgreSQL/10/bin:$PATH"" -but ls /usr/local/Cellar/python/ gets me 3.7.3, so it seems like brew doesn't even know about my old 2.7 version anymore. -I think what I want is to reset my system python to 2.7 and then add python 3 as a separate python running on my system. I've been googling, but haven't found any advice on how to specifically use brew to do this. -Edit: I'd also be happy with keeping Python 3.7, if I knew how to make virtualenv work again.
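Putting the plt.show() advice from the Pydroid answers above into one complete, minimal test:

    import matplotlib.pyplot as plt

    plt.plot([1, 2, 3], [1, 4, 9])
    plt.savefig("test.png")  # optional: save the figure before showing it
    plt.show()               # without this call, nothing is displayed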
I remember hearing that upgrading your system python breaks everything, but I'd be super happy to know if that's outdated knowledge and I'm just being a luddite hanging on to 2.7.","So, I got through it by completely uninstalling Python, which I'd been reluctant to do, and then reinstalled Python 2. I had to update my path and open a new shell to get it to see the new python 2 installation, and things fell into place. I'm now using pyenv for my Python 3 project, and it's a dream.",0.0,False,1,6023 -2019-03-31 06:41:35.623,How does one transfer python code written in a windows laptop to a samsung android phone?,"I created numerous python scripts on my pc laptop, and I want to run those scripts on my android phone. How can I do that? How can I move python scripts from my windows pc laptop, and use those python scripts on my samsung adroid phone? -I have downloaded qpython from the google playstore, but I still don't know how to get my pc python programs onto my phone. I heard some people talk about ""ftp"" but I don't even know what that means. -Thanks","you can use TeamViewer to control your android phone from your PC. And copy and paste the code easily. +the py version is 3.6 +thx everyone, im stupid xd","Time is a module that comes built-in with python so no need to install anything, just import it : +import time",0.1352210990936997,False,1,6853 +2020-06-21 15:07:21.757,how can i use an chrome extension in my selenium python program?,"im just trying to use an vpn extension with selenium. I have the extension running , but i need to click in the button and enable the vpn so it can works, there's a way to do that with selenium? im thinking to use another similar option like scrapy or pyautogui...","No there is no way to enable the VPN on your extension +If you want to use your VPN extension you have to set a profile (otherwise selenium will create a new profile without installed extension)",1.2,True,1,6854 +2020-06-21 15:10:15.180,I have completely messed up my Python Env and need help to start fresh,"Long story short, I messed with my Python environment too much (moving files around, creating new folders, trying to reinstall packages, deleting files etc.) My google package doesn't work anymore. Everytime I try to import the package, it says it can't find the module, even though I did a pip install. +I was wondering how I could do a hard reset/delete python off my computer and reinstall it. +Thanks.","I figured it out. My pip was installing to a site packages folder inside a local folder, while my jupyter notebook was trying to pull from the anaconda site packages folder.",1.2,True,1,6855 +2020-06-22 19:54:41.410,Gettinng back cells after being deleted in Colab,"I often delete code in Colab, by accident, and for some reason when I try to do undo code deletion it does not work. So basically when I do this I want to get my cells back somehow. Is there any way to do this, like take a look at the code that Colab is running, because my cells are probably still there. Another option would be to somehow see cells that have been previously deleted. Please help me. Any other solutions would be nice.",You can undo deleting cell in google colab simply by typing ctrl + M Z,0.2012947653214861,False,1,6856 +2020-06-22 21:33:39.547,"Replace string with quotes, brackets, braces, and slashes in python","I have a string where I am trying to replace [""{\"" with [{"" and all \"" with "". +I am struggling to find the right syntax in order to do this, does anyone have a solid understanding of how to do this? 
+I am working with JSON, and I am inserting a string into the JSON properties. This caused it to put a single quotes around my inserted data from my variable, and I need those single quotes gone. I tried to do json.dumps() on the data and do a string replace, but it does not work. +Any help is appreciated. Thank you.","if its two characters you want to replace then you have to first check for first character and then the second(which should be present just after the first one and so on) and shift(shorten the whole array by 3 elements in first case whenever the condition is satisfied and in the second case delete \ from the array. +You can also find particular substring by using inbuilt function and replace it by using replace() function to insert the string you want in its place",0.0,False,2,6857 +2020-06-22 21:33:39.547,"Replace string with quotes, brackets, braces, and slashes in python","I have a string where I am trying to replace [""{\"" with [{"" and all \"" with "". +I am struggling to find the right syntax in order to do this, does anyone have a solid understanding of how to do this? +I am working with JSON, and I am inserting a string into the JSON properties. This caused it to put a single quotes around my inserted data from my variable, and I need those single quotes gone. I tried to do json.dumps() on the data and do a string replace, but it does not work. +Any help is appreciated. Thank you.","I would recommend maybe posting more of your code below so we can suggest a better answer. Just based on the information you have provided, I would say that what you are looking for are escape characters. I may be able to help more once you provide us with more info!",0.0,False,2,6857 +2020-06-23 15:32:04.937,How to calculate percentage in Python with very simple formula,"I've seen similar questions but it's shocking that I didn't see the answer I was, in fact, looking for. So here they are, both the question and the answer: +Q: +How to calculate simply the percentage in Python. +Say you need a tax calculator. To put it very simple, the tax is 18% of earnings. +So how much tax do I have to pay if I earn, say, 18 342? The answer in math is that you divide by 100 and multiply the result by 18 (or multiply with 18 divided by 100). But how do you put that in code? +tax = earnings / 100 * 18 +Would that be quite right?","The answer that best fitted me, especially as it implied no import, was this: +tax = earnings * 0.18 +so if I earned 18 342, and the tax was 18%, I should write: +tax = 18 342 * 0.18 +which would result in 3 301.56 +This seems trivial, I know, and probably some code was expected, moreover this form might be applicable not only in Python, but again, I didn't see the answer anywhere and I thought that it is, after all, the simplest.",0.0,False,1,6858 +2020-06-23 17:34:09.277,"In P4, how do i check if a change submitted to one branch is also submitted to another branch using command","I want to find out there is a p4 command that can find cl submitted in a depot branch from a cl submitted in another depot branch. +like - +if CL 123 was submitted to branch //code/v1.0/files/... +and same code changes were also submitted to another branch //code/v5.0/files/... +can i find out cl in 2nd branch from cl 123?","There are a few different methods; which one is easiest will depend on the exact context/requirements of what you're doing. +If you're interested in the specific lines of code rather than the metadata, p4 annotate is the best way. 
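Back to the string-replacement question above, a short str.replace sketch (the sample string is made up; note that the longer pattern must be replaced first):

    raw = '["{\\"a\\": 1}"]'                                 # made-up sample with escaped quotes
    fixed = raw.replace('["{\\"', '[{"').replace('\\"', '"')
    print(fixed)                                             # [{"a": 1}"]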
Use p4 describe 123 to see the lines of code changed in 123, and then p4 annotate -c v5.0/(file) to locate the same lines of code in v5.0 and see which changelist(s) introduced them into that branch. This method will work even if the changes were copied over manually instead of using Perforce's merge commands. +If you want to track the integration history (i.e. the metadata) rather than the exact lines of code (which may have been edited in the course of being merged between codelines, making the annotate method not work), the easiest method is to use the Revision Graph tool in P4V, which lets you visually inspect a file's branching history; you can select the revision from change 123 and use the ""highlight ancestors and descendants"" tool to see which revisions/changelists it is connected to in other codelines. This makes it easy to see the context of how many integration steps were involved, who did them, when they happened, whether there were edits in between, etc. +If you want to use the metadata but you're trying for a more automated solution, changes -i is a good tool. This will show you which changelists are included in another changelist via integration, so you can do p4 changes -i @123,123 to see the list of all the changes that contributed to change 123. On the other side (finding changelists in v5.0 that 123 contributed to), you could do this iteratively; run p4 changes -i @N,N for each changelist N in the v5.0 codeline, and see which of them include 123 in the output (it may be more than one).",0.6730655149877884,False,1,6859 +2020-06-24 01:35:15.063,Alpha_Vantage ts.get_daily ending with [0],"I am learning how to use Alpha_Vantage api and came across this line of code. I do not understand what is the purpose of [0]. +SATS = ts.get_daily('S58.SI', outputsize = ""full"")[0]","ts.get_daily() appears to return an array. +SATS is getting the 0 index of the array (first item in the array)",0.0,False,1,6860 +2020-06-24 06:47:05.090,how do I run two separate deep learning based model together?,"I trained a deep learning-based detection network to detect and locate some objects. I also trained a deep learning-based classification network to classify the color of the detected objects. Now I want to combine these two networks to detect the object and also classify color. I have some problems with combining these two networks and running them together. How do I call classification while running detection? +They are in two different frameworks: the classifier is based on the Keras and TensorFlow backend, the detection is based on opencv DNN module.","I have read your question and from that, I can infer that your classification network takes the input from the output of your first network(object locator). i.e the located object from your first network is passed to the second network which in turn classifies them into different colors. The entire Pipeline you are using seems to be a sequential one. Your best bet is to first supply input to the first network, get its output, apply some trigger to activate the second network, feed the output of the first net into the second net, and lastly get the output of the second net. You can run both of these networks in separate GPUs. +The Trigger that calls the second function can be something as simple as cropping the located object in local storage and have a function running that checks for any changes in the file structure(adding a new file). 
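To make the trailing [0] from the Alpha Vantage answer above concrete (the API key is a placeholder):

    from alpha_vantage.timeseries import TimeSeries

    ts = TimeSeries(key="YOUR_KEY", output_format="pandas")
    result = ts.get_daily("S58.SI", outputsize="full")  # a (data, metadata) pair
    SATS = result[0]                                    # [0] keeps just the price data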
If this function returns true you can grab that cropped object and run the network with this image as input.",0.0,False,1,6861 +2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: +ModuleNotFoundError: No module named 'pandas' +[11084] Failed to execute script test1 +Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.",Are you trying pip install pandas?,0.2655860252697744,False,3,6862 +2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: +ModuleNotFoundError: No module named 'pandas' +[11084] Failed to execute script test1 +Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.","For cx_freeze, inlcude pandas explicitly in the packages. Like in the example below - +build_exe_options = {'packages': ['os', 'tkinter', 'pandas']} +This should include the pandas module in you build.",0.1352210990936997,False,3,6862 +2020-06-24 18:24:37.047,ModuleNotFoundError: No module named 'pandas' when converting Python file to Executable using auto-py-to-exe,"I used auto-py-to-exe to convert a Python script into an executable file and it converts it to an executable without any problems, but when I launch the executable the following error happens: +ModuleNotFoundError: No module named 'pandas' +[11084] Failed to execute script test1 +Any ideas on how to fix this? I've tried many libraries to convert the Python file to and Executable and all give me the same error. I've tried with cx_Freeze, PyInstaller, py2exe, and auto-py-to-exe. All give me a ModuleNotFoundError, but when I run the script on the IDE it runs perfectly.","A script that runs in your IDE but not outside may mean you are actually working in a virtual environment. Pandas probably is not installed globally in your system. Try remembering if you had created a virtual environment and then installed pandas inside this virtual environment. +Hope it helped, +Vijay.",1.2,True,3,6862 +2020-06-25 05:00:30.313,Is there a python code that I can add to my program that will add it to start in windows 10?,"Currently, I have been scouring the internet for a code that will either add this program (something.exe) to the windows task scheduler or if that is not even an option how to add it to the windows reg key for a startup. I cannot find anything in terms of Python3, and I really hope it is not an answer that is right in front of my face. 
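Since the startup question above mentions the registry route, a hedged sketch of adding a program to the per-user Run key (the value name and path are made up):

    import winreg

    run_key = r"Software\Microsoft\Windows\CurrentVersion\Run"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, run_key, 0, winreg.KEY_SET_VALUE) as key:
        # registers the executable to start at logon for the current user
        winreg.SetValueEx(key, "something", 0, winreg.REG_SZ, r"C:\path\to\something.exe")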
Thanks!","Open the windows scheduler -> select ""create basic task"" -> fill out the desired times -> input the path to the script you want to execute.",0.0,False,1,6863 +2020-06-25 06:15:17.920,How do I run a downloaded repository's config in Python?,"I am trying to use sunnyportal-py. Relatively new to python, I do not understand step 2 in the README: +How to run + +Clone or download the repository. +Enter the directory and run: +PYTHONPATH=. ./bin/sunnyportal2pvoutput --dry-run sunnyportal.config +Enter the requested information and verify that the script is able to connect to Sunny Portal. +The information is saved in sunnyportal.config and can be edited/deleted if you misstype anything. +Once it works, replace --dry-run with e.g. --output to upload the last seven days output data to pvoutput or --status to upload data for the current day. +Add --quiet to silence the output. + +Could anyone help me? I have gone into a cmd.exe in the folder I have downloaded, I don't know how to correctly write the python path in the correct location. What should I paste into the command line? Thanks! +Edit : I would like to be able to do this on Windows, do tell me if this is possible.","The command at bullet 2 is to be typed at the commandline (You need to be in windows: cmd or powershell, Linux: bash, etc.. to be able to do this). + +PYTHONPATH=. ./bin/sunnyportal2pvoutput --dry-run sunnyportal.config + +The first part of the command code above indicates where your program is located. Go to the specific folder via commandline (windows: cd:... ; where .. is your foldername) and type the command. +The second part is the command to be executed. Its behind the ""--"" dashes. The program knows what to do. In this case: + +--dry-run sunnyportal.config + +running a validation/config file to see if the program code itself works; as indicated by ""dry run"". +In your case type at the location (while in cmd): + +""sunnyportal2pvoutput --dry-run sunnyportal.config"" + or -You can transfer your scripts on your phone memory in the qpython folder and open it using qpython for android.",0.0,False,2,6024 -2019-03-31 06:41:35.623,How does one transfer python code written in a windows laptop to a samsung android phone?,"I created numerous python scripts on my pc laptop, and I want to run those scripts on my android phone. How can I do that? How can I move python scripts from my windows pc laptop, and use those python scripts on my samsung adroid phone? -I have downloaded qpython from the google playstore, but I still don't know how to get my pc python programs onto my phone. I heard some people talk about ""ftp"" but I don't even know what that means. -Thanks","Send them to yourself via email, then download the scripts onto your phone and run them through qpython. -However you have to realize not all the modules on python work on qpython so your scripts may not work the same when you transfer them.",0.0,False,2,6024 -2019-04-01 16:47:12.417,how to find text before and after given words and output into different text files?,"I have a text file like this: - -... - NAME : name-1 - ... - NAME : name-2 - ... - ... - ... - NAME : name-n - ... - -I want output text files like this: - -name_1.txt : NAME : name-1 ... - name_2.txt : NAME : name-2 ... - ... - name_n.txt : NAME : name-n ... 
- -I have the basic knowledge of grep, sed, awk, shell scripting, python.","With GNU sed: -sed ""s/\(.*\)\(name-.*\)/echo '\1 \2' > \2.txt/;s/-/_/2e"" input-file - -Turn line NAME : name-2 into echo ""NAME : name-2"" > name-2.txt -Then replace the second - with _ yielding echo ""NAME : name-2"" > name_2.txt -e have the shell run the command constructed in the pattern buffer. - -This outputs blank lines to stdout, but creates a file for each matching line. -This depends on the file having nothing but lines matching this format... but you can expand the gist here to skip other lines with n.",0.0,False,1,6025 -2019-04-02 09:36:07.253,"Unable to parse the rows in ResultSet returned by connection.execute(), Python and SQLAlchemy","I have a task to compare data of two tables in two different oracle databases. We have access of views in both of db. Using SQLAlchemy ,am able to fetch rows from views but unable to parse it. -In one db the type of ID column is : Raw -In db where column type is ""Raw"", below is the row am getting from resultset . -(b'\x0b\x975z\x9d\xdaF\x0e\x96>[Ig\xe0/', 1, datetime.datetime(2011, 6, 7, 12, 11, 1), None, datetime.datetime(2011, 6, 7, 12, 11, 1), b'\xf2X\x8b\x86\x03\x00K|\x99(\xbc\x81n\xc6\xd3', None, 'I', 'Inactive') -ID Column data: b'\x0b\x975z\x9d\xdaF\x0e\x96>[_Ig\xe0/' -Actual data in ID column in database: F2588B8603004B7C9928BC816EC65FD3 -This data is not complete hexadecimal format as it has some speical symbols like >|[_ etc. I want to know that how can I parse the data in ID column and get it as a string.",bytes.hex() solved the problem,1.2,True,1,6026 -2019-04-02 12:30:37.360,How to install Python packages from python3-apt in PyCharm on Windows?,"I'm on Windows and want to use the Python package apt_pkg in PyCharm. -On Linux I get the package by doing sudo apt-get install python3-apt but how to install apt_pkg on Windows? -There is no such package on PyPI.",There is no way to run apt-get in Windows; the package format and the supporting infrastructure is very explicitly Debian-specific.,0.2012947653214861,False,1,6027 -2019-04-03 14:59:12.000,“Close and Halt” feature does not functioning in jupyter notebook launched under Canopy on macOs High Sierra,"When I done with my work, I try to close my jupyter notebook via 'Close and Halt' under the file menu. However it somehow do not functioning. -I am running the notebook from Canopy, version: 2.1.9.3717, under macOs High Sierra.","If you are running Jupyter notebook from Canopy, then the Jupyter notebook interface is not controlling the kernel; rather, Canopy's built-in ipython Qtconsole is. You can restart the kernel from the Canopy run menu.",0.3869120172231254,False,1,6028 -2019-04-03 17:51:59.123,Running an external Python script on a Django site,"I have a Python script which communicates with a Financial site through an API. I also have a Django site, i would like to create a basic form on my site where i input something and, according to that input, my Python script should perform some operations. -How can i do this? I'm not asking for any code, i just would like to understand how to accomplish this. How can i ""run"" a python script on a Django project? Should i make my Django project communicate with the script through a post request? 
Or is there a simpler way?","I agree with @Daniel Roseman -If you are looking for your program to be faster, maybe multi-threading would be useful.",0.0,False,2,6029 -2019-04-03 17:51:59.123,Running an external Python script on a Django site,"I have a Python script which communicates with a Financial site through an API. I also have a Django site, i would like to create a basic form on my site where i input something and, according to that input, my Python script should perform some operations. -How can i do this? I'm not asking for any code, i just would like to understand how to accomplish this. How can i ""run"" a python script on a Django project? Should i make my Django project communicate with the script through a post request? Or is there a simpler way?","Since you don't want code, and you didn't get detailed on everything required required, here's my suggestion: - -Make sure your admin.py file has editable fields for the model you're using. -Make an admin action, -Take the selected row with the values you entered, and run that action with the data you entered. - -I would be more descriptive, but I'd need more details to do so.",0.3869120172231254,False,2,6029 -2019-04-04 02:39:46.887,Tracking any change in an table on SQL Server With Python,"How are you today? -I'm a newbie in Python. I'm working with SQL server 2014 and Python 3.7. So, my issue is: When any change occurs in a table on DB, I want to receive a message (or event, or something like that) on my server (Web API - if you like this name). -I don't know how to do that with Python. -I have an practice (an exp. maybe). I worked with C# and SQL Server, and in this case, I used ""SQL Dependency"" method in C# to solve that. It's really good! -Have something like that in Python? Many thank for any idea, please! -Thank you so much.","I do not know many things about SQL. But I guess there are tools for SQL to detect those changes. And then you could create an everlasting loop thread using multithreading package to capture that change. (Remember to use time.sleep() to block your thread so that It wouldn't occupy the CPU for too long.) Once you capture the change, you could call the function that you want to use. (Actually, you could design a simple event engine to do that). I am a newbie in Computer Science and I hope my answer is correct and helpful. :)",0.0,False,1,6030 -2019-04-04 07:59:55.183,virtual real time limit (178/120s) reached,"I am using ubuntu 16 version and running Odoo erp system 12.0 version. -On my application log file i see information says ""virtual real time limit (178/120s) reached"". -What exactly it means & what damage it can cause to my application? -Also how i can increase the virtual real time limit?","Open your config file and just add below parameter : ---limit-time-real=100000",0.9866142981514304,False,1,6031 -2019-04-04 15:23:10.660,How to handle multiple major versions of dependency,"I'm wondering how to handle multiple major versions of a dependency library. -I have an open source library, Foo, at an early release stage. The library is a wrapper around another open source library, Bar. Bar has just launched a new major version. Foo currently only supports the previous version. As I'm guessing that a lot of people will be very slow to convert from the previous major version of Bar to the new major version, I'm reluctant to switch to the new version myself. -How is this best handled? As I see it I have these options - -Switch to the new major version, potentially denying people on the old version. 
-Keep going with the old version, potentially denying people on the new version. -Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time? -Separate the repository into two parts. Don't really want to do this. - -The ideal solution for me would be to have the same code base, where I could have some sort of C/C++ macro-like thing where, if the version is new, use new_bar_function, else use old_bar_function. When installing the library from PyPI, the already installed major version dictates which version is used. If no version is installed, install the newest. -Would much appreciate some pointers.","Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time? - -Yes, you could have a 1.x release (that supports the old version) and a 2.x release (that supports the new version) and release both simultaneously. This is a common pattern for packages that want to introduce a breaking change, but still want to continue maintaining the previous release as well.",0.2012947653214861,False,1,6032 -2019-04-05 16:28:56.133,How do I apply Q-learning to an OpenAI-gym environment where multiple actions are taken at each time step?,"I have successfully used Q-learning to solve some classic reinforcement learning environments from OpenAI Gym (e.g. Taxi, CartPole). These environments allow for a single action to be taken at each time step. However, I cannot find a way to solve problems where multiple actions are taken simultaneously at each time step. For example, in the Roboschool Reacher environment, 2 torque values - one for each axis - must be specified at each time step. The problem is that the Q matrix is built from (state, action) pairs. However, if more than one action is taken simultaneously, it is not straightforward to build the Q matrix. -The book ""Deep Reinforcement Learning Hands-On"" by Maxim Lapan mentions this but does not give a clear answer, see the quotation below. - -Of course, we're not limited to a single action to perform, and the environment could have multiple actions, such as pushing multiple buttons simultaneously or steering the wheel and pressing two pedals (brake and accelerator). To support such cases, Gym defines a special container class that allows the nesting of several action spaces into one unified action. - -Does anybody know how to deal with multiple actions in Q-learning? -PS: I'm not talking about the issue ""continuous vs discrete action space"", which can be tackled with DDPG.","You can take one of two approaches, depending on the problem: - -Think of the set of actions you need to pass to the environment as independent and make the network output action values for each one (apply softmax separately) - so if you need to pass two actions, the network will have two heads, one for each axis. -Think of them as dependent and look at the Cartesian product of the sets of actions, and then make the network output a value for each product - so if you have two actions that you need to pass and 5 options for each, the size of the output layer will be 5*5=25, and you just use softmax on that.",0.6730655149877884,False,1,6033 -2019-04-06 19:50:46.103,How to install python3.6 in parallel with python 2.7 in Ubuntu 18,I am setting up to start Python for data analytics and want to install Python 3.6 on Ubuntu 18.04.
Shall I run both versions in parallel or overwrite 2.7 and how? I am getting ambiguous methods when searching online.,Try pyenv and/or pipenv. Both are excellent tools for maintaining local Python installations.,0.0,False,1,6034 -2019-04-07 08:00:53.180,how to display the month in form view? (Odoo11),"Please, how do I display the month in the form? Example: -07/04/2019: I want to change it to 07 April, 2019 -Thank you in advance","Try the following steps: - -Go to Translations > Languages -Open the record with your current language. -Edit the date format with %d %B, %Y",0.3869120172231254,False,1,6035 -2019-04-07 14:08:01.350,How to fix print((double parentheses)) after 2to3 conversion?,"When migrating my project to Python 3 (2to3-3.7 -w -f print *), I observed that a lot of (but not all) print statements became print((...)), so these statements now print tuples instead of performing the expected behavior. I gather that if I'd used -p, I'd be in a better place right now because from __future__ import print_function is at the top of every affected module. -I'm thinking about trying to use sed to fix this, but before I break my teeth on that, I thought I'd see if anyone else has dealt with this before. Is there a 2to3 feature to clean this up? -I do use version control (git) and have commits immediately before and after (as well as the .bak files 2to3 creates), but I'm not sure how to isolate the changes I've made from the print situations.",If your code already has print() functions you can use the -x print argument to 2to3 to skip the conversion.,0.6133572603953825,False,1,6036 -2019-04-08 06:39:59.923,"Windowed writes in python, e.g. to NetCDF","In Python, how can I write subsets of an array to disk without holding the entire array in memory? -The xarray input/output docs note that xarray does not support incremental writes, only incremental reads, except by streaming through dask.array. (Also that modifying a dataset only affects the in-memory copy, not the connected file.) The dask docs suggest it might be necessary to save the entire array after each manipulation?","This can be done using netCDF4 (the Python library of low-level NetCDF bindings). Simply assign to a slice of a dataset variable, and optionally call the dataset .sync() method afterward to ensure no delay before those changes are flushed to the file. -Note this approach also provides the opportunity to progressively grow a dimension of the array (by calling createDimension with size None, making it the first dimension of a variable, and iteratively assigning to incrementally larger indices along that dimension of the variable). -Although random-access window (i.e. subset) writes appear to require the lower-level package, more systematic subset writes (eventually covering the entire array) can be done incrementally with xarray (by specifying a chunk size parameter to trigger use of the dask.array backend), provided that your algorithm is refactored so that the main loop occurs in the dask/xarray store-to-file call. This means you will not have explicit control over the sequence in which chunks are generated and written.",0.0,False,1,6037 -2019-04-08 14:03:38.120,Is there any way to hide or encrypt your python code for edge devices? Any way to prevent reverse engineering of python code?,"I am trying to make a smart IoT device (capable of performing smart computer vision operations on the edge device itself). A deep learning algorithm (written in Python) is implemented on a Raspberry Pi.
Now, while shipping this product (software + hardware) to my customer, I want to ensure that no one can log in to the Raspberry Pi and get access to my code. The flow should be something like: whenever someone logs into the Pi, there should be some kind of key that needs to be input to get access to the code. But in that case, how will the OS get access to the code and run it (without the key)? Then I may have to store the key locally, but there is still a chance someone could get access to the key and thus to the code. I have applied for a patent for my work and want to protect it. -I am thinking of encrypting my code (written in Python) and just shipping the executable version. I tried pyinstaller for it, but somehow there is a script available on the internet that can reverse engineer it. -Now I am a little afraid, as it could leak all my effort of 6 months in one go. Please suggest a better way of doing this. -Thanks in advance.","Keeping the code on your server and using internet access is the only way to keep the code private (maybe). Any type of distributed program can be taken apart eventually. You can't (and possibly shouldn't) try to keep people from getting inside devices they own and have in their physical possession. If you have your property under patent, it shouldn't really matter if people are able to see the code, as only you will be legally able to profit from it. -As a general piece of advice, code is really difficult to control access to. Trying to encrypt software or apply software keys to it or something like that is at best a futile attempt and at worst can often cause issues with software performance and usability. The best solution is often to link a piece of software with some kind of custom hardware device which is necessary and only you sell. That might not be possible here since you're using generic hardware, but food for thought.",-0.3869120172231254,False,1,6038 -2019-04-08 15:02:18.437,How to classify unlabelled data?,I am new to Machine Learning. I am trying to build a classifier that classifies text as having a URL or not having a URL. The data is not labelled. I just have textual data. I don't know how to proceed with it. Any help or examples is appreciated.,"Since it's text, you can use the bag-of-words technique to create vectors. - -You can use cosine similarity to cluster texts of a common type. -Then use a classifier, which would depend on the number of clusters. -This way you have a labeled training set. - -If you have two clusters, a binary classifier like logistic regression would work. -If you have multiple classes, you need to train a model based on multinomial logistic regression -or train multiple logistic models using the one-vs-rest technique. - -Lastly, you can test your model using k-fold cross validation.",0.2012947653214861,False,1,6039 -2019-04-08 16:54:25.427,Django - how to visualize signals and save overrides?,"As a project grows, so do dependencies and event chains, especially in overridden save() methods and post_save and pre_save signals. -Example: -An overridden A.save creates two objects related to A - B and C. When C is saved, the post_save signal is invoked that does something else, etc... -How can these event chains be made more clear? Is there a way to visualize (generate automatically) such chains/flows? I'm not looking for an ERD nor a class diagram. I need to be sure that doing one thing in one place won't affect something on the other side of the project, so a simple visualization would be best. -EDIT -To be clear, I know that it would be almost impossible to check dynamically generated signals.
I just want to check all (not dynamically generated) post_save, pre_save, and overridden save methods and visualize them so I can see immediately what is happening and where when I save something.","(Too long to fit into a comment, lacking code to be a complete answer) -I can't mock up a ton of code right now, but another interesting solution, inspired by Mario Orlandi's comment above, would be some sort of script that scans the whole project and searches for any overridden save methods and pre- and post-save signals, tracking the class/object that creates them. It could be as simple as a series of regular expressions that look for class definitions followed by any overridden save methods inside. -Once you have scanned everything, you could use this collection of references to create a dependency tree (or set of trees) based on the class name and then topologically sort each one. Any connected components would illustrate the dependencies, and you could visualize or search these trees to see the dependencies in a very easy, natural way. I am relatively naive in Django, but it seems you could statically track dependencies this way, unless it is common for these methods to be overridden in multiple places at different times.",0.4540544406412981,False,1,6040 -2019-04-09 07:36:02.467,Capturing time between HTML form submit action and printing response,"I have a Python Flask application with an HTML form which accepts a few inputs from the user and uses them in a Python program, which returns the processed values back to the Flask application's return statement. -I wanted to capture the time taken for the whole processing and rendering of the output data in the browser, but I am not sure how to do that. At present I have captured the time taken by the Python program to process the input values, but it doesn't account for the complete time between the ""submit"" action and rendering the output data.",Use an AJAX request to submit the form. Fetch the time on clicking the button and again after getting the response; then calculate the difference.,0.0,False,1,6041 -2019-04-09 09:15:48.837,"How to extract images from PDF or Word, together with the text around images?","I found there are some libraries for extracting images from PDF or Word, like docx2txt and pdfimages. But how can I get the content around the images (for example, there may be a title below the image)? Or get the page number of each image? -Some other tools like PyPDF2 and minecart can extract images page by page. However, I cannot run that code successfully. -Is there a good way to get some information about the images? (from the image obtained from docx2txt or pdfimages, or another way to extract images with info)",docx2python pulls the images into a folder and leaves -----image1.png---- markers in the extracted text. This might get you close to where you'd like to go.,0.0,False,1,6042 -2019-04-09 18:46:58.267,What is this audio datatype and how do I convert it to wav/l16?,"I am recording audio in a web browser and sending it to a Flask backend. From there, I want to transcribe the audio using Watson Speech to Text. I cannot figure out what data format I'm receiving the audio in and how to convert it to a format that works for Watson. -I believe Watson expects a bytestring like b'\x0c\xff\x0c\xffd. The data I receive from the browser looks like [ -4 -27 -34 -9 1 -8 -1 2 10 -28], which I can't directly convert to bytes because of the negative values (using bytes() gives me that error). -I'm really at a loss for what kind of conversion I need to be making here.
Watson doesn't return any errors for any kind of data I throw at it; it just doesn't respond.","Those values should be fine, but you have to define how you want them stored before getting the bytes representation of them. -You'd simply want to convert those values to signed 2-byte/16-bit integers, then get the bytes representation of those.",1.2,True,1,6043 -2019-04-09 19:37:11.227,how do I implement SSIM as a loss function in Keras?,"I need SSIM as a loss function in my network, but my network has 2 outputs. I need to use SSIM for the first output and cross-entropy for the next. The loss function is a combination of them. However, I need a higher SSIM and a lower cross-entropy, so I think the plain combination of them isn't right. Another problem is that I could not find an implementation of SSIM in Keras. -TensorFlow has tf.image.ssim, but it accepts images and I do not think I can use it in a loss function, right? Could you please tell me what I should do? I am a beginner in Keras and deep learning and I do not know how I can use SSIM as a custom loss function in Keras.","Another choice would be -ssim_loss = 1 - tf.reduce_mean(tf.image.ssim(target, output, max_val=self.max_val)) -then -combine_loss = mae (or mse) + ssim_loss -In this way, you are minimizing both of them.",0.0,False,1,6044 -2019-04-11 11:57:15.603,KMeans: Extracting the parameters/rules that fill up the clusters,"I have created a 4-cluster k-means customer segmentation in scikit-learn (Python). The idea is that every month, the business gets an overview of the shifts in size of our customers in each cluster. -My question is how to make these clusters 'durable'. If I rerun my script with updated data, the 'boundaries' of the clusters may slightly shift, but I want to keep the old clusters (even though they fit the data slightly worse). -My guess is that there should be a way to extract the parameters that decide which case goes to its respective cluster, but I haven't found the solution yet. -I would appreciate any help","Got the answer in a different topic: -Just record the cluster means. Then when new data comes in, compare it to each mean and put it in the one with the closest mean.",0.3869120172231254,False,1,6045 -2019-04-11 13:25:10.313,how to count number of days via cron job in Odoo 10?,"I am setting up a script for counting the number of days as each day passes in Odoo. -How can I count the days passing until the end of the month? -For example: I have set two dates to find the days between them. I need a function which compares the number of days with each passing day. When the remaining days reach 0, it will call a cron job.","Write a scheduled action that runs Python code daily. The first thing that this code should do is check the number of days you talk about and, if it is 0, trigger whatever action is needed.",0.0,False,1,6046 -2019-04-12 04:46:43.223,How to add replies (child comments) to comments on feed in getstream.io python,I am using getstream.io to create feeds. Users can follow feeds and add reactions like likes and comments. If a user adds a comment on a feed and another user wants to reply to that comment - how can I achieve this and also retrieve all replies to the comment?,You can add the child reaction by using reaction_id,0.0,False,1,6047 -2019-04-12 12:06:29.347,how to find the similarity between two documents,"I have tried using the similarity function of spaCy to get the best matching sentence in a document.
However, it fails for bullet points because it considers each bullet a sentence, and the bullets are incomplete sentences (e.g. sentence 1: ""password should be min 8 characters long"", sentence 2 in the form of a bullet: ""8 characters""). It does not know the bullet is referring to the password, and so my similarity comes out very low.","Sounds to me like you need to do more text processing before attempting to use similarity. If you want bullet points to be considered part of a sentence, you need to modify your spaCy pipeline to do so.",0.0,False,2,6048 -2019-04-12 12:06:29.347,how to find the similarity between two documents,"I have tried using the similarity function of spaCy to get the best matching sentence in a document. However, it fails for bullet points because it considers each bullet a sentence, and the bullets are incomplete sentences (e.g. sentence 1: ""password should be min 8 characters long"", sentence 2 in the form of a bullet: ""8 characters""). It does not know the bullet is referring to the password, and so my similarity comes out very low.","Bullets are considered, but the thing is it doesn't understand what ""8 characters"" is referring to, so I thought of finding the heading of the paragraph and replacing the bullets with it. -I found the headings using python-docx, but it doesn't read bullets while reading the document. Is there a way I can read them using python-docx? -Is there any way I can find the headings of a paragraph in spaCy? -Is there a better approach for this?",0.0,False,2,6048 -2019-04-12 13:48:58.717,Trying to Import Infoblox Module in Python,"I am trying to write some code in Python to retrieve some data from Infoblox. To do this I need to import the Infoblox module. -Can anyone tell me how to do this?","Before you can import infoblox you need to install it: - -open a command prompt (press the Windows button, then type cmd) -if you are working in a virtual environment, access it with activate yourenvname (otherwise skip this step) -execute pip install infoblox to install infoblox, then you should be fine -to test it from the command prompt, execute python, and then try executing import infoblox - -The same process works for basically every package.",0.0,False,1,6049 -2019-04-12 21:52:02.810,Why do I keep getting this error when trying to create a virtual environment with Python 3 on macOS?,"So I'm following a book that's teaching me how to make a Learning Log using Python and the Django web framework. I was asked to go to a terminal and create a directory called ""learning_log"" and change the working directory to ""learning_log"" (did that with no problems). However, when I try to create the virtual environment, I get an error (seen at the bottom of this post). Why am I getting this error and how can I fix it to move forward in the book? -I already tried installing virtualenv with pip and pip3 (as the book prescribed).
I was then instructed to enter the command: -learning_log$ virtualenv ll_env -And I get: -bash: virtualenv: command not found -Since I'm using Python 3.6, I tried: -learning_log$ virtualenv ll_env --python=python3 -And I still get: -bash: virtualenv: command not found -Brandons-MacBook-Pro:learning_log brandondusch$ python -m venv ll_env -Error: Command '['/Users/brandondusch/learning_log/ll_env/bin/python', '-Im', 'ensurepip', '--upgrade', '- --default-pip']' returned non-zero exit status 1.","For Ubuntu: -The simple test is: if virtualenv --version returns something like virtualenv: command not found and which virtualenv prints nothing on the console, then virtualenv is not installed on your system. Please try to install it using pip3 install virtualenv or sudo apt-get install virtualenv, but the latter might install a slightly older version. + +""sunnyportal2pvoutput.py --dry-run sunnyportal.config"" (without the environment variables (Python path) set). + +Note: PYTHONPATH is an environment variable. This can be added via: Control Panel\All Control Panel Items\System\ --> bullet Advanced System Settings --> button ""environment variables"". Then you can select to add it to ""Variables for user ""username"""" or ""system variables"". Remember to reboot thereafter to make the change effective immediately. +Update 1 (pip install sunnyportal): + +Go to cmd. +Type ""pip search sunnyportal"" + +Result: + +Microsoft Windows [Version 10.0.18363.836] (c) 2019 Microsoft +Corporation. All rights reserved. +C:\Windows\System32>pip search +sunnyportal +sunnyportal-py (0.0.4) - A Python client for SMA sunny portal +C:\Windows\System32> + +If found, then type: +""pip install sunnyportal""",0.0,False,1,6864 +2020-06-25 08:51:15.257,Run one file among multiple files in azure webjobs,"I am trying to run a continuous Azure WebJob for Python. +I have 6 files where main.py is the main file; the other files import each other internally, and finally everything is called from main.py. Now when I try to run it, only the first Python file gets executed, but I want only main.py to be executed when the WebJob starts, not anything else. How do I achieve that?","This is quite simple. In an Azure WebJob, if the file name starts with run, then this file has the highest priority to execute. +So the easiest way is just renaming main.py to run.py. +Or add a run.py, then call main.py within it.",1.2,True,1,6865 +2020-06-25 10:32:25.647,How do you download online libraries on python?,I am trying to download YouTube videos using Python and for the code to work I need to install the pytube3 library but I am very new to coding so I am not sure how to do it.,"You could use +python3 -m pip install pytube3",0.1352210990936997,False,1,6866 +2020-06-25 16:48:51.563,How to check if image contains text or not?,"Given any image of a scanned document, I want to check whether or not it is an empty page. +I know I can send it to AWS Textract - but it will cost money for nothing. +I know I can use pytesseract, but maybe there is a more elegant and simple solution? +Or given a .html file that represents the text of the image - how do I check that it shows a blank page?","We can use pytesseract for this application by thresholding the image and passing it to Tesseract.
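+For instance, a rough sketch of that idea (the file name and the 3-character cutoff are assumptions for illustration, not anything fixed):
+import cv2
+import pytesseract
+# Load the scan as grayscale and binarize it with Otsu's threshold so Tesseract gets a clean black-on-white page.
+gray = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)
+_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+# OCR the page and treat it as empty if almost nothing is recognized.
+text = pytesseract.image_to_string(binary)
+print('empty page' if len(text.strip()) < 3 else 'contains text')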
However, if you have a .html file that represents the text of the image, you can use BeautifulSoup to extract the text from it and check whether it is empty. Still, this is a roundabout approach.",0.2012947653214861,False,1,6867 +2020-06-26 15:06:57.583,How to profile my APIs for concurrent requests?,"Scenario +Hi, I have a collection of APIs that I run from Postman using POST requests. The Flask and Redis servers are set up using Docker. +What I'm trying to do +I need to profile my setup/APIs in a high-traffic environment. So, + +I need to create concurrent requests calling these APIs + +The profiling aims to get the system conditions with respect to memory (total memory consumed by the application), time (total execution time taken to create and execute the requests) and CPU time (or the percentage of CPU consumption) + + +What I have tried +I am familiar with some memory profilers like mprof and time profilers like line_profiler. But I could not find a profiler for CPU consumption. I have run the above two profilers (mprof and line_profiler) on a single execution to get the line-by-line profiling results for my code. But this focuses on the function-wise results. I have also created parallel requests earlier using asyncio, etc., but that was for some simple API-like programs without Postman. My current APIs work with a lot of data in the body section of Postman. +Where did I get stuck +With Docker, this problem gets trickier for me. + +Firstly, I am unable to get concurrent requests + +I do not know how to profile my APIs when using Postman (perhaps there is an option to do it without Postman) with respect to the three parameters: time, memory and CPU consumption.","I suppose that you've been using the embedded Flask server (dev server), which is NOT production ready and, by default, supports only one request at a time. For concurrent requests you should be looking to use Gunicorn or some other WSGI server like uWSGI. +Postman is only a client of your API; I don't see its importance here. If you want to do a stress test or something like that, you can write your own script or use known tools, like JMeter. +Hope it helps!",0.0,False,1,6868 +2020-06-26 17:16:07.260,How to send clickable link and Mail in Chatterbot flask app,"I am using ChatterBot, and I want to send a clickable link and mail depending on the message sent by the user. I can't find any link or reference on how to do this",Try using linkify: pip install autolink... linkify(bot.get_response(usr_text)),1.2,True,1,6869 +2020-06-26 17:58:13.033,How to train a model for recognizing two objects?,"I've got two separate models, one for mask recognition and another for face recognition. The problem now is: how do I combine both models so that they perform in unison as a single model which is able to: + +Recognize whether or not a person is wearing a mask +Simultaneously recognize who that person is if they aren't wearing a mask, apart from warning about the missing mask. + +What possibilities do I have to solve this problem?",You don't have to combine the two models and train them together; you have to train them separately. After training you first check with the mask detection model what the probability/confidence score is that a mask is detected; if the probability is low - say 40%-45% - then you use the other model that recognises the person.,0.2012947653214861,False,1,6870 +2020-06-26 20:38:05.160,model for handwritten text recognition,"I have been attempting to create a model that, given an image, can read the text from it.
I am attempting to do this by implementing a CNN, an RNN, and CTC. I am doing this with TensorFlow and Keras. There are a couple of things I am confused about. For reading single digits, I understand that the last layer in the model should have 10 nodes, since those are the options. However, for reading words, aren't there infinitely many options? So how many nodes should I have in my last layer? Also, I am confused as to how I should add my CTC to my Keras model. Is it as a loss function?","I see two options here: + +You can construct your model to recognize the separate letters of those words; then there are as many nodes in the last layer as there are letters and symbols in the alphabet that your model will read. +You can make the output of your model a vector and then ""decode"" this vector using some other tool that can encode/decode words as vectors. One such tool I can think of is word2vec. Or there's the option to download some database of possible words and create such a tool yourself. +The description of your model is very vague. If you want to get more specific help, then you should provide more info, e.g. some model architecture.",0.0,False,1,6871 +2020-06-27 04:24:24.573,creating an api to update postgres db with a json or yaml payload,"I decided to ask this here after googling for hours. I want to create my own API endpoint on my own server. +Essentially I want to be able to just send a YAML payload to my server; when it is received, I want to kick off my Python scripts to parse the data and update the database. I'd also like to be able to retrieve data with a different call. I can code the back-end stuff, I just don't know how to make that bridge between hitting the server from outside and having the server do the things in the back-end in Python. +Is Django the right way? I've spent a couple of days doing Django tutorials - really cool stuff - but I don't really need a website right now; it's just that whenever I search for web and Python together, Django pretty much always comes up. I don't need any Python code help, just some direction on how to create that bridge. +Thanks.",DRF (Django REST Framework) was what I was looking for. As suggested.,1.2,True,1,6872 +2020-06-28 12:56:23.870,PySimpleGui: how to remove event-delay in Listboxes?,"When reading events from a simple button in PySimpleGui, spamming this button with mouse clicks will generate an event for each of the clicks. +When you try to do the same with Listboxes (by setting enable_events to True for this element), it seems like there is a timeout after each generated event. If you click once every second, it will generate all the events. But if you spam-click it like before, it will only generate the first event. +I'm not sure if this behavior is intended (I only started learning PySimpleGui today), but is there a way to get rid of this delay? I tried checking the docs but can't find it mentioned anywhere.","I think the reason is that a Listbox reacts to click events, but also to double-click events. A Button does not. This behavior looks consistent with that.",0.0,False,1,6873 +2020-06-28 19:56:59.520,How to start multiple py files (2 discord bots) from one file at once,"I'm wondering how I would run my 2 Discord bots at once from the main file, app.py. +And when I kill that process (the main file's process), they both should stop. +Tried os.system, didn't work. Tried multiple subprocess.Popen, didn't work. +Am I doing something wrong? +How would I do that?",I think the good design is to have one bot per .py file.
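+A rough sketch of that layout (assuming the discord.py library; file names and tokens are placeholders rather than anything from the original question):
+# helpers.py - code shared by both bots
+def format_reply(text):
+    return text.upper()
+# bot1.py - started on its own with: python bot1.py
+import discord
+import helpers
+client = discord.Client()
+@client.event
+async def on_message(message):
+    if message.author != client.user:
+        await message.channel.send(helpers.format_reply(message.content))
+client.run('BOT1_TOKEN')  # placeholder token
+# bot2.py looks the same with its own client and token; each file is its own process so killing one does not stop the other.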
If they both need code that is in app.py then they should 'import' the common code. Doing that you can just run both bot1.py and bot2.py.,0.0,False,1,6874 +2020-06-28 21:34:07.527,pip3 install of Jupyter and Notebook problem when running,"I have tried all of the things here on Stack and on other sites with no joy... I'd appreciate any suggestions please. +I have installed Jupyter and Notebook using pip3 - please note that I updated pip3 before doing so. +However, when trying to check the versions with jupyter --version and notebook --version, my terminal returns command not found for both. I have also tried to run jupyter, notebook and jupyter notebook, and I am still getting the same message. +I have spent nearly two days now trying to sort this out... I'm on the verge of giving up. +I have a feeling it has something to do with my PATH variable maybe not pointing to where the jupyter executable is stored, but I don't know how to find out where notebook and jupyter are stored on my system. +many thanks in advance +Bobby","So to summarise, this is what I have found on this issue (in my experience): +To run the Jupyter app you can use the jupyter-notebook command, and this works, but why? This is because jupyter-notebook is stored in /usr/local/bin, which is normally always included in the PATH variable. +I then discovered that the jupyter notebook and jupyter --version commands will work if you do the following: + +open my ~/.bash_profile file +add the following to the bottom of the file: export PATH=$PATH:/Users/your-home-directory/Library/Python/3.7/bin + +This should add the location where jupyter is installed to your PATH variable. +Alternatively, as suggested by @HackLab, we can also do the following: + +python3 -m jupyter notebook + +Hopefully, this will give anyone else having the same issues I had an easier time resolving this issue.",1.2,True,2,6875 +2020-06-28 21:34:07.527,pip3 install of Jupyter and Notebook problem when running,"I have tried all of the things here on Stack and on other sites with no joy... I'd appreciate any suggestions please. +I have installed Jupyter and Notebook using pip3 - please note that I updated pip3 before doing so. +However, when trying to check the versions with jupyter --version and notebook --version, my terminal returns command not found for both. I have also tried to run jupyter, notebook and jupyter notebook, and I am still getting the same message. +I have spent nearly two days now trying to sort this out... I'm on the verge of giving up. +I have a feeling it has something to do with my PATH variable maybe not pointing to where the jupyter executable is stored, but I don't know how to find out where notebook and jupyter are stored on my system. +many thanks in advance +Bobby","Have you tried locate jupyter? It may tell you where jupyter is on your system. +Also, why not try installing Jupyter via Anaconda to avoid the hassle?",0.0814518047658113,False,2,6875 +2020-06-30 01:16:27.903,How do I use a cron job in order to insert events into Google Calendar?,"I wrote a Python script that allows me to retrieve calendar events from an externally connected source and insert them into my Google Calendar thanks to the Google Calendar API. It works locally when I execute the script from my command line, but I would like to make it happen automatically so that the externally added events pop up in my Google Calendar automatically.
+It appears that a cron job is the best way to do this, and given that I used the Google Calendar API, I thought it might be helpful to use Cloud Functions with Cloud Scheduler in order to make it happen. However, I really don't know where to start or whether this is even possible, because accessing the API requires OAuth with Google to my personal Google account, which is something I don't think a service account (which I think I need) can do on my behalf. +What are the steps I need to take in order to allow the script, which I manually run and which authenticates me with Google Calendar, to run every 60 seconds, ideally in the cloud, so that I don't need to have my computer on at all times? +Things I’ve tried to do: +I created a service account with full permissions and tried to create an HTTP-trigger event that would theoretically run the script when the created URL is hit. However, it just returns an HTTP 500 error. +I tried using Pub/Sub event targets to listen and execute the script, but that doesn’t work either. +Something I’m confused about: +With either account, there needs to be a credentials.json file in order to log in; how does this file get “deployed” alongside the main function? Along with the token.pickle file that gets created when the authentication happens for the first time.","The way a service account works is that it needs to be preauthorized. You would take the service account email address and share a calendar with it like you would with any other user. The catch here is that you should only be doing this with calendars you, the developer, control. If these are calendars owned by others, you shouldn't be using a service account. +The way OAuth2 works is that a user is displayed a consent screen to grant your application access to their data. Once the user has granted you access, and assuming you requested offline access, you should have a refresh token for that user's account. Using the refresh token, you can request a new access token at any time. So the trick here would be storing the user's refresh token in a place that your script can access; then, when the cron job runs, the first thing it needs to do is request a new access token using its refresh token. +So the only way you will be able to do this as a cron job is if you have a refresh token stored for the account you want to access. Otherwise it will require opening a web browser to request the user's consent, and you can't do that with a cron job.",0.6730655149877884,False,1,6876 +2020-06-30 08:51:32.650,Python FBX SDK – How to enable auto-complete?,"I am using PyCharm to code with the Python FBX SDK, but I don't know how to enable auto-complete. I have to look at the documentation for function members. It's very tedious. So, does anyone know how to enable auto-complete for the Python FBX SDK in the editor? +Thanks!","Copy these two files +[PATH_TO_YOUR_MOBU]\bin\config\Python\pyfbsdk_gen_doc.py +[PATH_TO_YOUR_MOBU]\bin\config\Python\pyfbsdk_additions.py +to another folder, like +d:\pyfbsdk_autocomplete for instance. +Rename the file pyfbsdk_gen_doc.py to pyfbsdk.py. +Add the folder to your interpreter paths in PyCharm. (Interpreter Settings, Show All, Show paths for interpreter)",1.2,True,1,6877 +2020-07-01 02:37:30.927,Must I install Django for every single project I make?,"I am new to the Python programming language and Django. I am learning about web development with Django; however, each time I create a new project in PyCharm, it doesn't recognize the django module, so I have to install it again. Is this normal? Because I've installed Django like 5 times.
It doesn't seem correct to me; there must be a way to install Django once and for all and not have to use 'pip install django' for each new project I create. I am sure there must be a way, but I simply don't know it. I think I have to add Django to PATH, but I really don't know how (just guessing). I will be thankful if anyone can help me :)","PyCharm runs each project in a venv. A venv is an isolated duplicate (sort of) of Python (the interpreter) and other scripts. To use your main interpreter, change your interpreter location. The three folders (where your project is, along with your other files) are just that. I think there is an option to inherit packages. I like to create a file called requirements.txt and put all my modules there. Comment for further help. +In conclusion, this is normal.",1.2,True,1,6878 +2020-07-01 22:53:41.403,How to show messages in Python?,"I am new to Django and trying to create an application. +My scenario is: +I have a form on which there are many items, and users can click Add to Cart to add those items to the Cart. I am validating that the item should be added to the Cart only if the user is logged in; otherwise a message or dialogue box must appear saying please log in or sign up first. +I was able to verify the authentication, but somehow I am not able to show the message if the user is not logged in. +For now I have tried the following things: + +Using session messages, but it needs care in so many places about when to delete or when to show the message +Tried using the Django Messages Framework; I checked all the configuration in settings.py and everything seems correct, but the messages somehow do not show up on the HTML form + +Can anyone help me here? +I want to know an approach where I can authenticate the user, and if the user is not logged in, a dialogue box or message should appear saying Please login or Signup. It should go away when the user refreshes the page.","If you are using render() in views.py, you could add a boolean value to the context, +i.e. render(request, ""template_name.html"", {""is_auth"": True}) +Presumably you are doing auth on the server side, so you could tackle it this way. +Not a great fix, but it might help.",0.0,False,1,6879 +2020-07-02 20:11:13.097,installing OpenCV on Mac Catalina,"I have successfully installed OpenCV 4.3.0 on macOS Catalina; Python 3.8 is installed as well, but when I try to import cv2, I get a Module not found error. +Please, how do I fix this? +thanks in advance.",Can you try pip install opencv-python?,0.0,False,2,6880 +2020-07-02 20:11:13.097,installing OpenCV on Mac Catalina,"I have successfully installed OpenCV 4.3.0 on macOS Catalina; Python 3.8 is installed as well, but when I try to import cv2, I get a Module not found error. +Please, how do I fix this? +thanks in advance.","I was having issues installing OpenCV on my MacBook - Python version 3.6 (I downgraded it for TF 2.0) and macOS Mojave 10.14. Brew, conda and pip - none of the three seemed to work for me. So I went to [https://pypi.org/project/opencv-python/#files] and downloaded the .whl that was suitable for my combo of Python and macOS versions. After this, I navigated to the folder where it was downloaded and executed pip install ./opencv_python-4.3.0.36-cp36-cp36m-macosx_10_9_x86_64.whl",0.0,False,2,6880 +2020-07-02 22:37:28.507,DIY HPC cluster to run Jupyter/Python notebooks,"I recently migrated my Python / Jupyter work from a MacBook to a refurbished Gen 8 HP rackmounted server (192GB DDR3, 2 x 8C Xeon E5-2600), which I got off Amazon for $400.
The extra CPU cores have dramatically improved the speed of fitting my models, particularly for the decision tree ensembles that I tend to use a lot. I am now thinking of buying additional servers from that era (early-mid 2010s) (either dual or quad-socket Intel Xeon E5, E7 v1/v2) and wiring them up as a small HPC cluster in my apartment. Here's what I need help deciding: + +Is this a bad idea? Am I better off buying a GPU (like a GTX 1080)? The reason I am reluctant to go the GPU route is that I tend to rely on sklearn a lot (that's pretty much the only thing I know and use). And from what I understand, model training on GPUs is not currently a part of the sklearn ecosystem. All my code is written in numpy/pandas/sklearn. So, there will be a steep learning curve and backward compatibility issues. Am I wrong about this? + +Assuming (1) is true and CPUs are indeed better for me in the short term: how do I build the cluster and run Jupyter notebooks on it? Is it as simple as buying an additional server, designating one of the servers as the head node, connecting the servers through Ethernet, installing CentOS / Rocks on both machines, and starting the Jupyter server with IPython Parallel (?) + +Assuming (2) is true, or at least partly true: what other hardware / software do I need to get? Do I need an Ethernet switch? Or if I am connecting only two machines, is there no need for it? Or do I need a minimum of three machines to utilize the extra CPU cores and thus need a switch? Do I need to install CentOS / Rocks? Or are there better, more modern alternatives for the software layer? For context, right now I use openSUSE on the HP server, and I am pretty much a rookie when it comes to operating systems and networking. + +How homogeneous should my hardware be? Can I mix and match different frequency CPUs and memory across the machines? For example, having 1600 MHz DDR3 memory in one machine, 1333 MHz DDR3 in another? Or using 2.9 GHz E5-2600v1 and 2.6 GHz E5-2600v2 CPUs? + +Should I be worried about power? I.e. can I safely plug three rackmounted servers into the same power strip in my apartment? There's one outlet where I know that if I plug my hairdryer in, the lights go out. So I should probably avoid that one :) Seriously, how do I run 2-3 multi-CPU machines under load and avoid tripping the circuit breaker? + + +Thank you.","Nvidia's rapids.ai implements a fair bit of sklearn on GPUs. Whether that covers the parts you use, only you can say. + +Using Jupyter notebooks for production is known to be a mistake. + +You don't need a switch unless latency is a serious issue, and it rarely is. + +Completely irrelevant. + +For old hardware of the sort you are considering, you will have VERY high power bills. But worse, since you will have many not-so-new machines, the probability of some component failing at any given time is high, so unless you seek a future in computer maintenance, this is not a great idea. A better idea is: develop your idea on your MacBook/existing cluster, then rent an AWS spot instance (or two or three) for a couple of days. Cheaper, no muss, no fuss. Everything just works.",1.2,True,1,6881 +2020-07-03 10:03:02.433,How to reformat the date text in each individual box of a column?,"I recently converted a list of roughly 1200 items (1200 rows), and a problem arose when I looked at the date of each individual item and realised that the day and month were before the year, which meant that ordering them by date would be useless.
Is there any way I can reorder over 1200 dates so that they are formatted correctly without me having to do it manually? Would I have to use Python? I am very new to that and I don't really know how to use it. +Here's an example of what I get: +September 9 2016 +And this is what I want: +2016 September 9 +I am also using Microsoft Excel, if anyone was asking.","It must be the date format. +You can split the date parts into other cells and re-merge them in your preferred format...",0.0,False,1,6882 +2020-07-03 15:06:50.723,How to convert py file to apk?,"I have created a calculator in Python using the Tkinter module. Though I converted it to an exe, I am not able to convert it to an apk. Please tell me how to do so.",I personally haven't seen anyone do that. I think it would be best to try and re-make your calculator in the Kivy framework if you want to later turn it into an APK using buildozer. Tkinter is decent for beginners but if you want nice desktop UIs use PyQt5 and if you're interested in making mobile apps use Kivy. Tkinter is just a way to dip into using GUIs in Python.,0.3869120172231254,False,1,6883 +2020-07-04 03:40:27.593,How to diagnose inconsistent S3 permission errors,"I'm running a Python script in an AWS Lambda function. It is triggered by SQS messages that tell the script which objects to load from an S3 bucket for further processing. +The permissions seem to be set up correctly, with a bucket policy that allows the Lambda's execution role to do any action on any object in the bucket. And the Lambda can access everything most of the time. The objects are being loaded via pandas and s3fs: pandas.read_csv(f's3://{s3_bucket}/{object_key}'). +However, when a new object is uploaded to the S3 bucket, the Lambda can't access it at first. The botocore SDK throws An error occurred (403) when calling the HeadObject operation: Forbidden when trying to access the object. Repeated invocations (even 50+) of the Lambda over several minutes (via SQS) give the same error. However, when invoking the Lambda with a different SQS message (that loads different objects from S3), and then re-invoking with the original message, the Lambda can suddenly access the S3 object (that previously failed every time). All subsequent attempts to access this object from the Lambda then succeed. +I'm at a loss for what could cause this. This repeatable 3-step process (1) fail on the newly-uploaded object, 2) run with other objects, 3) succeed on the original objects) can happen all on one Lambda container (they're all in one CloudWatch log stream, which seems to correlate with Lambda containers). So, it doesn't seem to be from needing a fresh Lambda container/instance. +Thoughts or ideas on how to further debug this?","Amazon S3 is an object storage system, not a filesystem. It is accessible via API calls that perform actions like GetObject, PutObject and ListBucket. +Utilities like s3fs allow an Amazon S3 bucket to be 'mounted' as a file system. However, behind the scenes s3fs makes normal API calls like any other program would. +This can sometimes (often?) lead to problems, especially where files are being quickly created, updated and deleted. It can take some time for s3fs to update S3 to match what is expected from a local filesystem. +Therefore, it is not recommended to use tools like s3fs to 'mount' S3 as a filesystem, especially for Production use.
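+For comparison, a minimal sketch of reading the same object through the plain API with boto3 (bucket and key names are placeholders):
+import io
+import boto3
+import pandas as pd
+s3 = boto3.client('s3')
+# GetObject is one explicit API call with no filesystem emulation in between.
+response = s3.get_object(Bucket='my-bucket', Key='data/input.csv')
+df = pd.read_csv(io.BytesIO(response['Body'].read()))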
It is better to call the AWS API directly.",1.2,True,1,6884 +2020-07-06 20:18:01.003,Spyder - how to execute python script in the current console?,"I've updated conda and Spyder to the latest versions. +I want to execute Python scripts (using the F5 hotkey) in the current console. +However, the new Spyder behaves unexpectedly. For example, if I enter a=5 in a console and then run a test.py script that only contains the command print(a), there is an error: NameError: name 'a' is not defined. +In the configuration options (command+F6) I've checked the ""Execute in current console"" option. +I am wondering why this is happening. +Conda 4.8.2, Spyder 4.0.1","In the preferences, under run settings, there is a ""General settings"" section, in which you can (hopefully still) deactivate ""Remove all variables before execution"". +I even seem to remember that this is new, so it makes sense.",0.0,False,2,6885 +2020-07-06 20:18:01.003,Spyder - how to execute python script in the current console?,"I've updated conda and Spyder to the latest versions. +I want to execute Python scripts (using the F5 hotkey) in the current console. +However, the new Spyder behaves unexpectedly. For example, if I enter a=5 in a console and then run a test.py script that only contains the command print(a), there is an error: NameError: name 'a' is not defined. +In the configuration options (command+F6) I've checked the ""Execute in current console"" option. +I am wondering why this is happening. +Conda 4.8.2, Spyder 4.0.1","I figured out the answer: +In the run configuration (command+F6) there is another option that needs to be checked: ""Run in console's namespace instead of empty one""",1.2,True,2,6885 +2020-07-06 20:45:20.950,Resampling data from 1280 Hz to 240 Hz in python,"I have a Python list of force data that was sampled at 1280 Hz, and I have to get it to exactly 240 Hz in order to match it exactly with a video that was filmed at 240 Hz. I was thinking about downsampling to 160 Hz and then upsampling through interpolation to 240 Hz. Does anyone have any ideas on how to go about doing this? Exact answers not needed, just an idea of where to look to find out how.","Don't downsample and then upsample again; that would lead to unnecessary information loss. +Use np.fft.rfft for a discrete Fourier transform; zero-pad in the frequency domain so that you oversample 3x to a sampling frequency of 3840 Hz. (Keep in mind that rfft will return an odd number of frequencies for an even number of input samples.) You can apply a low-pass filter in the frequency domain, making sure you block everything at or above 120 Hz (the Nyquist frequency for a 240 Hz sampling rate). Now use np.fft.irfft to transform back to a time-domain signal at a 3840 Hz sampling rate. Because 240 Hz is exactly 16x lower than 3840 Hz and because the low-pass filter guarantees that there is no content above the Nyquist frequency, you can safely take every 16th sample.",1.2,True,1,6886 +2020-07-07 09:52:29.370,how does one normalize a TensorFlow `Dataset` pipeline?,"I have my dataset in a TensorFlow Dataset pipeline and I am wondering how I can normalize it. The problem is that in order to normalize you need to load your entire dataset, which is the exact opposite of what the TensorFlow Dataset is for. +So how exactly does one normalize a TensorFlow Dataset pipeline? And how do I apply it to new data? (i.e. data used to make a new prediction)","You do not need to normalise the entire dataset at once.
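+As a minimal sketch (image-style inputs and the 255.0 scale are assumptions for illustration):
+import tensorflow as tf
+# Toy stand-in for a real pipeline: 8 random 'images' with zero labels.
+images = tf.random.uniform((8, 28, 28), maxval=255)
+labels = tf.zeros((8,), dtype=tf.int32)
+dataset = tf.data.Dataset.from_tensor_slices((images, labels))
+def normalize(image, label):
+    # Each element is normalised on the fly; no full pass over the data is needed.
+    return tf.cast(image, tf.float32) / 255.0, label
+dataset = dataset.map(normalize).batch(4)
+# Apply the very same normalize() to new data before calling predict().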
+Depending on the type of data you work with, you can use a .map() function whose sole purpose is to normalise the specific batch of data you are working with (for instance, divide each pixel within an image by 255.0). +You can use, for instance, map(preprocess_function_1).map(preprocess_function_2).batch(batch_size), where preprocess_function_1 and preprocess_function_2 are two different functions that preprocess a Tensor. If you use .batch(batch_size), then the preprocessing functions are applied sequentially on batch_size elements; you do not need to alter the entire dataset prior to using tf.data.Dataset()",0.2012947653214861,False,1,6887 +2020-07-07 11:19:47.523,Python Selenium bot to view Instagram stories | How can I click the profiles of people that have active stories?,"I have this Instagram bot that is made using Python and Selenium. It logs into Instagram, goes to a profile, selects the last post and selects the ""other x people liked this photo"" link to show the complete list of the people that liked the post (it can be done with the followers of the page too). +Now I am stuck because I don't know how I can make the bot click only the profiles that have active stories, and how to make it scroll down (the problem is that the way I found to click on the profiles works just with the first profile, because when I click on the profile it opens the stories and closes the post, so when I reopen the post and the list of likes on this post it will re-click on the same profile whose stories I have already seen). +Does someone know how to do that or a similar thing, or maybe something even better that I didn't think of? +I don't think code is needed, but if you need it I will post it, just let me know.","Have you tried using the ""back"" button on your browser window? Or open the page in a new tab, so you still have the old one to go back to.",0.3869120172231254,False,1,6888 +2020-07-08 04:22:54.717,How do we get the output when 1 filter convolves over 3 images?,"Imagine that I have a 28 x 28 grayscale image. Now if I apply a Keras convolutional layer with 3 filters of 3X3 size and 1X1 stride, I will get 3 images as output. Now if I again apply a Keras convolutional layer with only 1 filter of 3X3 size and 1X1 stride, how will this one 3X3 filter convolve over these 3 images, and how will we get one image? +What I think is that the one filter will convolve over each of the 3 images, resulting in 3 images; then it adds all of the three images to get the one output image. +I am using the TensorFlow backend of Keras. Please excuse my grammar, and please help me.","Answering my own question: +I figured out that the one filter convolves over the 3 images, resulting in 3 images, but then these images' pixel values are added together to get one resultant image. +You can indeed check this by outputting 3 images for 3 filters on 1 image. When you add these 3 images yourself (matrix addition) and plot the result, the resultant image makes a lot of sense.",1.2,True,1,6889 +2020-07-08 09:52:48.397,How to rank images based on pairs of comparisons with SVM?,"I'm working on a neural network to predict scores on how ""good"" the images are. The images are the inputs to another machine learning algorithm, and the app needs to tell the user how good the image they are taking is for that algorithm. +I have a training dataset, and I need to rank these images so I can have a score for each one for the regression neural network to train.
+I created a program that gives me 2 images from the training set at a time, and I decide which one wins (or ties). I heard that the full ranking can be obtained from these comparisons using SVM ranking. However, I haven't really worked with SVMs before; I only know the very basics of them. I read a few articles on SVM ranking and it seems like the algorithm turns the ranking problem into a classification problem, but the maths really confuses me. +Can anyone explain how it works in simple terms and how to implement it in Python?","I did some more poking around on the internet and found the solution. +The problem was how to transform this ranking problem into a classification problem. This is actually very simple. +If you have images (they don't have to be images, they can be anything) A and B, and A is better than B, then we can have (A, B, 1). If B is better, then we have (A, B, -1). +And we just need a normal SVM to take the names of the 2 images in and classify 1 or -1. That's it. +After we train this model, we can give it all the possible pairs of images from the dataset, and generating the full ranking will be simple.",1.2,True,1,6890 +2020-07-08 11:14:08.523,Efficient way to remove half of the duplicate items in a list,"If I have a list, say l = [1, 8, 8, 8, 1, 3, 3, 8], and it's guaranteed that every element occurs an even number of times, how do I make a list with all elements of l now occurring n/2 times? So since 1 occurred 2 times, it should now occur once. Since 8 occurs 4 times, it should now occur twice. Since 3 occurred twice, it should occur once. +So the new list will be something like k=[1,8,8,3]. +What is the fastest way to do this? +I did list.count() for every element but it was very slow.","I like using a trie set, as you need to detect duplicates to remove them, or a big hash set (lots of buckets). The trie does not go unbalanced and you do not need to know the size of the final set. An alternative is a very parallel sort -- brute force.",0.0340004944420038,False,2,6891 +2020-07-08 11:14:08.523,Efficient way to remove half of the duplicate items in a list,"If I have a list, say l = [1, 8, 8, 8, 1, 3, 3, 8], and it's guaranteed that every element occurs an even number of times, how do I make a list with all elements of l now occurring n/2 times? So since 1 occurred 2 times, it should now occur once. Since 8 occurs 4 times, it should now occur twice. Since 3 occurred twice, it should occur once. +So the new list will be something like k=[1,8,8,3]. +What is the fastest way to do this? +I did list.count() for every element but it was very slow.","Instead of using a counter, which keeps track of an integer for each possible element of the list, try mapping elements to booleans using a dictionary. Map to true the first time they're seen, and then every time after that flip the bit; if it's true, skip the element.",0.2336958171850616,False,2,6891 +2020-07-08 16:42:47.570,how to get the position of the thumb (in pixels) inside a vertical scale widget relative to the upper right corner?,"Is there a way to get the position of the thumb in pixels in a vertical scale widget relative to the upper right corner of the widget? I want a label with the scale value to pop up next to the thumb when the mouse pointer is hovering over it; for this I need the thumb coordinates.","The coords method returns the location along the trough corresponding to a particular value.
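+A rough sketch of using it for the hover label (widget names and offsets are illustrative assumptions):
+import tkinter as tk
+root = tk.Tk()
+scale = tk.Scale(root, from_=0, to=100, orient='vertical')
+scale.pack()
+tip = tk.Label(root, bg='lightyellow')
+def show_value(event):
+    # coords() gives the trough point for the current value, measured from the widget's top-left corner.
+    x, y = scale.coords()
+    tip.config(text=scale.get())
+    tip.place(in_=scale, x=x + 15, y=y)
+scale.bind('<Motion>', show_value)
+root.mainloop()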
+This is from the canonical documentation for the coords method: + +Returns a list whose elements are the x and y coordinates of the point along the centerline of the trough that corresponds to value. If value is omitted then the scale's current value is used. + +Note: you asked for coordinates relative to the upper-right corner. These coordinates are relative to the upper-left. You can get the width of the widget with winfo_width() and do a simple transformation.",1.2,True,1,6892 +2020-07-09 10:59:20.653,user interaction with django,"I'm working on a question and answer system with django. My problem: I want the app to get a question from an ontology and, according to the user's answer, get the next question. How can I have all the questions and the user's answers displayed? I'm new to django, and I don't know if I can use sessions with unauthenticated users or if I need to use websockets with the django channels library.","Given that you want to work with anonymous users, the simplest way to go is to add a hidden field on the page and use it to track the user's progress. The field can contain a virtual session id that will point at a model record in the backend, or the entire Q/A session (ugly but fast and easy). Using REST or sockets would require a similar approach. +I can't tell off the top of my head if you can step on top of the built-in session system. It will work for registered users, but I do believe that for anonymous users it gets reset on refresh (may be wrong here).",0.3869120172231254,False,1,6893 +2020-07-09 22:14:52.293,How do I use external applications to scrape data from a mobile app?,"I am trying to scrape data from a mobile application (Pokemon HOME). The app shows usage statistics and other useful statistics that I want to scrape. I want to scrape this on my computer using python. +I am having trouble determining how to scrape data from a mobile application. I tried using Fiddler and an Android emulator to intercept server data but I am too unfamiliar with the software to understand what exactly to do. +Any help would be very beneficial. Even just suggestions for resources where I can learn how to do this on my own. Thank you!","It's possible but it's really a hard nut to crack. There's a huge difference between a mobile app and a web app. +A web app is accessible through a WAN, viz. a wide area network, and scraping it is considerably easier. +In Python, you can use bs4 to do it. +But a mobile app, essentially and effectively, is more about the LAN: it's installed locally. +You can install an app to remote-control your device from another device (usually requires root). +However, the whole data might not be available.",0.0,False,1,6894 +2020-07-09 23:28:48.100,How does python collections accept multiple data types?,"The most popular python version is CPython, written in C. What I want to know is how is it possible to write a python collection using C when C arrays can only store one type of data at the same time?","This is not how python does it in C, but I've written a small interpreted language in Java (which also only allows arrays/lists with 1 data type) and implemented mixed-type lists. I had a Value interface and a class for each type of value, and those classes implemented the Value interface. I had a FunctionValue class, a StringValue class, a BooleanValue class, and a ListValue class, all of which implemented the Value interface. The ListValue class has a field of type List which contains the list's elements. 
All methods on the Value interface and its implementing classes which do stuff like numeric addition, string appending, list access, function calling, etc. initially take in Value objects and do different things based on which actual kind of Value it is. +You could do something similar in C, albeit at a lower level since it doesn't have interfaces and classes to help you manage your types.",0.0,False,1,6895 +2020-07-10 20:37:35.990,Python same Network Card Game,"So I'm doing this python basics course and my final project is to create a card game. At the bottom of the instructions I get this + +For extra credit, allow 2 players to play on two different computers that are on the same network. Two people should be able to start identical versions of your program, and enter the internal IP address of the user on the network who they want to play against. The two applications should communicate with each other, across the network using simple HTTP requests. Try this library to send requests: + + +http://docs.python-requests.org/en/master/ + + +http://docs.python-requests.org/en/master/user/quickstart/ + + +And try Flask to receive them: + + +http://flask.pocoo.org/ + + +The 2-player game should only start if one person has challenged the other (by entering their internal IP address), and the 2nd person has accepted the challenge. The exact flow of the challenge mechanism is up to you. + +I already investigated how flask works and kind of understand how python-requests works too. I just can't figure out how to make those two work together. If somebody could explain what I should do, or tell me what to watch or read, I would really appreciate it.","It would be nice to see how far you've come before answering (as hmm suggested in a comment), but I can tell you something theoretical about this. +What you are talking about is a client-server application, where the server needs to process the results of the clients' actions. +What I can suggest is to learn about REST APIs, which you can use to let the client and server communicate in an easy way. Your clients will send HTTP requests to the APIs the server exposes. +From what you wrote, you basically have some constraints that should be respected during client-server communication, summarised here: + +Someone searches for your IP and sends you a challenge request + +You receive a challenge that you refuse or accept; only if you accept the challenge can the game start + + +As you can see from the project specifications, the entire challenge mechanism is up to you, so you can decide what works best for you. +I would start by thinking about a possible protocol that makes use of a REST API to handle the initial communication between client and server and lets you define a basic challenge mechanism. +Enjoy programming :).",0.0,False,1,6896 +2020-07-11 14:03:16.807,Putting .exe file in windows autorun with python,"I'm writing an installer for my program with python. +When everything is extracted, how can I make my program's .exe file run at Windows startup? +I want to make it fully automatic, without any user input. +Thanks.","You don't need to use Python for this. 
You can copy your .exe file and paste it in this directory: + +C:\Users\YourUsername\AppData\Roaming\Microsoft\Windows\Start +Menu\Programs\Startup + +It will run automatically when your computer starts.",0.0,False,1,6897 +2020-07-12 16:23:15.963,"What's the difference between calling pip as a command line command, and calling it as a module of the python command?","When installing python modules, I seem to have two possible command line commands to do so. +pip install {module} +and +py -{version} -m pip install {module} +I suppose this can be helpful for selecting which version of python has installed which modules? But there's rarely a case where I wouldn't want a module installed for all possible versions. +Also the former method seems to have a pesky habit of being out-of-date no matter how many times I call: +pip install pip --upgrade +So are these separate? Does the former just call the latest version of the latter?","TLDR: Prefer ... -m pip to always install modules for a specific Python version/environment. + +The pip command executes the equivalent of ... -m pip. However, bare pip does not allow you to select which Python version/environment to install to – the first match in your executable search path is selected. This may be the most recent Python installation, a virtual environment, or any other Python installation. +Use the ... -m pip variant in order to select the Python version/environment for which to install a module.",0.5457054096481145,False,2,6898 +2020-07-12 16:23:15.963,"What's the difference between calling pip as a command line command, and calling it as a module of the python command?","When installing python modules, I seem to have two possible command line commands to do so. +pip install {module} +and +py -{version} -m pip install {module} +I suppose this can be helpful for selecting which version of python has installed which modules? But there's rarely a case where I wouldn't want a module installed for all possible versions. +Also the former method seems to have a pesky habit of being out-of-date no matter how many times I call: +pip install pip --upgrade +So are these separate? Does the former just call the latest version of the latter?","So pip install {module} is callable if you have already installed pip. pip install pip --upgrade upgrades pip, and if you replace pip with a module name it will upgrade that module to the most recent one. py -{version} -m pip install {module} is callable if you have installed many versions of python - for example, most Linux servers come with python 2 installed, so when you install Python 3 and you want to install a module for version 3, you will have to call that command.",0.0,False,2,6898 +2020-07-13 03:40:35.067,how to get names of all detected models from existing tensorflow lite instance?,"I'm looking to build a system that alerts me when there's a package at my front door. I already have a solution for detecting when there's a package (tflite), but I don't know how to get the array of detected objects from the existing tflite process and then pull out an object's title through the array. Is this even possible, or am I doing this wrong? +Also, the tflite model google gives does not know how to detect packages, but I'll train my own for that",I've figured out a solution. I can just use the same array that the function that draws labels uses (labels[int(classes[i])]) to get the name of the object at index i of the array (dunno if I'm using the correct terminology but whatever). 
hopefully this will help someone,0.0,False,1,6899 +2020-07-13 04:19:48.737,Upgrading pycharm venv python version,"I have python 3.6 in my venv on PyCharm. However, I want to change that to Python 3.8. I have already installed 3.8, so how do I change my venv python version? +I am on windows 10. +Changing the version in the project interpreter settings seems to run using the new venv, not my existing venv with all the packages I have installed. Attempting to add a new interpreter also results in the ""OK"" button being greyed out, possibly due to the current venv being not empty.","In PyCharm you can do the following steps: + +Go to File --> Settings --> Python Interpreter +Select a different python environment from the drop-down if one is already available; if not, click on ""Add"". +Select the New Environment option, then as Base interpreter you can select the 3.8 version",0.2012947653214861,False,1,6900 +2020-07-13 11:13:34.620,How to embed my python chatbot to a website,"I am very new to python, and I am trying to create a chatbot with python for a school project. +I am almost done with creating my chatbot, but I don't know how to create a website to display it. I know how to create a website with Flask, but how can I embed the chatbot code into the website?","In your flask code you can embed the chatbot predict-functions into specific routes of your flask app. This would require the following steps: +Just before you start the flask server, you train the chatbot to ensure its predict function works properly. +After that you can specify some more route-functions in your flask app. +In those functions you grab input from the user (for example from route parameters), send it through the chatbot's predict function and then send the response (probably with postprocessing if you wish) back to the requester. +Sending to the requester can be done in many different ways. +Two examples off the top of my head would be rendering it (render_template) to the webpage (if the request came in over a GET request via a usual browser site-opening request) or sending a request to the user's IP itself. +As first-hand experience, I coupled the latter mechanism to a telegram bot on my home automation via a post request, which itself then sends the response to me via telegram.",0.0,False,1,6901 +2020-07-13 12:20:28.610,two versions of python installed at two places,"I had uninstalled python 3.8 from my system and installed 3.7.x. +But after running the commands where python and where python3 in the cmd I get two different locations. +I was facing issues regarding having two versions of python. So I would like to know how I can completely remove the python3 located files.","To delete a specific python version, you can use which python and remove the python folder using sudo rm -rf . You might also have to modify the PATH env variable to the location which contains the python executables of the version you want. +Or you can install Anaconda [https://www.anaconda.com/products/individual] which helps to manage multiple versions of python for you.",0.0,False,1,6902 +2020-07-14 20:41:18.337,How to encrypt data using the password from User,"I have a flask site. It's specifically a note app. At the moment I am storing the user notes as plaintext. That means that anyone with access to the server (which is me) has access to the notes. 
I want to encrypt the data with the user password, so that only the user can access it using their password, but that would require the user to input his/her password each time they save their notes, retrive the notes or even updates them. I am hashing the password obviously. +Anyone has any idea how this could be done?","Use session to store user information, the Flask-Login extension would be a good choice for you.",-0.2012947653214861,False,1,6903 +2020-07-15 03:10:47.947,I have a visual studio code terminal problem how do i fix it so that i have the integrated one instead of external?,"I'm using VS Code on Windows 10. I had no problems until a few hours ago (at the time of post), whenever I want to run a python program, it opens terminals outside of VS Code like Win32 and Git Bash. How do I change it back to the integrated terminal I usually had?","With your Python file open in VS Code: + +Go to Run > Open Configurations, if you get prompted select ""Python File"" +In the launch.json file, change the value of ""console"" to ""integratedTerminal""",0.3869120172231254,False,1,6904 +2020-07-15 12:26:42.943,How can I remove/delete a virtual python environment created with virtualenv in Windows 10?,"I want to learn how to remove a virtual environment using the windows command prompt, I know that I can easily remove the folder of the environment. But I want to know if there is a more professional way to do it.","There is no command to remove virtualenv, you can deactivate it or remove the folder but unfortunately virtualenv library doesn't contain any kind of removal functionality.",1.2,True,1,6905 +2020-07-16 07:00:18.590,"In NumPy, how to use a float that is larger than float64's max value?","I have a calculation that may result in very, very large numbers, that won fit into a float64. I thought about using np.longdouble but that may not be large enough either. +I'm not so interested in precision (just 8 digits would do for me). It's the decimal part that won't fit. And I need to have an array of those. +Is there a way to represent / hold an unlimited size number, say, only limited by the available memory? Or if not, what is the absolute max value I can place in an numpy array?","Can you rework the calculation so it works with the logarithms of the numbers instead? +That's pretty much how the built-in floats work in any case... +You would only convert the number back to linear for display, at which point you'd separate the integer and fractional parts; the fractional part gets exponentiated as normal to give the 8 digits of precision, and the integer part goes into the ""×10ⁿ"" or ""×eⁿ"" or ""×2ⁿ"" part of the output (depending on what base logarithm you use).",1.2,True,1,6906 +2020-07-16 15:46:39.480,Why does the dimensions of Kivy app changes after deployment?,"As mentioned in the question, I build a kivy app and deploy it to my android phone. The app works perfectly on my laptop but after deploying it the font size changes all of a sudden and become very small. +I can't debug this since everything works fine. The only problem is this design or rather the UI. +Does anyone had this issue before? Do you have a suggestion how to deal with it? +PS: I can't provide a reproducible code here since everything works fine. I assume it is a limitation of the framework but I'm not sure.","It sounds like you coded everything in terms of pixel sizes (the default units for most things). The difference on the phone is probably just that the pixels are smaller. 
+Use the kivy.metrics.dp helper function to apply a rough scaling according to pixel density. You'll probably find that if you currently have e.g. width: 50, on the desktop then width: dp(50) will look the same while on the phone it will be twice as big as before. + +PS: I can't provide a reproducible code here since everything works fine. + +Providing a minimal runnable example would, in fact, have let the reader verify whether you were attempting to compensate for pixel density.",1.2,True,1,6907 +2020-07-16 16:58:29.950,Adding files to gitignore in Visual Studio Code,"In Visual Studio Code, with git extensions installed, how do you add files or complete folders to the .gitignore file so the files do not show up in untracked changes. Specifically, using Python projects, how do you add the pycache folder and its contents to the .gitignore. I have tried right-clicking in the folder in explorer panel but the pop-menu has no git ignore menu option. Thanks in advance. +Edit: I know how to do it from the command line. Yes, just edit the .gitignore file. I was just asking how it can be done from within VS Code IDE using the git extension for VS Code.","So after further investigation, it is possible to add files from the pycache folder to the .gitignore file from within VS Code by using the list of untracked changed files in the 'source control' panel. You right-click a file and select add to .gitignore from the pop-up menu. You can't add folders but just the individual files.",1.2,True,1,6908 +2020-07-17 06:35:43.907,how to get proper formatted string?,"if I print the string in command prompt I I'm getting it i proper structure +""connectionstring""."""".""OT"".""ORDERS"".""SALESMAN_ID"" +but when I write it to json, I'm getting it in below format +\""connectionstring\"".\""\"".\""OT\"".\""ORDERS\"".\""SALESMAN_ID\"" +how to remove those escape characters? +when It's happening?","What is happening? +Json serialization and de-serialization is happening. +From wikipedia: +In the context of data storage, serialization (or serialisation) is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later. [...] +The opposite operation, extracting a data structure from a series of bytes, is deserialization. +In console, you de-serialize the json but when storing in file, you serialize the json.",0.0,False,1,6909 +2020-07-17 11:57:33.973,how do we check similarity between hash values of two audio files in python?,"About the data : +we have 2 video files which are same and audio of these files is also same but they differ in quality. +that is one is in 128kbps and 320kbps respectively. +we have used ffmpeg to extract the audio from video, and generated the hash values for both the audio file using the code : ffmpeg -loglevel error -i 320kbps.wav -map 0 -f hash - +the output was : SHA256=4c77a4a73f9fa99ee219f0019e99a367c4ab72242623f10d1dc35d12f3be726c +similarly we did it for another audio file to which we have to compare , +C:\FFMPEG>ffmpeg -loglevel error -i 128kbps.wav -map 0 -f hash - +SHA256=f8ca7622da40473d375765e1d4337bdf035441bbd01187b69e4d059514b2d69a +Now we know that these audio files and hash values are different but we want to know how much different/similar they are actually , for eg: like some distance in a-b is say 3 +can someone help with this?","You cannot use a SHA256 hash for this. This is intentional. 
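(A quick demonstration of why -- changing a single byte of input yields a completely unrelated digest:)

import hashlib

a = hashlib.sha256(b'audio bytes...').hexdigest()
b = hashlib.sha256(b'audio bytes,,').hexdigest()
print(a)
print(b)
# the two digests share no usable structure, so no meaningful distance can be defined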
It would weaken the security of the hash if you could. What you suggest is akin to differential cryptanalysis. SHA256 is a modern cryptographic hash, and designed to be safe against such attacks.",0.2012947653214861,False,1,6910 +2020-07-17 19:42:44.647,Add Kivy Widgets Gradually,"I would like to ask how I could dynamically add some widgets to my application one by one, and not all at once. Those widgets are added in a for loop which contains the add_widget() command, and is triggered by a button. +So I would like to know if there is a way for the output to be shown gradually, and not all at once at the end of the execution. Initially I tried to add a delay inside the for loop, but I'm afraid it has to do with the way the output is built each time. +EDIT: Well, it seems that I hadn't understood well the use of Clock.schedule_interval and Clock.schedule_once, so what I had tried with them (or with time.sleep) didn't succeed at all. But obviously, this was the solution to my problem.","Use Clock.schedule_interval or Clock.schedule_once to schedule each iteration of the loop at your desired time spacing.",1.2,True,1,6911 +2020-07-18 01:31:21.407,Why isn't lst.sort().reverse() valid?,"Per title. I do not understand why it is not valid. I understand that they mutate the object, but if you call the sort method, after it's done then you'd call the reverse method, so it should be fine. Why is it then that I need to type lst.sort() and then, on the line below, lst.reverse()? +Edit: Well, when it's pointed out like that, it's a bit embarrassing how I didn't get it before. I literally recognize that it mutated the object and thus returns None, but I suppose it didn't register that this also meant that you can't reverse a None-type object.","When you call lst.sort(), it does not return anything; it changes the list itself. +So the result of lst.sort() is None, thus you try to reverse None, which is impossible.",1.2,True,1,6912 +2020-07-18 05:52:32.897,Converting numpy boolean array to binary array,"I have a boolean numpy array which I need to convert to binary, so that where there is True it should be 255 and where it is False it should be 0. +Can someone point out how to write the code?","Let x be your data in numpy array Boolean format. +Try +np.where(x,255,0)",0.0,False,1,6913 +2020-07-18 16:00:43.153,"df['column_name'] vs df.loc[:, 'column_name']","I would like more info. on the answer to the following question: + +df['Name'] and 2. df.loc[:, 'Name'], where: + +df = pd.DataFrame(['aa', 'bb', 'xx', 'uu'], [21, 16, 50, 33], columns = ['Name', 'Age']) +Choose the correct option: + +1 is the view of original dataframe and 2 is a copy of original +dataframe +2 is the view of original dataframe and 1 is a copy of +original dataframe +Both are copies of original dataframe +Both are views of original dataframe + +I found more than one answer online but am not sure. I think the answer is number 2, but when I tried x = df['name'] then x[0] = 'cc' then print(df), I saw that the change appeared in the original dataframe. So how did the change appear in the original dataframe, although I also got this warning: +A value is trying to be set on a copy of a slice from a DataFrame +I just want to know more about the difference between the two and whether one is really a copy of the original dataframe or not. 
Thank you.","Both are views of the original dataframe. +One can be used to add more columns to the dataframe, and one is used specifically for getting a view of a cell, row or column in the dataframe.",0.0,False,1,6914 +2020-07-19 11:57:34.290,In-memory database and programming language memory management / garbage collection,"I've been reading about in-memory databases and how they use RAM instead of disk-storage. +I'm trying to understand the pros and cons of building an in-memory database with different programming languages, particularly Java and Python. What would each implementation offer in terms of speed, efficiency, memory management and garbage collection? +I think I could write a program in Python faster, but I'm not sure what additional benefits it would generate. +I would imagine the language with a faster or more efficient memory management / garbage collection algorithm would be a better system to use because that would free up resources for my in-memory database. From my basic understanding I think Java's algorithm might be more efficient than Python's at freeing up memory. Would this be a correct assumption? +Cheers","You choose an in-memory database for performance, right? An in-memory database written in C/C++ that provides an API for Java and/or Python won't have GC issues. Many (most?) financial systems are sensitive to latency and 'jitter'. GC exacerbates jitter.",0.0,False,1,6915 +2020-07-20 08:27:36.160,How to know the response data type of API using requests,"I have one simple question: is there an easy way to know the type of an API's response? +For example: +Using the requests post method to send API requests, some APIs will return data in .xml format and some in .json format; +how can I know the response type, so I can choose not to convert with json() when the response type is .xml?",Use r.headers.get('content-type') to get the response type,1.2,True,1,6916 +2020-07-20 14:58:08.290,Calculating how much area of an ellipse is covered by a certain pixel in Python,"I am working with Python and currently trying to figure out the following: If I place an ellipse of which the semi-axes, the centre's location and the orientation are known, on a pixel map, and the ellipse is large enough to cover multiple pixels, how do I figure out which pixel covers which percentage of the total area of the ellipse? As an example, let's take a map of 10*10 pixels (i.e. interval of [0,9]) and an ellipse with the centre at (6.5, 6.5), semi-axes of (0.5, 1.5) and an orientation angle of 30° between the horizontal and the semi-major axis. I have honestly no idea, so any help is appreciated. +edit: To clarify, the pixels (or cells) have an area. I know the area of the ellipse, its position and its orientation, and I want to find out how much of its area is located within pixel 1, how much is within pixel 2, etc.","This is a math problem. Try Math StackExchange rather than Stack Overflow. +I suggest you transform the plane: a translation to get the center in the middle, a rotation to get the ellipse's axes onto the x-y ones, and a dilation on x to get a circle. Then work with a circle on rhombus tiles. +Your problem won't be less or more tractable in the new formulation, but the math and code you have to work on will be slightly lighter.",0.0,False,1,6917 +2020-07-20 17:32:32.860,How to dynamically inject HTML code in Django,"In a project of mine I need to create an online encyclopedia. 
+In order to do so, I need to create a page for each entry file; these are all written in Markdown, so I have to convert them to HTML before sending them to the website. I didn't want to use external libraries for this, so I wrote my own python code that receives a Markdown file and returns a list with all the lines already formatted in HTML. The problem now is that I don't know how to inject this code into the template I have in Django; when I pass the list to it, the lines are just printed like normal text. I know I could make my function write to an .html file, but I don't think it's a great solution thinking about scalability. +Is there a way to dynamically inject HTML in Django? Is there a ""better"" approach to my problem?","You could use the safe filter in your template! So it would look like this. +Assuming you have your html in a string variable called my_html, then in your template just write +{{ my_html | safe }} +And don't forget to import it!",1.2,True,1,6918 +2020-07-21 09:12:16.213,EnvironmentNotWritableError on Windows 10,"I am trying to get the python-utils package and utils module to work in my anaconda3. However, whenever I open my Anaconda Powershell and try to install the package it fails with the message + +EnvironmentNotWritableError: The current user does not have write permissions to the target environment. +environment location: C:\ProgramData\Anaconda3 + +I searched for solutions and was advised that I update conda. +However, when I ran the command below + +conda update -n base -c defaults conda + +it also failed with EnvironmentNotWritableError showing. +Then I found a comment that says maybe my conda isn't installed in some places, so I tried + +conda install conda + +which got the same error. +Then I tried + +conda install -c conda-forge python-utils + +which also failed with the same error. +Maybe it's a problem with setting paths? But I don't know how to set them. All I know about paths is that I can type + +sys.path + +and get where Anaconda3 is running.","I got the same non-writable error in the Anaconda prompt when downloading pandas, then sorted the error by running the Anaconda prompt as administrator. It worked for me since I already had that path variable in the environment path",0.3869120172231254,False,2,6919 +2020-07-21 09:12:16.213,EnvironmentNotWritableError on Windows 10,"I am trying to get the python-utils package and utils module to work in my anaconda3. However, whenever I open my Anaconda Powershell and try to install the package it fails with the message + +EnvironmentNotWritableError: The current user does not have write permissions to the target environment. +environment location: C:\ProgramData\Anaconda3 + +I searched for solutions and was advised that I update conda. +However, when I ran the command below + +conda update -n base -c defaults conda + +it also failed with EnvironmentNotWritableError showing. +Then I found a comment that says maybe my conda isn't installed in some places, so I tried + +conda install conda + +which got the same error. +Then I tried + +conda install -c conda-forge python-utils + +which also failed with the same error. +Maybe it's a problem with setting paths? But I don't know how to set them. All I know about paths is that I can type + +sys.path + +and get where Anaconda3 is running.","Run the PowerShell as Administrator. Right Click on the PowerShell -> Choose to Run as Administrator. 
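For example, re-running the failing command from the question inside that elevated prompt should now succeed: conda update -n base -c defaults conda.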
Then you'll be able to install the required packages.",1.2,True,2,6919 +2020-07-21 19:42:40.367,"Selenium(Python): After clicking button, wait until all the new elements (which can have different attributes) are loaded","How do I wait for all the new elements that appear on the screen to load after clicking a specific button? I know that I can use the presence_of_elements_located function to wait for specific elements, but how do I wait until all the new elements have loaded on the page? Note that these elements might not necessarily have one attribute value like class name or id.","Well in reality you can't, but you can run a script to check for that. +However be wary that this will not work on javascript/AJAX elements. +self.driver.execute_script(""return document.readyState"") == ""complete""",1.2,True,1,6920 +2020-07-22 10:14:37.227,Scipy Differential Evolution initial solution(s) input,"Does anyone know how to feed an initial solution or a matrix of initial solutions into the differential evolution function from the Scipy library? +The documentation doesn't explain if it's possible, but I know that initial solution implementation is not unusual. Scipy is so widely used I would expect it to have that type of functionality.","Ok, after review and testing I believe I now understand it. +There is a set of parameters that the scipy.optimize.differential_evolution(...) function can accept; one is the init parameter, which allows you to upload an array of solutions. Personally I was looking at a set of coordinates, so I enumerated them into an array, created 99 other variations of it (100 different solutions) and fed this matrix into the init parameter. I believe it needs to have more than 4 solutions or you are going to get a tuple error. +I probably didn't need to ask/answer the question, though it may help others that got equally confused.",1.2,True,1,6921 +2020-07-22 18:39:12.457,How do I check if it should be 'an' or 'a' in python?,"So I'm making a generator (doesn't really matter what it is) and I'm trying to make the a/an appear before nouns correctly. +For example: +""an apple plays rock paper scissors with a banana"" +and not: +""a apple plays rock paper scissors with an banana"" +The default placeholder for the not-yet determined a/an is """" +so I need to replace the """" with either a or an depending on whether the word after it starts with a vowel or not. +How would I do this?","Pseudo code + +first find the placeholder 'a' or 'an' in the string and keep track of it +then find the first word after it +if the word starts with a vowel: make it 'an' +Else: make it 'a' +this rule breaks with words like 'hour' or 'university', so also make an exception rule (find a list of such words if you can)",0.0,False,1,6922 +2020-07-23 02:51:14.593,Schoology API understanding,I can get to the user information using the API but I cannot access course information. Can someone explain what I need to do to make the correct call for course information?,The easiest way to answer these questions is to try it in Postman. Highly recommended.,0.0,False,1,6923 +2020-07-23 08:31:12.210,Is an abstract class without any implementation and variables effectively interface?,"I'm reviewing the concepts of OOP, reading . Here the book defines interface as + +The set of all signatures defined by an object's operations is called the interface to the object. (p.39) + +And the abstract class as + +An abstract class is one whose main purpose is to define a common interface for its subclasses. 
An abstract class will defer some or all of its implementation to operations defined in subclasses; hence an abstract class cannot be instantiated. The operations that an abstract class declares but doesn’t implement are called abstract operations. Classes that aren’t abstract are called concrete classes. (p.43) + +And I wonder, if I define an abstract class without any internal data (variables) and concrete operations, just some abstract operations, isn't it effectively just a set of signatures? Isn't it then just an interface? +So this is my first question: + +Can I say an abstract class with only abstract functions is ""effectively (or theoretically)"" an interface? + +Then I thought, the book also says something about types and classes. + +An object’s class defines how the object is implemented. The class defines the object’s internal state and the implementation of its operations. In contrast, an object’s type only refers to its interface—the set of requests to which it can respond. An object can have many types, and objects of different classes can have the same type. (p.44) + +Then I remembered that some languages, like Java, does not allow multiple inheritance while it allows multiple implementation. So I guess for some languages (like Java), abstract class with only abstract operations != interfaces. +So this is my second question: + +Can I say an abstract class with only abstract functions is ""generally equivalent to"" an interface in languages that support multiple inheritance? + +My first question was like checking definitions, and the second one is about how other languages work. I mainly use Java and Kotlin so I'm not so sure about other languages that support multiple inheritance. I do not expect a general, comprehensive review on current OOP languages, but just a little hint on single language (maybe python?) will be very helpful.","No. + +In Java, every class is a subclass of Object, so you can't make an abstract class with only abstract methods. It will always have the method implementations inherited from Object: hashCode(), equals(), toString(), etc. + +Yes, pretty much. + +In C++, for example, there is no specific interface keyword, and an interface is just a class with no implementations. There is no universal base class in C++, so you can really make a class with no implementations. +Multiple inheritance is not really the deciding feature. Java has multiple inheritance of a sort, with special classes called ""interfaces"" that can even have default methods. +It's really the universal base class Object that makes the difference. interface is the way you make a class that doesn't inherit from Object.",1.2,True,1,6924 +2020-07-23 11:53:33.000,How to control Django with Javascript?,"I am building a web application with Django and I show the graphs in the website. The graphs are obtained from real time websites and is updated daily. I want to know how can I send graphs using matplotlib to template and add refresh option with javascript which will perform the web scraping script which I have written. The main question is which framework should I use? AJAX, Django REST, or what?","You're better off using a frontend framework and calling the backend for the data via JS. separating the front and backend is a more contemporary approach and has some advantages over doing it all in the backend. +From personal experience, it gets really messy mixing Python and JS in the same system. 
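For the data side, a minimal sketch of the idea (the view and field names are invented for illustration):

# views.py -- a plain JSON endpoint the JS frontend can call or poll
from django.http import JsonResponse

def chart_data(request):
    # in practice this would return the freshly scraped values
    return JsonResponse({'labels': ['day 1', 'day 2'], 'values': [3, 7]})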
+Use Django as a Rest-ful backend, and try not to use AJAX in the frontend, then pick a frontend of your choice to deliver the web app.",0.3869120172231254,False,1,6925 +2020-07-23 15:56:17.107,How can I deploy a streamlit application in repl.it?,"I installed/imported streamlit, numpy, and pandas but I do not know how I can see the charts I have made. How do I deploy it on repl.it?","You can not deploy streamlit application within repl.it because + +In order to protect against CSRF attacks, we send a cookie with each request. +To do so, we must specify allowable origins, which places a restriction on +cross-origin resource sharing. + +One solution is push your code from repl.it to GitHub. Then deploy from GitHub on share.streamlit.io.",0.2012947653214861,False,1,6926 +2020-07-23 17:07:46.247,How to get jupyter notebook theme in vscode,I am a data scientist use jupyter notebook a lot and also have started to do lot of development work and use Vscode for development. so how can I get Jupyter notebook theme in vscode as well? I know how to open a Jupyter notebook in vscode by installing an extension but I wanted to know how to get Jupyter notebook theme for vs code. so it gets easier to switch between both ide without training eyes,"You can edit your VScode's settings by: +1- Go to your Jupyter extension => Extension settings => and check ""Ignore Vscode Theme"". +2- Click on File => preference=> color Theme +3- Select the theme you need. +You can download the theme extension from VSCode's extension store, for example: Markdown Theme Kit; Material Theme Kit. +Note: +You need to restart or reload VSCode to see the changes.",0.296905446847765,False,1,6927 +2020-07-24 18:18:58.150,KivyMD MDFlatButton not clickable & Kivy ScreenManager not working,"So I'm making this game with Kivy and it's a game where there's a start screen with an MDToolbar, an MDNavigationDrawer, two Images, three MDLabels and a OneLineIconListItem that says 'Start Game' and when you click on it the game is supposed to start. +The game screen contains: + +Viruses +Masked man +Soap which you use to hit the viruses +Current score in an MDLabel +A button to go back to the start screen + +Issues: + +The background music for the game starts playing before the game screen is shown (When the start screen is shown) - ScreenManager issue +When I click the button to go back to the start screen, the button doesn't get clicked - MDFlatButton issue + +I used on_touch_down, on_touch_move, and on_touch_up for this game and I know that's what's causing the MDFlatButton issue. So does anyone know how I'm supposed to have the on_touch_* methods defined AND have clickable buttons? +And I don't know how to fix the ScreenManager issue either. +I know I haven't provided any code here, but that's because this post is getting too long. I already got a post deleted because people thought the post was too long and I was providing too much code and too less details. And I don't want that to happen again. If anyone needs to view the code of my project, I will leave a Google Docs link to it. +Thanks in advance!","I fixed my app. +Just in case anyone had the same question, I'm gonna post the answer here. + +To get a clickable button, you have to create a new Screen or Widget and add the actual screen as a widget to the new class. Then, you can add buttons to the new class. This works because the button is on top of the actual screen. 
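In outline, a sketch of that overlay idea (GameScreen here is a stand-in for your actual game screen; all names are invented, not from your project):

from kivy.uix.floatlayout import FloatLayout
from kivy.uix.widget import Widget
from kivymd.uix.button import MDFlatButton

class GameScreen(Widget):  # stand-in for the screen with the on_touch_* methods
    pass

class GameWrapper(FloatLayout):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.add_widget(GameScreen())  # the original screen, added first
        # the button is added last, so it draws (and receives touches) on top
        self.add_widget(MDFlatButton(text='Back', pos_hint={'x': 0, 'top': 1}))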
So when you click anywhere in the button's area, the button gets clicked and the on_touch_* methods of the actual screen don't get called. + + +And to fix the ScreenManager issue, you just have to experiment.",1.2,True,1,6928 +2020-07-25 22:12:31.897,Tkinter pickle save and load,Help me please: how can I use pickle to save when I have a lot of entries and I want to save them all in one file and load them from the file for each entry separately?,"You can't pickle tkinter widgets. You will have to extract the data and save just the data. Then, on restart, you will have to unpickle the data and insert it back into the widgets.",0.0,False,1,6929 +2020-07-26 07:50:11.350,Windows desktop application read session data from browser,"I'm writing a desktop and web app. I just need to know how I can authorize this desktop application from the same open web app browser after it is installed?","If you mean to authorize your desktop app via the user's login from any web browser, you can use a TCP/UDP socket or also, for example, call an API every 2 seconds to check whether the user is logged in or not. In the web browser, if the user has logged in, you can set the login state with their IP or other data in the database to authorize the user from the desktop app.",0.0,False,1,6930 +2020-07-26 13:19:22.760,How to add a python matplotlib interactive figure to vue.js web app?,"I have a plot made using Python matplotlib that updates every time new sensor data is acquired. I also have a web GUI using vue. I'd like to incorporate the matplotlib figure into the web GUI and have it update as it does when running it independently. This therefore means not just saving the plot and loading it as an image. +Can anyone advise how to achieve this?","In my opinion it's not a reasonable way. There are very good visualizing tools powered by javascript, for example chart.js. +You can do your computation with python in the back-end, pass the data to the front-end by API and plot every interactive diagram you want using javascript.",1.2,True,1,6931 +2020-07-27 06:36:07.150,How to install python packages for Spyder,"I am using the IDE called Spyder for learning Python. +I would like to know how to go about installing Python packages for Spyder? +Thank you","Spyder is a package too; you can install packages using pip or conda, and spyder will access them using the python path in your environment. +Spyder is not a package manager like conda, but an IDE like jupyter notebook and VS Code.",0.1618299653758019,False,2,6932 +2020-07-27 06:36:07.150,How to install python packages for Spyder,"I am using the IDE called Spyder for learning Python. +I would like to know how to go about installing Python packages for Spyder? +Thank you","I have not checked if the ways described by people here before me work or not. +I am running Spyder 5.0.5, and for me the steps below worked: + +Step 1: Open the anaconda prompt (I had Spyder open in parallel) +Step 2: write - ""pip install package-name"" + +Note: I got my Spyder 5.0.5 up and running after installing the whole Anaconda Navigator 2.0.3.",0.0,False,2,6932 +2020-07-28 16:08:13.623,What is the difference between sys.stdin.read() and sys.stdin.readline(),"Specifically, I would like to know how to give input in the case of read(). I tried everywhere but couldn't find the differences anywhere.","read() recognizes each character and prints it. 
+But readline() recognizes the object line by line and prints it out.",0.2012947653214861,False,2,6933 +2020-07-28 16:08:13.623,What is the difference between sys.stdin.read() and sys.stdin.readline(),"Specifically, I would like to know how to give input in the case of read(). I tried everywhere but couldn't find the differences anywhere.",">>> help(sys.stdin.read) +Help on built-in function read: + +read(size=-1, /) method of _io.TextIOWrapper instance + Read at most n characters from stream. + + Read from underlying buffer until we have n characters or we hit EOF. + If n is negative or omitted, read until EOF. +(END) + +So you need to send EOF when you are done (*nix: Ctrl-D, Windows: Ctrl-Z+Return): + +>>> sys.stdin.read() +asd +123 +'asd\n123\n' + +The readline is obvious. It will read until newline or EOF. So you can just press Enter when you are done.",0.3869120172231254,False,2,6933 +2020-07-28 17:13:22.017,"Is there any simple way to pass arguments based on their position, rather than kwargs. Like a positional version of kwargs?","Is there a generic python way to pass arguments to arbitrary functions based on specified positions? While it would be straightforward to make a wrapper that allows positional argument passing, it would be incredibly tedious for me considering how frequently I find myself needing to pass arguments based on their position. +Some examples when such would be useful: + +when using functools.partial, to partially set specific positional arguments +passing arguments with respect to a bijective argument sorting key, where 2 functions take the same type of arguments, but where their defined argument names are different + +An alternative for me would be if I could have every function in my code automatically wrapped with a wrapper that enables positional argument passing. I know several ways this could be done, such as running my script through another script which modifies it, but before resorting to that I'd like to consider simpler pythonic solutions.",For key arguments use **kwargs but for positional arguments use *args.,0.0,False,1,6934 +2020-07-28 22:24:48.747,NaN values with Pandas Spearman and Kendall correlations,"I am attempting to calculate Kendall's tau for a large matrix of data stored in a Pandas dataframe. Using the corr function, with method='kendall', I am receiving NaN for a row that has only one value (repeated for the length of the array). Is there a way to resolve it? The same issue happened with Spearman's correlation as well, presumably because Python doesn't know how to rank an array that has a single repeated value, which leaves me with Pearson's correlation -- which I am hesitant to use due to its normality and linearity assumptions. +Any advice is greatly appreciated!","I decided to abandon the complicated mathematics in favor of intuition. Because the NaN values arose only on arrays with constant values, it occurred to me that there is no relationship between it and the other data, so I set its Spearman and Kendall correlations to zero.",0.0,False,1,6935 +2020-07-28 23:02:11.343,Cannot find Python 3.8.2 path on Windows 10,"I have Windows 10 on my computer and when I use the cmd and check python --version, I get python 3.8.2. But when I try to find the path for it, I am unable to find it through searching on my PC in hidden files as well as through start menu. I don't seem to have a python 3.8 folder on my machine. 
Anybody have any ideas how to find it?","If you're using cmd (i.e. Command Prompt), and typing python works, then you can get the path for it by doing where python. It will list all the pythons it finds, but the first one is what it'll be using.",0.1352210990936997,False,1,6936 +2020-07-29 02:33:18.637,Pygame how to let balls collide,I want to make a script in pygame where two balls fly towards each other and when they collide they should bounce off from each other but I don't know how to do this so can you help me?,"It's pretty easy: you just check if the x coordinate is in the same spot as the other x coordinate. For example, if you had one of the x coordinates called x, and another one called i (there are 2 x coordinates, one for each of the balls), then -- oh, and before I say anything else, this example assumes your pygame window is 500,500 -- you could just say: if x == 250: x -= 15. And the other way around for i: if i == 250: i += 15. There you go! Obviously there are a few changes you have to make, but this is the basic code, and I think you will understand it",0.0,False,1,6937 +2020-07-29 08:54:18.833,How to set intervals between multiple requests AWS Lambda API,"I have created an API using an AWS Lambda function (using Python). Now my react js code hits this API whenever an event fires. So the user can call the API as many times as events are fired. Now the problem is that we are not getting the responses from the lambda API sequentially. Sometimes we get the response to our last request faster than the response to the previous request. +So we need to handle our responses in the Lambda function sequentially, maybe by adding some delay between 2 requests or maybe by implementing throttling. So how can I do that.","Did you check the concurrency setting on Lambda? You can throttle the lambda there. +But if you throttle the lambda and the requests being sent are not being received, the application sending the requests might be receiving an error unless you are storing the requests somewhere on AWS for being processed later. +I think putting an SQS in front of lambda might help. You will be hitting API gateway, the requests get sent to SQS, lambda polls requests concurrently (you can control the concurrency) and then sends the response back.",0.1352210990936997,False,2,6938 +2020-07-29 08:54:18.833,How to set intervals between multiple requests AWS Lambda API,"I have created an API using an AWS Lambda function (using Python). Now my react js code hits this API whenever an event fires. So the user can call the API as many times as events are fired. Now the problem is that we are not getting the responses from the lambda API sequentially. Sometimes we get the response to our last request faster than the response to the previous request. +So we need to handle our responses in the Lambda function sequentially, maybe by adding some delay between 2 requests or maybe by implementing throttling. So how can I do that.","You can use an SQS FIFO Queue as a trigger on the Lambda function, set Batch size to 1, and the Reserved Concurrency on the Function to 1. The messages will always be processed in order and will not concurrently poll the next message until the previous one is complete. +SQS triggers do not support Batch Window - which will 'wait' until polling the next message. 
This is a feature for Stream based Lambda triggers (Kinesis and DynamoDB Streams) +If you want to streamlined process, Step Function will let you manage states using state machines and supports automatic retry based off the outputs of individual states.",1.2,True,2,6938 +2020-07-29 11:03:18.770,"Is it possible to store an image with a value in a way similar to an array, in a database (Firebase or any other)?","Would it be possible to store an image and a value together in a database? Like in a array? +So it would be like [image, value]. I’m just trying to be able to access the image to print that and then access the value later (for example a image if a multi-choice question and its answer is the value). +Also how would I implement and access this? I’m using Firebase with the pyrebase wrapper for python but if another database is more suitable I’m open to suggestions.","you can set your computer as a server and in database you can store like [image_path, value].",0.0,False,1,6939 +2020-07-29 11:45:40.760,How to change the Anaconda environment of a jupyter notebook?,"I have created a new Anaconda environnement for Python. I managed to add it has an optional environnement you can choose when you create a new Notebook. Hovewer, I'd like to know how can I change the environnement of an already existing Notebook.","open your .ipynb file on your browser. On top, there is Kernel tab. You can find your environments under Change Kernel part.",0.2012947653214861,False,1,6940 +2020-07-29 13:58:51.300,"'pychattr' library in Python, 'n_simulations' parameter","Does anyone know if it is possible to use n_simulation = None in 'MarkovModel' algorithm in 'pychhatr' library in Python? +It throws me an error it must be an integer, but in docsting i have information like that: +'n_simulations : one of {int, None}; default=10000' +I`d like to do something like nsim = NULL in 'markov_model' in 'ChannelAttribution' package in R, these two algorithms are similarly implemented. +I don`t know how does it works exactly, how many simulations from a transition matrix I have using NULL. +Could anyone help with this case? +Regards, +Sylwia","Out of curiosity I spent some minutes staring intensely at the source code of both pychattr module and ChannelAttribution package. +I'm not really familiar with the model, but are you really able to call this in R with ""nsim=NULL""? Unless I missed something if you omit this parameter it will use value 100000 as the default and if parameter exists, the R wrapper will complain if it's not a positive number. +Regards, +Maciej",0.0,False,2,6941 +2020-07-29 13:58:51.300,"'pychattr' library in Python, 'n_simulations' parameter","Does anyone know if it is possible to use n_simulation = None in 'MarkovModel' algorithm in 'pychhatr' library in Python? +It throws me an error it must be an integer, but in docsting i have information like that: +'n_simulations : one of {int, None}; default=10000' +I`d like to do something like nsim = NULL in 'markov_model' in 'ChannelAttribution' package in R, these two algorithms are similarly implemented. +I don`t know how does it works exactly, how many simulations from a transition matrix I have using NULL. +Could anyone help with this case? +Regards, +Sylwia","I checked that 'pychattr' (Python) doesn`t support value None but it supports n_simulations = 0 and it sets n_simulations to 1e6 (1 000 000). +'ChannelAttribution' (R) replaces nsim = NULL and nsim = 0 to nsim = 1e6 (1 000 000) too. 
+In latest version of 'ChannelAttribution' (27.07.2020) we have nsim_start parameter instead of nsim and it doesn`t support 0 or NULL value anymore. +Important: default value of nsim_start is 1e5 (100 000) and from my experience it`s not enough in many cases. +Regards, +Sylwia",0.0,False,2,6941 +2020-07-29 16:10:55.583,How to know the alpha or critical value of your t test analysis?,"How do you decide the critical values(alpha) and analyze with the p value +example: stats.ttest_ind(early['assignment1_grade'], late['assignment1_grade']) +(2 series with score of their assignments) +I understand the concept that if the p value is greater than the alpha value then the null hypothesis cant be neglected. +Im doing a course and instructor said that the alpha value here is 0.05 but how do you determine it.","The alpha value cannot be determined in the sense that there were a formula to calculate it. Instead, it is arbitrarily chosen, ideally before the study is conducted. +The value alpha = 0.05 is a common choice that goes back to a suggestion by Ronald Fisher in his influential book Statistical Methods for Research Workers (first published in 1925). The only particular reason for this value is that if the test statistic has a normal distribution under the null hypothesis, then for a two-tailed test with alpha = 0.05 the critical values of the test statistic will be its mean plus/minus 2 (more exactly, 1.96) times its standard deviation. +In fact, you don't need alpha when you calculate the p value, because you can just publish the p value and then every reader can decide whether to consider it low enough for any given purpose or not.",0.0,False,1,6942 +2020-07-31 14:50:10.383,Giving interactive control of a Python program to the user,"I need my Python program to do some stuff, and at a certain point give control to the user (like a normal Python shell when you run python3 or whatever) so that he can interact with it via command line. I was thinking of using pwntools's interactive() method but I' m not sure how I would use that for the local program instead of a remote. +How would I do that? +Any idea is accepted, if pwntools is not needed, even better.","Use IPython +If you haven't already, add the package IPython using pip, anaconda, etc. +Add to your code: +from IPython import embed +Then where you want a ""breakpoint"", add: +embed() +I find this mode, even while coding to be very efficient.",0.3869120172231254,False,1,6943 +2020-07-31 15:51:48.670,Python Coverage how to generate Unittest report,"In python I can get test coverage by coverage run -m unittest and the do coverage report -m / coverage html to get html report. +However, it does not show the actual unit test report. The unit test result is in the logs, but I would like to capture it in a xml or html, so I can integrate it with Jenkins and publish on each build. This way user does not have to dig into logs. +I tried to find solution to this but could not find any, please let me know, how we can get this using coverage tool. +I can get this using nose2 - nose2 --html-report --with-coverage --coverage-report html - this will generate two html report - one for unit test and other for coverage. 
But for some reason this fails when I run it with the actual project (no coverage data collected / reported)","Ok, for those who end up here, I solved it with:
+nose2 --html-report --with-coverage --coverage-report html --coverage ./
+The issue I was having earlier with 'no coverage data' was fixed by specifying the directory where the coverage should be reported; in the command above it's with --coverage ./",1.2,True,1,6944
+2020-08-01 13:20:07.317,Rename hundred or more column names in pandas dataframe,"I am working with the Johns Hopkins Covid data for personal use to create charts. The data shows cumulative deaths by country; I want deaths per day. Seems to me the easiest way is to create two dataframes and subtract one from the other. But the file has column names as dates, and the code, e.g. df3 = df2 - df1, subtracts the columns with the matching dates. So I want to rename all the columns with some easy index, for example, 1, 2, 3, ....
+I cannot figure out how to do this.","Thanks for the time and effort, but I figured out a simple way.
+for i, row in enumerate(df):
+df.rename(columns = { row : str(i)}, inplace = True)
+to change the column names, and then
+for i, row in enumerate(df):
+df.rename(columns = { row : str( i + 43853)}, inplace = True)
+to change them back to the dates I want.",0.0,False,1,6945
+2020-08-02 09:58:49.600,JWT authorization and token leaks,"I need help understanding the security of JWT tokens used for login functionality. Specifically, how does it prevent an attack from an attacker who can see the user's packets? My understanding is that, encrypted or not, if an attacker gains access to a token, they'll be able to copy the token and use it to log in themselves and access a protected resource. I have read that this is why the time-to-live of a token should be short. But how much does that actually help? It doesn't take long to grab a resource. And if the attacker could steal a token once, can't they do it again after the refresh?
+Is there no way to verify that a token being sent by a client is being sent from the same client that you sent it to? Or am I missing the point?","how does it prevent an attack from an attacker who can see the user's packets?

Just because you can see someone's packets doesn't mean that you can see the contents. HTTPS encrypts the traffic, so even if someone manages to capture your traffic, they will not be able to extract the JWT out of it. Every website that is using authentication should only run through HTTPS. If someone is able to perform a man-in-the-middle attack, then that is a different story.

they'll be able to copy the token and use it to log in themselves and access a protected resource

Yes, but only as the user they stole the token from. JWTs are signed, which means that you can't modify their content without breaking the signature, which will be detected by the server (at least it is computationally infeasible to find a hash collision such that you could modify the content of the JWT). For highly sensitive access (bank accounts, medical data, enterprise cloud admin accounts...) you will need at least 2-factor authentication.

And if the attacker could steal a token once, can't they do it again after the refresh?

Possibly, but that depends on how the token has been exposed.
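+(To make the signature point concrete, a minimal PyJWT sketch; the secret and the claim are illustrative assumptions:)
+import jwt
+token = jwt.encode({'sub': 'user1'}, 'server-secret', algorithm='HS256')
+jwt.decode(token, 'server-secret', algorithms=['HS256'])  # verifies and returns the payload
+# a token with a tampered payload fails verification with InvalidSignatureError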
If the attacker sits on the unencrypted channel between you and the server, then sure, they can repeat the same process, but this exposure might be the result of a temporary glitch/human mistake which might soon be repaired, which will prevent the attacker from using the token once it expires.

Is there no way to verify that a token being sent by a client is being sent from the same client that you sent it to?

If the attacker successfully performs a man-in-the-middle attack, they can forge any information that you might use to verify the client, so the answer is no, there is no 100% reliable way to verify the client.

The biggest issue I see with JWTs is not JWTs themselves but the way they are handled by some people (stored in unencrypted browser local storage, containing PII, no HTTPS, no 2-factor authentication where necessary, etc...)",1.2,True,1,6946
+2020-08-02 12:15:56.920,Python runs in Docker but not in Kubernetes hosted in Raspberry Pi cluster running Ubuntu 20,"Here is the situation.
+Trying to run a Python Flask API in Kubernetes hosted in a Raspberry Pi cluster; nodes are running Ubuntu 20. The API is containerized into a Docker container on the Raspberry Pi control node to account for architecture differences (ARM).
+When the API and Mongo are run outside K8s on the Raspberry Pi, just using the Docker run command, the API works correctly; however, when the API is applied as a Deployment on Kubernetes, the pod for the API fails with a CrashLoopBackoff and logs show 'standard_init_linux.go:211: exec user process caused ""exec format error""'
+Investigations show that the exec format error might be associated with problems related to building against different CPU architectures. However, having built the Docker image on a Raspberry Pi, and successfully running the API on that architecture, I am unsure this could be the source of the problem.
+It has been two days and all attempts have failed. Can anyone help?","Fixed; however, something doesn't seem right.
+The Kubernetes Deployment was always deployed onto the same node. I connected to that node and ran the Docker container and it wouldn't run; the ""exec format error"" would occur. So, it looks like it was a node-specific problem.
+I copied the API and Dockerfile onto the node and ran Docker build to create the image. It now runs. That does not make sense, as the Docker image should have everything it needs to run.
+Maybe it's because a previous image built against x86 (the development machine) remained in that node's Docker cache/repository. Maybe the image on the node is not overwritten with newer images that have the same name and version number (the version number didn't increment). That would seem to be the case, as the spin-up time of the image on the remote node is fast, suggesting the new image isn't copied to the remote node. That is likely to be what it is.
+I will post this anyway as it might be useful.

Edit: allow me to clarify some more; the root of this problem was ultimately that there was no shared image repository in the cluster. Images were being manually copied onto each RPI (running ARM64) from a laptop (not running ARM64) and this manual process caused the problem.
+An image built on the laptop was based on a base image incompatible with ARM64; this was manually copied to all RPIs in the cluster. This caused the Exec Format error.
+Building the image on the RPI pulled a base image that supported ARM64; however, this build had to be done on every RPI, because there was no central repository in the cluster from which Kubernetes could pull newly built ARM64-compatible images to the other RPI nodes.
+Solution: a shared repository
+Hope this helps.",0.6730655149877884,False,1,6947
+2020-08-02 12:29:32.010,Getting json from html with same name,"I have an issue with scraping a page and getting json from it. = 3.8, DLLs are no longer imported from the
+PATH. If gdalXXX.dll is in the PATH, then set the
+USE_PATH_FOR_GDAL_PYTHON=YES environment variable to feed the PATH
+into os.add_dll_directory().

I've been looking for a solution to this but can't seem to figure out how to fix this. Does anybody have a solution?","use:
+from osgeo import gdal
+instead of:
+import gdal",0.0,False,1,7107
+2020-11-06 04:17:49.740,How to Get coordinates of detected area in opencv using python,"I have been able to successfully detect an object (face and eye) using the haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get the coordinates of the midpoint of the two eyes, and I want to store them in an array. Can anyone help me? How can I do this? Any guide?","I suppose you have the coordinates for the bounding boxes of both eyes.
+Something like X1:X2 Y1:Y2 for both boxes.
+You just have to find the center of these boxes: (X2-X1)/2+X1 and (Y2-Y1)/2+Y1
+You'll get two XY coordinates from this; basically just do the above again with these coordinates, and you'll get the center point",0.0,False,2,7108
+2020-11-06 04:17:49.740,How to Get coordinates of detected area in opencv using python,"I have been able to successfully detect an object (face and eye) using the haar cascade classifier in python using opencv. When the object is detected, a rectangle is shown around the object. I want to get the coordinates of the midpoint of the two eyes, and I want to store them in an array. Can anyone help me? How can I do this? Any guide?","So you already detected the eye? You also have a bounding box around the eye?
+So your question comes down to calculating the distance between 2 bounding boxes and then dividing it by 2?
+Or do I misunderstand?
+If you need the exact center between the two eyes, a good way to go about that would be to take the center of the 2 boxes bounding the 2 eyes.
+Calculate the distance between those two points and divide it by 2.
+If you're willing to post your code, I'm willing to help more with writing code.",0.0,False,2,7108
+2020-11-06 12:51:43.300,How to search on Google with Selenium in Python?,I'm really new to web scraping. Is there anyone that could tell me how to search on google.com with Selenium in Python?,Selenium probably isn't the best. Other libraries/tools would work better. BeautifulSoup is the first one that comes to mind,0.1352210990936997,False,1,7109
+2020-11-06 18:15:31.933,Download cloudtrail event,"I need some advice on one of my use cases regarding Cloudtrail and Python boto3.
+I have some cloudtrail events configured, and I need to send a report of all those events manually by downloading the file of events.
+I am planning to automate this using python boto3. Can you please advise how I can use boto3 to get the cloudtrail events for some specific date I should pass at runtime, along with the csv or json files downloaded and sent over email. As of now I have created a python script which shows the cloudtrail events but is not able to download the files.
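+(For reference, a minimal boto3 sketch of the kind of lookup script in question; the region and date range here are assumptions:)
+import boto3
+from datetime import datetime
+ct = boto3.client('cloudtrail', region_name='us-east-1')  # region is an assumption
+resp = ct.lookup_events(StartTime=datetime(2020, 11, 1), EndTime=datetime(2020, 11, 2))
+events = resp['Events']  # each event dict can then be written out as csv/json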
Please advise","My suggestion is to simply configure the delivery of those events to an S3 bucket, so you have the file of events there. This configuration is part of your trail configuration and doesn't need boto3.
+You can then access the event files stored on S3 using boto3 (personally the best way to interact with AWS resources), and manipulate those files as you prefer.",0.0,False,1,7110
+2020-11-07 02:37:34.713,Saving Tensorflow models with custom layers,"I read through the documentation, but something wasn't clear to me: if I coded a custom layer and then used it in a model, can I just save the model as a SavedModel and the custom layer automatically goes within it, or do I have to save the custom layer too?
+I tried saving just the model in H5 format and not the custom layer. When I tried to load the model, I had an error about the custom layer not being recognized or something like this. Reading through the documentation, I saw that saving custom objects to H5 format is a bit more involved. But how does it work with SavedModels?","If I understand your question, you should simply use tf.keras.models.save_model(model, 'file_name', save_format='tf').
+My understanding is that the 'tf' format automatically saves the custom layers, so loading doesn't require all libraries to be present. This doesn't extend to all custom objects, but I don't know where that distinction lies. If you want to load a model that uses non-layer custom objects, you have to use the custom_objects parameter in tf.keras.models.load_model(). This is only necessary if you want to train immediately after loading. If you don't intend to train the model immediately, you should be able to forgo custom_objects and just set compile=False in load_model.
+If you want to use the 'h5' format, you supposedly have to have all libraries/modules/packages that the custom object utilizes present and loaded in order for the 'h5' load to work. I know I've done this with an initializer before. This might not matter for layers, but I assume that it does.
+You also need to implement the get_config() and from_config() functions in the custom object definition in order for 'h5' to save and load properly.",0.0,False,1,7111
+2020-11-07 06:19:25.707,How to determine whether function returns an iterable object which calculates results on demand?,"How can one surely tell that a function returns a lazy iterable object, which calculates results on demand, rather than one which returns already calculated results?
+For e.g. the filter() function from python's documentation says:

Construct an iterator from those elements of iterable for which function returns true

Reading that I can tell that this function returns an object which implements the iterable protocol, but I can't be sure it won't eat up all my memory if I use it with a generator which reads values from a 16gb file, until I read further and see the Note:

Note that filter(function, iterable) is equivalent to the generator expression (item for item in iterable if function(item))

So, how can one tell that a function calculates returned results on demand, and is not just iterating over temporary lists which hold already calculated values? Do I have to inspect the sources?","If the doc says that a function returns an iterator, it's pretty safe to assume it calculates items on the fly to save memory.
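+(A quick way to convince yourself in code: filter does no work until you pull items from it.)
+def noisy(x):
+    print('checking', x)
+    return x > 0
+f = filter(noisy, [1, -2, 3])  # nothing is printed yet: evaluation is deferred
+next(f)  # prints 'checking 1' and returns 1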
If it did calculate all its items at once, it would almost certainly return a list.",1.2,True,1,7112
+2020-11-07 12:40:31.890,How to get only the whole number without rounding-off?,"how do you get only the whole number of a non-integer value without the use of rounding off? For example:

w = 2.20
w = 2.00

x = 2.50
x = 2.00

y = 3.70
y = 3.00

z = 4.50
z = 4.00

Is it as simple as this, or might it go wrong for some values?
x = 2.6 or x = 2.5 or x = 2.4
x = int(x)
x = 2

Is it really as simple as that? Thanks for answering this stupid question.","you can just divide it by 1,
but use floor division (//) like this:
+x = x // 1
+(Note that // rounds toward negative infinity, so for negative values like -2.5 it gives -3.0; int(x) or math.trunc(x) truncates toward zero instead.)",0.6730655149877884,False,1,7113
+2020-11-08 15:39:33.647,How to install OpenCV in Docker (CentOs)?,"I am trying to install OpenCV in a docker container (CentOS).
+I tried installing python first and then tried yum install opencv-contrib but it doesn't work.
+Can someone help me out as to how to install OpenCV in Docker (CentOS)?","To install OpenCV use the command: sudo yum install opencv opencv-devel opencv-python
+And when the installation is completed, use this command to verify: pkg-config --modversion opencv",0.0,False,1,7114
+2020-11-10 12:41:13.300,How can I bypass the 429-error from www.instagram.com?,"I'm writing today because I have a problem with selenium.
+My goal is to make a fully automated bot that creates an account with parsed details (mail, pass, birth date...). So far, I've managed to almost create the bot (I just need to access gmail and get the confirmation code).
+My problem is here: after trying a lot of things, I get a Failed to load resource: the server responded with a status of 429 ()
+So, I guess, instagram is blocking me.
+How could I bypass this?","A status code of 429 means that you've bombarded Instagram's server too many times, and that is why Instagram has blocked your IP.
+This is done mainly to prevent DDOS attacks.
+The best thing would be to try after some time (there might be a Retry-After header in the response).
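+(For example, with the requests library, assuming resp is the 429 response you got back; Retry-After can also be an HTTP date, seconds are assumed here:)
+import time
+delay = int(resp.headers.get('Retry-After', 60))  # fall back to 60s if the header is absent
+time.sleep(delay)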
-try -sudo gcloud config set project $PROJECT -and -sudo gcloud config set compute/region $REGION",0.0,False,1,6285 -2019-09-04 13:31:44.333,how to use breakpoint in mydll.dll using python3 and pythonnet,"I have function imported from a DLL file using pythonnet: -I need to trace my function(in a C# DLL) with Python.",you can hook a Visual Studio debugger to python.exe which runs your dll,0.0,False,1,6286 -2019-09-04 13:40:07.500,Python Oracle DB Connect without Oracle Client,"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine. -Is it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed? -Like in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python. -Any help is appreciated -Installing oracle client, connect is possible through cx_Oracle module. -But in systems where the client is not installed, how can we connect to the DB.","It is not correct that java can connect to oracle without any oracle provided software. -It needs a compatible version of ojdbc*.jar to connect. Similarly python's cx_oracle library needs oracle instant-client software from oracle to be installed. -Instant client is free software and has a small footprint.",0.2655860252697744,False,2,6287 -2019-09-04 13:40:07.500,Python Oracle DB Connect without Oracle Client,"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine. -Is it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed? -Like in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python. -Any help is appreciated -Installing oracle client, connect is possible through cx_Oracle module. -But in systems where the client is not installed, how can we connect to the DB.",Installing Oracle client is a huge pain. Could you instead create a Webservice to a system that does have OCI and then connect to it that way? This might end being a better solution rather than direct access.,0.0,False,2,6287 -2019-09-05 03:55:31.020,How to take multi-GPU support to the OpenNMT-py (pytorch)?,"I used python-2.7 version to run the PyTorch with GPU support. I used this command to train the dataset using multi-GPU. -Can someone please tell me how can I fix this error with PyTorch in OpenNMT-py or is there a way to take pytorch support for multi-GPU using python 2.7? -Here is the command that I tried. - - -CUDA_VISIBLE_DEVICES=1,2 - python train.py -data data/demo -save_model demo-model -world_size 2 -gpu_ranks 0 1 - - -This is the error: - -Traceback (most recent call last): - File ""train.py"", line 200, in - main(opt) - File ""train.py"", line 60, in main - mp = torch.multiprocessing.get_context('spawn') - AttributeError: 'module' object has no attribute 'get_context'","Maybe you can check whether your torch and python versions fit the openmt requiremen. -I remember their torch is 1.0 or 1.2 (1.0 is better). You have to lower your latest of version of torch. 
Hope that would work",0.0,False,1,6288 -2019-09-05 18:28:58.863,What does wave_read.readframes() return if there are multiple channels?,"I understand how the readframes() method works for mono audio input, however I don't know how it will work for stereo input. Would it give a tuple of two byte objects?","A wave file has: -sample rate of Wave_read.getframerate() per second (e.g 44100 if from an audio CD). -sample width of Wave_read.getsampwidth() bytes (i.e 1 for 8-bit samples, 2 for 16-bit samples) -Wave_read.getnchannels() channels (typically 1 for mono, 2 for stereo) -Every time you do a Wave_read.getframes(N), you get N * sample_width * n_channels bytes.",0.0,False,1,6289 -2019-09-07 03:28:26.677,Does SciPy have utilities for parsing and keeping track of the units associated with its constants?,"scipy.constants.physical_constants returns (value, unit, uncertainty) tuples for many specific physical constants. The units are given in the form of a string. (For example, one of the options for the universal gas constant has a unit field of 'J kg^-1 K^-1'.) -At first blush, this seems pretty useful. Keeping track of your units is very important in scientific calculations, but, for the life of me, I haven't been able to find any facilities for parsing these strings into something that can be tracked. Without that, there's no way to simplify the combined units after different values have been added, subtracted, etc with eachother. -I know I can manually declare the units of constants with separate libraries such as what's available in SymPy, but that would make ScyPy's own units completely useless (maybe just a convenience for printouts). That sounds pretty absurd. I can't imagine that ScyPy doesn't know how to deal with units. -What am I missing? + +w = 2.20 +w = 2.00 + +x = 2.50 +x = 2.00 + +y = 3.70 +y = 3.00 + +z = 4.50 +z = 4.00 + +Is it as simple as this or that might get wrong in some values? +x = 2.6 or x = 2.5 or x = 2.4 +x = int(x) +x = 2 + +Is it really simple as that? Thanks for answering this stewpid question.","you can just divided it into (1) +but use (//) like this: +x = x // 1",0.6730655149877884,False,1,7113 +2020-11-08 15:39:33.647,How to install OpenCV in Docker (CentOs)?,"I am trying to install OpenCV in a docker container (CentOS). +I tried installing python first and then tried yum install opencv-contrib but it doesn't work. +Can someone help me out as to how to install OpenCV in Docker (CentOS)?","To install OpenCV use the command: sudo yum install opencv opencv-devel opencv-python +And when the installation is completed use the command to verify: pkg-config --modversion opencv",0.0,False,1,7114 +2020-11-10 12:41:13.300,How can I bypass the 429-error from www.instagram.com?,"i'm solliciting you today because i've a problem with selenium. +my goal is to make a full automated bot that create an account with parsed details (mail, pass, birth date...) So far, i've managed to almost create the bot (i just need to access to gmail and get the confirmation code). +My problem is here, because i've tried a lot of things, i have a Failed to load resource: the server responded with a status of 429 () +So, i guess, instagram is blocking me. +how could i bypass this ?","Status code of 429 means that you've bombarded Instagram's server too many times ,and that is why Instagram has blocked your ip. +This is done mainly to prevent from DDOS attacks. +Best thing would be to try after some time ( there might be a Retry-After header in the response). 
+Also, increase the time interval between each request and set the specific count of number of requests made within a specified time (let's say 1 hr).",0.0,False,2,7115 +2020-11-10 12:41:13.300,How can I bypass the 429-error from www.instagram.com?,"i'm solliciting you today because i've a problem with selenium. +my goal is to make a full automated bot that create an account with parsed details (mail, pass, birth date...) So far, i've managed to almost create the bot (i just need to access to gmail and get the confirmation code). +My problem is here, because i've tried a lot of things, i have a Failed to load resource: the server responded with a status of 429 () +So, i guess, instagram is blocking me. +how could i bypass this ?","The answer is in the description of the HTTP error code. You are being blocked because you made too many requests in a short time. +Reduce the rate at which your bot makes requests and see if that helps. As far as I know there's no way to ""bypass"" this check by the server. +Check if the response header has a Retry-After value to tell you when you can try again.",0.0,False,2,7115 +2020-11-11 03:48:10.147,Tweepy API Search Filter,"I'm currently learning how to use the Tweepy API, and is there a way to filter quoted Tweets and blocked users? I'm trying to stop search from including quoted Tweets and Tweets from blocked users. I have filtered Retweets and replies already. +Here's what I have: +for tweet in api.search(q = 'python -filter:retweets AND -filter:replies', lang = 'en', count = 100):","To filter quotes, use '-filter:quote'",1.2,True,1,7116 +2020-11-11 22:28:13.737,Read a csv file from s3 excluding some values,"How can I read a csv file from s3 without few values. +Eg: list [a,b] +Except the values a and b. I need to read all the other values in the csv. I know how to read the whole csv from s3. sqlContext.read.csv(s3_path, header=True) but how do I exclude these 2 values from the file and read the rest of the file.","You don't. A file is a sequential storage medium. A CSV file is a form of text file: it's character-indexed. Therefore, to exclude columns, you have to first read and process the characters to find the column boundaries. +Even if you could magically find those boundaries, you would have to seek past those locations; this would likely cost you more time than simply reading and ignoring the characters, since you would be interrupting the usual, smooth block-transfer instructions that drive most file buffering. +As the comments tell you, simply read the file as is and discard the unwanted data as part of your data cleansing. If you need the file repeatedly, then cleanse it once, and use that version for your program.",0.2012947653214861,False,1,7117 +2020-11-12 19:06:49.007,python on windows 10 cannot upgrade modules in virtual environment,"I has been forced to develop python scripts on Windows 10, which I have never been doing before. +I have installed python 3.9 using windows installer package into C:\Program Files\Python directory. +This directory is write protected against regular user and I don't want to elevate to admin, so when using pip globally I use --user switch and python installs modules to C:\Users\AppData\Roaming\Python\Python39\site-packages and scripts to C:\Users\AppData\Roaming\Python\Python39\Scripts directory. +I don't know how he sets this weird path, but at least it is working. I have added this path to %Path% variable for my user. 
+Problems start, when I'm trying to use virtual environment and upgrade pip: + +I have created new project on local machine in C:\Users\Projects and entered the path in terminal. +python -m venv venv +source venv\Scrips\activate +pip install --upgrade pip + +But then I get error: +ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access denied: 'C:\Users\\AppData\Local\Temp\pip-uninstall-7jcd65xy\pip.exe' +Consider using the --user option or check the permissions. +So when I try to use --user flag I get: +ERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv. +So my questions are: + +why it is not trying to install everything inside virtual enviroment (venv\Scripts\pip.exe)? +how I get access denied, when this folder suppose to be owned by my user? + +When using deprecated easy_install --upgrade pip everything works fine.",I recently had the same issue for some other modules. My solution was simply downgrade from python 3.9 to 3.7. Or make an virtual environment for 3.7 and use that and see how it works.,0.3869120172231254,False,1,7118 +2020-11-13 07:25:48.307,How to show a variable value on the webcam video stream? (python OpenCV),"I coded to open webcam video on a new window using OpenCV cv2.VideoCapture(0). +You can display text on webcam video using cv2.putText() command. But it displays string values only. +How to put varying values in the webcam video that is being displayed on a new window? +For example, if value of variable p is changing all the time, you can easily display it on the command window by writing print(p). +But how can we display values of p over the webcam video?","You can also show changing variables using cv2.putText() method. Just need to convert the variable into string using str() method. Suppose you want to show variable x that is for example an integer and it is always changing. You can use cv2.putText(frame, str(x), org, font, fontScale, color, thickness, cv2.LINE_AA) to do it (You should fill org,font, etc.).",1.2,True,1,7119 +2020-11-13 09:59:06.667,Is there any solution regarding to PyQt library doesn't work in Mac OS Big Sur?,"I've done some project using PyQt library for my class assignmnet. +And I need to check my application working before I submit it. +Today, 3 hours ago I updated my Mac book OS to Big Sur. +And I found out that PyQt library doesn't work. It doesn't show any GUI. +Are there someone know how to fix it?","Related to this, after upgrading to BigSur my app stopped launching its window...I am using the official Qt supported binding PySide2/shiboken2 +Upgrading from PySide2 5.12 to 5.15 fixed the issue. +Steps: + +Remove PySide2/shiboken2 +pip3 uninstall PySide2 +pip3 uninstall shiboken2 + +Reinstall +pip3 install PySide2",0.0,False,2,7120 +2020-11-13 09:59:06.667,Is there any solution regarding to PyQt library doesn't work in Mac OS Big Sur?,"I've done some project using PyQt library for my class assignmnet. +And I need to check my application working before I submit it. +Today, 3 hours ago I updated my Mac book OS to Big Sur. +And I found out that PyQt library doesn't work. It doesn't show any GUI. +Are there someone know how to fix it?","Rolling back to PyQt5==5.13.0 fixed the issue for me! 
+you should uninstall PyQt5 and then install it using +pip install PyQt5==5.13.0",0.5457054096481145,False,2,7120 +2020-11-13 23:42:07.327,access methods on one socketio namespace from a different one,"I have a flask application that uses flask-socketio and python-socketio to facilitate communication between a socketio server in the cloud and a display device via a hardware device. +I have a display namespace which exposes the display facing events, and also uses a separate client class which connects and talks to the server in the cloud. This works well as designed, but now I want to trigger the connection method in my client class from a different namespace. So far I have not been able to get this to work. +What I have tried is adding the display namespace class to the flask context, then passing that into the socketio.on_namespace() method. Then from the other namespace I am grabbing it from current_app and trying to trigger the connection to the cloud server. This returns a 'RuntimeError: working outside of application context' error. +So at this point I'm still researching how to do this correctly, but I was hoping someone has dealt with something like this before, and knows how to access methods on one namespace from a different one.","I found a solution. Instead of instantiating my client class from the display namespace, I instantiate it before I add the namespaces to socketio. Then I pass the client object into both namespaces when I call the socketio.on_namespace() method.",0.0,False,1,7121 +2020-11-15 07:45:39.937,pypi package imports python file instead of package,"After pip install package_name from my recently uploaded pypi package +It imports python filename directly after installing, +I wanted to use like below +import package_name or from package_name import python_file +but this doesnt work instead this works +import python_file even package is installed name is package_name +pypi package name package_name and +My directory structure is below + +package_name + +setup.py + +folder1 + +python_file + + + + + +In setup.py , i've used package_dir={'': 'folder_1'} +but even import folder_1 or from folder_1 import python_file didnt worked. +I tried if adding __init__.py inside folder_1, it didnt solved. +I've been following Mark Smith - Publish a (Perfect) Python Package on PyPI, +which told this way , but any idea what is happening, how can i solve it??","So what you actual did is to tell python that the root folder is folder_1. +This is not what you want. +You just need to tell that folder_1 (or actually replace it by package_name, see below) is a package and to declare it using: +packages = {'folder1'}. +Usually, people don't do it but let the function find_packages() to do the work for them by packages=find_packages() +In addition package folder should contain a __init__.py. +to conclude you need a folder structure like below and use find_packages(). +It is OK and even popular choice that the project name and it single main package have the same name. + +project_name + +setup.py +package_name + +__init__.py +python_file.py",1.2,True,1,7122 +2020-11-15 11:43:31.347,Checkers board in kivy,"What it is the best way to make a chessboard for checkers using Kivy framework? +I have board.png, white.png, black.png, white_q.png, black_q.png files already. I wonder how to assign to each black tile on my board.png its own coordinate. Should I create 32 transparent widgets placed on black tiles of board.png or it is impossible? And what widget to use for 24 checkers? 
Any ideas or it is too complicated using Kivy and I should use tkinter?","There are many ways you could do this. It isn't complicated, it's very easy. The best way depends more on how you want to structure your app than anything else. + +I wonder how to assign to each black tile on my board.png its own coordinate + +Set the pos attribute of a widget to control its position, or better in this case use a layout that does what you want. For instance, adding your squares to a GridLayout with the right number of columns will have the right effect without you needing to worry more about positioning them. + +Should I create 32 transparent widgets placed on black tiles of board.png or it is impossible? + +I don't understand what you're asking here. You can make transparent widgets if you want but I don't know why you'd want to. + +And what widget to use for 24 checkers? + +The real question is, what do you want the widget to do? e.g. if you want it to display an image then inherit from Image. +Overall this answer is very generic because your question is very generic. I suggest that if you're stuck, try to ask a more specific question about a task you're struggling with, and give a code example showing where you are now.",0.3869120172231254,False,1,7123 +2020-11-15 20:51:24.707,How to change the value of a variable at run time from another script at remote machine?,"I have a local computer A and remote computer B. Computer A has script client.py Computer B has server.py Script client.py has a variable port. Let's say port = 5535. +I am running client.py on Computer A, which is using the port number for socket communication. I need to change the port number to another port number while the client.py is running so it will switch to another server at runtime after notifying the client to change the port number. I am using pyzmq to send data from the client to the server sending a continuous stream of data. +Is this scenario possible and how can I do it?","Yes, it's possible. You may design / modify the (so far unseen) code so as to PUSH any such need to change a port# on-the-fly to the PULL-side, to release the 5535 and use another one. +The PULL-side shall then call .disconnect() and .close() methods, so as to release the said port 5535 ( plus notify that it has done so, perhaps by another PUSH/PULL to the .bind()-locked party, that it can now unbind and close the .bind()-locked port# 5535 too) and next setup a new connection to an announced ""new_port#"", received from the initial notification ( which ought have been already .bind()-locked on the PUSH-side, ought it not? :o) ). +That easy.",1.2,True,1,7124 +2020-11-16 09:47:54.700,without Loops to Sum Range of odd numbers,is there any way to sum odd numbers from 1 to n but without any loops and if there isn't a way how can i create this by fast algorithm to do this task in less than n loops.,"You can try the one below, which loop through from 1 to n, stepping 2 +sum(range(1,n,2))",0.0,False,1,7125 +2020-11-17 04:00:00.753,How do I activate python virtual environment from a different repo?,"So am working in a group project, we are using python and of the code is on GitHub. My question is how do I activate the virtual environment? Do I make one on my own using the ""python virtual -m venv env"" or the one that's on the repo, if there is such a thing. Thanks","Yes, you'll want to create your own with something like: python -m venv venv. The final argument specifies where your environment will live; you could put it anywhere you like. 
I often have a venv folder in Python projects, and just .gitignore it. +After you have the environment, you can activate it. On Linux: source venv/bin/activate. Once activated, any packages you install will go into it; you can run pip install -r requirements.txt for instance.",0.0,False,2,7126 +2020-11-17 04:00:00.753,How do I activate python virtual environment from a different repo?,"So am working in a group project, we are using python and of the code is on GitHub. My question is how do I activate the virtual environment? Do I make one on my own using the ""python virtual -m venv env"" or the one that's on the repo, if there is such a thing. Thanks","virtual env is used to make your original env clean. you can pip install virtualenv and then create a virtual env like virtualenv /path/to/folder then use source /path/to/folder/bin/activate to activate the env. then you can do pip install -r requirements.txt to install dependencies into the env. then everything will be installed into /path/to/folder/lib +alteratively, you can use /path/to/folder/bin/pip install or /path/to/folder/bin/python without activating the env.",0.2012947653214861,False,2,7126 +2020-11-17 12:28:03.713,Maintaining label encoding across different files in pandas,"I know how to use scikit-learn and pandas to encode my categorical data. I've been using the category codes in pandas for now which I later will transform into an OneHot encoded format for ML. +My issues is that I need to create a pre-processing pipeline for multiple files with the same data format. I've discovered that using the pandas category codes encoding is not consistent, even if the categories (strings) in the data are identical across multiple files. +Is there a way to do this encoding lexicographically so that it's done the same way across all files or is there any specific method that can be used which would result in the same encoding when applied on multiple files?","The LabelEncoder like all other Sklearn-Transformers has three certain methods: + +fit(): Creates the labels given some input data +transform(): Transforms data to the labels of the encoder instance. It must have called fit() before or will throw an error +fit_transform(): That's a convenience-method that will create the labels and transform the data directly. + +I'm guessing you are calling fit_transform everywhere. To fix this, just call the fit-method once (on a superset of all your data because it will throw an error if it encounters a label that was not present in the data you called fit on) and than use the transform method.",0.0,False,1,7127 +2020-11-18 17:55:34.847,Using Python to access DirectShow to create and use Virtual Camera(Software Only Camera),"Generally to create a Virtual Camera we need to create a C++ application and include DirectShow API to achieve this. But with the modules such as +win32 modules and other modules we can use win32 api which lets us use these apis in python. +Can anyone Help sharing a good documentation or some Sample codes for doing this?","There is no reliable way to emulate a webcam on Windows otherwise than supplying a driver. 
Many applications take simpler path with DirectShow, and emulate a webcam for a subset of DirectShow based applications (in particular, modern apps will be excluded since they don't use DirectShow), but even in this case you have to develop C++ camera enumation code and connect your python code with it.",0.3869120172231254,False,1,7128 +2020-11-19 19:45:23.240,No module names xlrd,"I am working out of R Studio and am trying to replicate what I am doing in R in Python. On my terminal, it is saying that I have xlrd already installed but when I try to import the package (import xlrd) in R Studio, it tells me: ""No module named 'xlrd'"". Does anyone know how to fix this?","I have solved this on my own. In your terminal, go to ls -a and this will list out applications on your laptop. If Renviron is there, type nano .Renviron to write to the Renviron file. Find where Python is stored on your laptop and type RETICULATE_PYTHON=(file path where Python is stored). ctrl + x to exit, y to save and then hit enter. Restart R studio and this should work for you.",0.3869120172231254,False,1,7129 +2020-11-20 13:54:11.863,How to Order a fraction of a Crypto (like Bitcoin) in zipline?,"Basically as you all know we can backtest our strategies in Zipline, the problem is that Zipline is developed for stock markets and the minimum order of an asset that can be ordered is 1 in those markets but in crypto markets we are able to order a fraction of a Crypto currency. +So how can I make zipline to order a fraction of Bitcoin base on the available capital?","You can simulate your test on a smaller scale, e.g. on Satoshi level (1e8). +I can think of two methods: + +Increase your capital to the base of 1e8, and leave the input as is. This way you can analyse the result in Satoshi, but you need to correct for the final portfolio value and any other factors that are dependent on the capital base. +Scale the input to Satoshi or any other level and change the handle_data method to either order on Satoshi level or based on your portfolio percentage using order_target_percent method. + +NOTE: Zipline rounds the inputs to 3 decimal points. So re-scaling to Satoshi turns prices that are lower than 5000 to NaN (not considering rounding errors for higher prices). My suggestion is to either use 1e5 for Bitcoin or log-scale.",0.0,False,1,7130 +2020-11-21 23:14:34.487,"Pandas, find and delete rows","Been searching for a while in order to understand how to do this basic task without any success which is very strange. +I have a dataset where some of the rows contain '-', I have no clue under which columns these values lie. +How do I search in the whole dataset (including all columns) for '-' and drop the rows containing this value? +thank you!","This is a bit more robust than wwnde's answer, as it will work if some of the columns aren't originally strings: +df.loc[~df.apply(lambda x: any('-' in str(col) for col in x), axis = 1)] +If you have data that's stored as datetime, it will display as having -, but will return an error if you check for inclusion without converting to str first. Negative numbers will also return True once converted to str. If you want different behavior, you'll have to do something more complicated, such as +df.loc[~df.apply(lambda x: any('-' in col if isinstance(col, str) else False for col in x), axis = 1)]",1.2,True,1,7131 +2020-11-22 09:23:23.773,"How to resize a depth map from size [400,400] into size [60,60]?","I have a depth map image which was obtained using a kinect camera. 
+In that image I have selected a region of size [400,400] and stored it as another image. +Now, I would like to know how to resize this image into a size of [x,y] in python.","I don't recommend to reduce resolution of depth map the same way like it is done for images. Imagine a scene with a small object 5 m before the wall: + +Using bicubic/bilinear algorithms you will get depth of something between the object and the wall. In reality there is just a free space in between. +Using nearest-neighbor interpolation is better but you are ignoring a lot of information and in some cases it may happed that the object just disappears. + +The best approach is to use the Mode function. Divide the original depth map into windows. Each window will represent one pixel in the downsized map. For each of them calculate the most frequent depth value. You can use Python's statistics.mode() function.",0.0,False,1,7132 +2020-11-22 16:19:49.853,Raspberry pi python editor,"I was writing code to make a facial recognition, but my code did not work because I was writing on verison 3, do you know how to download python 3 on the raspberry pi?","Linux uses package managers to download packages or programing languages +,raspberry pi uses apt(advanced package tool) +This is how you use APT to install python3: +sudo apt-get install python3 +OR +sudo apt install python3 +and to test if python3 installed correctly type: +python3 +If a python shell opens python3 has been installed properly",1.2,True,1,7133 +2020-11-23 15:05:15.013,how to authorize only flutter app in djano server?,"While I'm using Django as my backend and flutter as my front end. I want only the flutter app to access the data from django server. Is there any way to do this thing? +Like we use allowed host can we do something with that?",You can use an authentication method for it. Only allow for the users authenticated from your flutter app to use your backend.,0.3869120172231254,False,1,7134 +2020-11-23 17:14:54.653,pymongo getTimestamp without ObjectId,"in my mongodb, i have a collection where the docs are created not using ObjectId, how can I get the timestamp (generation_time in pymongo) of those docs? Thank you","If you don't store timestamps in documents, they wouldn't have any timestamps to retrieve. +If you store timestamps in some other way than via ObjectId, you would retrieve them based on how they are stored.",1.2,True,1,7135 +2020-11-24 05:55:23.327,using a pandas dataframe without headers to write to mysql with to_sql,"I have a dataframe created from an excel sheet (the source). +The excel sheet will not have a header row. +I have a table in mysql that is already created (the target). It will always be the exact same layout as the excel sheet. +source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows, header=None) +db_engine = [function the returns my mysql engine] +source_data.to_sql(name=table_name, con=db_engine, schema=schema_name, if_exists='append', index=False) +This fails with an error due to pandas using numbers as column names in the insert statement.. +[SQL: INSERT INTO [tablename] (0, 1) VALUES (%(0)s, %(1)s)] +error=(pymysql.err.OperationalError) (1054, ""Unknown column '0' in 'field list' +how can i get around this? Is there a different insert method i can use? do i really have to load up the dataframe with the proper column names from the table?","Maybe after importing the data into Pandas, you can rename the columns to something that is not a number, e.g. ""First"", ""Second"", etc. 
or [str(i) for i in range(len(source_data))]
+This would resolve the issue of SQL being confused by the numerical labels.",0.0,False,2,7136
+2020-11-24 05:55:23.327,using a pandas dataframe without headers to write to mysql with to_sql,"I have a dataframe created from an excel sheet (the source).
+The excel sheet will not have a header row.
+I have a table in mysql that is already created (the target). It will always be the exact same layout as the excel sheet.
+source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows, header=None)
+db_engine = [function that returns my mysql engine]
+source_data.to_sql(name=table_name, con=db_engine, schema=schema_name, if_exists='append', index=False)
+This fails with an error due to pandas using numbers as column names in the insert statement..
+[SQL: INSERT INTO [tablename] (0, 1) VALUES (%(0)s, %(1)s)]
+error=(pymysql.err.OperationalError) (1054, ""Unknown column '0' in 'field list'
+How can I get around this? Is there a different insert method I can use? Do I really have to load up the dataframe with the proper column names from the table?","Found no alternatives, so I went with adding the column names to the data frame during the read.
+So first I constructed the list of column names:
+sql = (""select [column_name] from [table i get my metadata from];"")
+db_connection = [my connection for sqlalchemy]
+result = db_connection.execute(sql)
+column_names = []
+for column in result:
+ column_names.append(column[0])
+And then I use that column listing in the read command:
+source_data = pd.read_excel(full_path, sheet_name=sheet_name, skiprows=ignore_rows, header=None, names=column_names)
+The to_sql statement then runs without error.",0.0,False,2,7136
+2020-11-24 18:45:59.360,Getting skeletal data in pykinect (xbox 360 version),"I'm having trouble finding any sort of documentation or instruction for pykinect, specifically for the xbox 360 version of the kinect. How do I get skeletal data, or where do I find the docs? If I wasn't clear here, please let me know!","To use python with the kinect 360 you need the following:
+python 2.7
+windows kinect sdk 1.8
+pykinect - NOT pykinect2",-0.3869120172231254,False,1,7137
+2020-11-25 09:51:23.410,How to implement a MIDI keyboard into python,"Looking to create a GUI based 25-key keyboard using PyQt5, which can support MIDI controller keyboards. However, I don't know where to start (what libraries should I use, and how do I go about finding a universal method of supporting all MIDI controller keyboards?). I plan to potentially use the Mido library, or PyUSB, but I am still confused as to how to make this all function. Any starting guides would be much appreciated.","MIDI is a universal standard shared by all manufacturers, so you don't have to worry about ""supporting all MIDI controller keyboards""; you just have to worry about supporting the MIDI studio of your system.
+You'll have to scan your environment to get the existing MIDI ports. With the list of existing ports you can let the user choose to which port he wants to send the events generated by your keyboard and/or from which port he wants to receive events that will animate the keyboard (for instance from a physical MIDI keyboard connected to your computer), possibly all available input ports.
+To support input events, you'll need a kind of callback prepared to receive the incoming note-on and note-off events (which are the main relevant messages for a keyboard) at any time.
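+(A minimal Mido sketch of such a callback; opening the system default input port is an assumption, pass a port name to open_input() otherwise:)
+import mido
+def on_message(msg):
+    if msg.type in ('note_on', 'note_off'):
+        print(msg.note, msg.velocity)
+port = mido.open_input(callback=on_message)  # opens the default input port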
That also means that you have to filter out the received events that are not of those types because, in MIDI, a stream of events is likely to contain many kinds of other events mixed with the notes (pitch bend, controllers, program change, and so on).
+Finally, notice that MIDI doesn't produce any sound by itself. So if you plan to hear something when you play on your keyboard, the produced MIDI events should be sent to a device that produces the sound (for instance a synthesizer or virtual instrument) via a port that this device listens on.
+For the library, Mido seems to be a pretty good choice: it has all the features needed for such a project.",0.6730655149877884,False,1,7138
+2020-11-25 11:44:47.813,flask / flask_restful : calling routes in one blueprint from another route in a different blueprint,"I'm working on a very basic Web Application (built using flask and flask_restful) with unrelated views split into different blueprints.
+Different blueprints deal with different instances of a class.
+Now I want to design a page with the status (properties and values) of all the classes these blueprints are dealing with. The page is a kind of control panel of sorts.
+For this I want to call all the status routes (defined by me) in different blueprints from a single route (the status page route) in a different blueprint. I have been searching for a while on how to make internal calls in Flask / Flask_restful, but haven't found anything specifically for this. So....

I would love to find out how to make these internal calls.
Also, is there any problem or convention against making internal calls?
I also thought of making use of the Requests module, but that feels more like a hack. Is this the only option I have? If yes, is there a way I don't have to hard-code the url, like using something close to url_for() in flask?

Thanks.. :)","I would love to find out how to make these internal calls.

Ans: use url_for() or the Requests module, as you do for any other post or get method.

Also, is there any problem or convention against making internal calls?

Ans: I didn't find any, even after intensive searching.

I also thought of making use of the Requests module, but that feels more like a hack. Is this the only option I have? If yes, is there a way I don't have to hard-code the url, like using something close to url_for() in flask?

Ans: If you don't want to use the Requests module, url_for() is the simplest and cleanest option there is. Otherwise a hard-coded path is the only option.",1.2,True,1,7139
+2020-11-25 19:10:03.817,"When doing runserver, keep getting new data loaded in my database","Every time I do a python manage.py runserver and I load the site, python gets data and puts it in my database. Even when I have already filled some info into the database, enough to get a view of what I am working on.
+Now it is not loading the information I want, and instead it is putting new information into the database so it can work with some data.
+What is the reason my data in the database is not being processed?
And how do I stop new data from being loaded into the database?","Maybe it is happening due to the migration files. Sometimes, when you migrate models into database query language with the same number, e.g.
+python manage.py makemigrations 0001
+this ""0001"" has to be changed every time.
+To solve your problem, delete the migrations file once and then migrate all models again, and then try.
+Tell me if this works",0.0,False,1,7140
+2020-11-26 13:38:11.537,How to find the stitch (seam) position between two images with OpenCV?,"I find many examples of passing a list of images and returning a stitched image, but not much information about how these images have been stitched together.
+In a project, we have a camera fixed still, pointing down, and conveyors pass underneath. The program detects objects and starts recording images. However some objects do not enter completely in the image, so we need to capture multiple images and stitch them together, but we need to know the position of the stitch within the stitched image, because there are other sensors synchronized with the captured image, and we need to also synchronize their readings within the stitched image (i.e. we know where the reading is within each single capture, but not once captures are stitched together).
+In short, given a list of images, how can we find the coordinates of each image relative to the others?","Basically, while stitching, correspondences between two (or more) images are set up. This is done with some fixed key points. After finding those key points, the images are warped or transformed & put together, i.e. stitched.
+Now those key points could be set/noted as per a global coordinate system (containing all images). Then one can get the position after stitching too.",0.0,False,1,7141
+2020-11-27 03:21:57.860,Unable to change data types of certain columns read from xslx and by Pandas,"I import an Excel file with pandas, and then I try to convert all columns to float64 for further manipulation. I have several columns that have a type like:
+0
+column_name_1 float64
+column_name_1 float64
+dtype: object
+and it is unable to do any calculations. May I ask how I could change this column type to float64?","I just solved it yesterday; it is because I have two identical columns in the DataFrame. When I try to access pd['something'], it automatically combines the two columns together, and then it becomes an object instead of float64",0.0,False,1,7142
+2020-11-28 07:18:31.807,How to update a py-made exe file from my pc for people I have sent it to?,"What I mean is that I have a py file which I have converted to an exe file. So I wanted to know: in case I decide to update the py file, how do I make the same changes occur in the copies I have sent to other people, whether the exe or py file?","Put your version of the program on a file share, or make it otherwise available on the internet, and build an update check into the program, so that it checks the URL for a new version every time it is started.
+I guess this is the most common way to do something like that.",0.0,False,1,7143
+2020-11-29 05:47:22.927,Is there any way to return the turtle object that is clicked?,"I'm making a matching game where there are several cards facing upside down and the user has to match the right pairs. The cards facing upside down are all turtle objects.
+For e.g., if there are 8 face-down cards, there are 8 turtle objects.
+I'm having some trouble figuring out how to select the cards, since I don't know which turtle is associated with the particular card selected by the user.
I do have a nested list containing all the turtles, and those with similar images are grouped together. Is there any way to return the turtle object selected by the user?","If I got your question, one way to do so is to give each turtle some id attribute which identifies it. Then you can easily check which turtle was selected by the user.",0.0,False,1,7144 +2020-11-29 10:11:25.313,NativeScript can't find six,"I installed NativeScript successfully and it works when running ns run android. +However, when I try to use ns run ios I get the ominous WARNING: The Python 'six' package not found. error. +The same happens when I try to use ns doctor. +I tried EVERYTHING that I found on the web: setting PATH, PYTHONPATH, reinstalling python, six and everything - nothing helped. +Reinstalling six tells me Requirement already satisfied. +Any ideas how to make this work? +I'm on macOS Catalina.","It seems I have a total mess of paths and Python installations on my Mac. +I found something like 6 different pip paths and 4 different python paths. +Since I have no idea which ones I can delete, I tried installing six with all the pip versions I found, and that helped. +How to clean up this mess is likely a subject for another thread :)",1.2,True,1,7145 +2020-12-01 03:18:45.860,I have different Excel files in the same folder,"I have different Excel files in the same folder, and each of them contains the same sheets. I need to select the last sheet of each file and join them all by the columns (that is, form a single table). The columns of all files are named the same. I think the approach is to identify the dataframe of each file and then concatenate them, but I do not know how.","Just do what Recessive said and use a for loop to read the Excel files one by one: +excel_files = os.listdir(filepath) +for file in excel_files: read the Excel file's sheet and save the relevant columns to a variable +After the loop, concatenate the columns from the different variables into one dataframe",0.0,False,1,7146 +2020-12-01 16:05:05.240,Adding more parameters to smtplib.SMTP in Python,"I'm trying to make a script that sends an email with Python using smtplib; almost all examples I found while googling show how to call this function with only the smtpserver and port parameters. +I want to add other parameters: domain and binding IP. +I tried this: server = smtplib.SMTP(smtpserver, 25, 'mydomain.com', 5, 'myServerIP') +I got this error: TypeError: __init__() takes at most 5 arguments (6 given) +Can you suggest a way to do this?","This error is because one parameter too many is passed positionally. Per the smtplib docs, the constructor is SMTP(host, port, local_hostname, timeout, source_address), and the binding IP belongs in source_address as a (host, port) tuple - e.g. smtplib.SMTP(smtpserver, 25, 'mydomain.com', source_address=('myServerIP', 0))",0.0,False,1,7147 +2020-12-02 00:32:33.690,How can I delete several lines of code at the same time in Jupyter Notebook?,I want to delete/tab several lines of code at the same time in Jupyter Notebook. How can I do that? Are there hotkeys for that?,"While in the notebook, click to the left of the grey input box where it says In []: (you'll see the highlight color go from green to blue). +While it's blue, hold down Shift and use your arrow keys to select the rows above or below. +Press D twice. +Click back into the cell and the highlight will turn back to green.",0.3869120172231254,False,1,7148 +2020-12-02 03:27:38.637,Python: compute columns of data frames and add them as new columns,"I want to make a new column by calculating existing columns. +For example, df: +df +no data1 data2 +1 10 15 +2 51 46 +3 36 20 +......
+I want to make this: +new_df +no data1 data2 data1/-2 data1/2 data2/-2 data2/2 +1 10 15 -5 5 -7.5 7.5 +2 51 46 -25.5 25.5 -23 23 +3 36 20 -18 18 -9 9 +but I don't know how to make this as efficient as possible","To create a new df column based on calculations of two or more other columns, you define a new column and set it equal to your expression. For example: +df['new_col'] = df['col_1'] * df['col_2']",0.0,False,1,7149 +2020-12-02 08:33:53.520,How to decrypt a Django pbkdf2_sha256 algorithm password?,"I need the user_password plaintext using Django. I tried many ways to get the plaintext of user_password, but it's not working. So I analyzed how the Django user password is generated: it uses the make_password method in the Django core, which generates the hashed code using the pbkdf2_sha256 algorithm. Is it possible to decrypt the password? +Example: +pbkdf2_sha256$150000$O9hNDLwzBc7r$RzJPG76Vki36xEflUPKn37jYI3xRbbf6MTPrWbjFrgQ=","As you have already seen, Django uses a hashing method, SHA256 in this case. Hashing is basically a lossy, one-way transformation, so there is no way to decrypt hashed messages - they are irreversible. It is not encryption, and there is no backward operation like decryption. It is safe to store passwords in hashed form, as only the creator of the password should know the original password, and the backend system just compares the hashes. +This is the normal situation for most backend frameworks, and it is done for security reasons. Passwords are hashed and saved in the database so that even if a malicious user gets access to the database, he can't find useful information there, or it will be really hard to crack the hashes even with some huge word dictionary.",1.2,True,1,7150 +2020-12-02 10:02:42.763,Find the answer to a TCP packet in a PCAP with scapy,"I parse a pcap file with scapy in Python, and there is a TCP packet in that pcap for which I want to know the answer. How can I do that? +For example, in a client and server TCP stream: +client -> server: ""hi"" +server -> client: ""how are you"" +When I get the ""hi"" packet (with scapy), how can I get ""how are you""?","Look at the TCP sequence number of the message from the client. Call this SeqC. +Then look for the first message from the server whose TCP acknowledgement number is higher than SeqC (usually it will be equal to SeqC plus the size of the client's TCP payload). Call this PacketS1. +Starting with PacketS1, collect the TCP payloads from all packets until you see a packet sent by the server with the TCP PSH (push) flag set. This suggests the end of the application-layer message. Call these payloads PayloadS1 to PayloadSN. +Concatenate PayloadS1 to PayloadSN. This is the likely application-layer response to the client message.",0.6730655149877884,False,1,7151 +2020-12-02 14:42:06.810,How do I keep changes made within a Python GUI?,"For example, if a button click turns the background blue, or changes the button's text, how do I make sure that change stays even after I go to other frames?","One way to go is to create a configuration file (e.g. conf.ini) where you store your changes and from which you apply them to other dialogs.
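+As a minimal sketch of that idea with the standard library's configparser (the file name, section, and keys here are just examples):
+import configparser
+config = configparser.ConfigParser()
+config['appearance'] = {'background': 'blue', 'button_text': 'Clicked!'}
+with open('conf.ini', 'w') as f:
+    config.write(f)  # persist the current GUI state
+# later, or in another frame, read it back:
+config2 = configparser.ConfigParser()
+config2.read('conf.ini')
+bg = config2['appearance']['background']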
This will allow you to keep the changes after the app is restarted.",0.0,False,1,7152 +2020-12-04 09:56:10.630,Raspberry Pi using a webcam to output to a website to view,"I am currently working on a project in which I am using a webcam attached to a Raspberry Pi to show what the camera is seeing on a website, using a client and web server based method in Python. However, I need to know how to link the Raspberry Pi to a website to output what it sees through the camera while also outputting it through the Python script, and I don't know where to start. +If anyone could help me I would really appreciate it. +Many thanks.","One way to do this with Python would be to capture the camera image using OpenCV in a loop and display it on a website hosted on the Pi using a Python frontend like Flask (or some other frontend). However, as others have pointed out, the latency on this would be so bad that any processing you wish to do would be nearly impossible. +If you wish to do this without Python, take a look at mjpg-streamer, which can pull a video feed from an attached camera and display it on a localhost website. The quality is fairly good on localhost. You can then forward this to the web (if needed) using port forwarding or an application like nginx. +If you want to split the recorded stream into 2 (to forward one to Python and to broadcast another to a website), ffmpeg is your best bet, but the FPS and quality would likely be terrible.",0.0,False,1,7153 +2020-12-04 10:21:30.123,"Does a python mne raw object represent a single trial? If so, how to average across many trials?","I'm new to python MNE and EEG data in general. +From what I understand, an MNE raw object represents a single trial (with many channels). Am I correct? What is the best way to average data across many trials? +Also, I'm not quite sure what mne.Epochs().average() represents. Can anyone please explain? +Thanks a lot.","From what I understand, an MNE raw object represents a single trial (with many channels). Am I correct? + +An MNE raw object represents a whole EEG recording. If you want to separate the recording into several trials, then you have to transform the raw object into an ""epochs"" object (with mne.Epochs()). You will receive an object with the shape (n_epochs, n_channels, n_times). + +What is the best way to average data across many trials? Also, I'm not quite sure what mne.Epochs().average() represents. Can anyone please explain? + +About ""mne.Epochs().average()"": if you have an ""epochs"" object and want to combine the data of all trials into one whole recording again (for example, after you performed certain pre-processing steps on the single trials or removed some of them), then you can use the average method of the class. Depending on the method you choose, you can calculate the mean or median across all trials for each channel and obtain an object with the shape (n_channels, n_times). +Not quite sure about the best way to average the data across the trials, but with mne.Epochs.average you should be able to do it with ease. (Personally, I always calculated the mean across all my trials for each channel, but I guess that depends on the problem you are trying to solve.)
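+A small sketch of that workflow (raw is assumed to be an already-loaded Raw object; the time window is a placeholder):
+import mne
+events = mne.find_events(raw)  # raw: an existing mne.io.Raw recording
+epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5)  # cut the recording into trials
+evoked = epochs.average(method='mean')  # combine trials -> shape (n_channels, n_times)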
",1.2,True,1,7154 +2020-12-05 19:15:10.533,How can I have a 2D bounding box on a sequence of RGBD frames from a 3D bounding box in point clouds?,"I have a 3D point cloud of my object, built with the Open3D reconstruction system (which makes point clouds from a sequence of RGBD frames), and I created a 3D bounding box on the object in the point cloud. +My question is: how can I get a 2D bounding box on all of the RGB frames at the same coordinates as the 3D bounding box? +My idea is to project the 3D bounding box to a 2D one, but since the position of the object is different in each frame, I do not know how to use this approach. +I appreciate any help or solution, thanks.","Calculate points for the eight corners of your box, transform those points from the world frame into your chosen camera frame, then project the points and apply lens distortion if needed. +OpenCV has functions for some of these operations and supports you with matrix math for the rest. +I would guess that Open3D gives you pose matrices for all the cameras. You use those to transform from the world coordinate frame to any camera's frame.",1.2,True,1,7155 +2020-12-05 23:26:35.533,Create a schedule where a group of people all talk to each other - with restrictions,"Problem statement +I would like to achieve the following (which could be used, for example, to organize some sort of speed-dating event for students): +Create a schedule so people talk to each other one-on-one, and do this with each member of the group - but with restrictions. + +Input: list of people (e.g. 30 people) +Restrictions: some of the people should not talk to each other (e.g. they know each other) +Output: list of pairs (separated into sessions); just one solution is OK, no need to know all of the possible outcomes + +Example +E.g. a group of 4 people: + +John +Steve +Mark +Melissa + +Restrictions: John - Melissa -> NO +Outcome +Session one + +John - Steve +Mark - Melissa + +Session two + +John - Mark +Steve - Melissa + +Session three + +Steve - Mark + +John and Melissa will not join session three, as their pairing is restricted. +Question +Is there a way to approach this using Python or even Excel? +I am especially looking for pointers on what this problem is called, as I assume it is a known problem. Should I look towards some solver? Dynamic programming, etc.?","Your given information is pretty generous: you have a set of all the students, and a set of no-go pairs (since you said it yourself, and it makes it easy to explain, just say this is the set of pairs of students who know each other). So we can iterate through our student list creating random pairings so long as they do not exist in our no-go set, then expand our no-go set with them, and recurse on the remaining students until we cannot create any pairs that do not already exist in the no-go set (at which point every student has met every other allowed student).",0.0,False,1,7156 +2020-12-06 10:22:21.857,Is there any way to know the command-line options available for a separate program from Python?,"I am relatively new to Python's subprocess and os modules. I was able to do process execution, like running the bc and cat commands with Python, putting data into stdin and taking the result from stdout. +Now I first want to know what flags a process like cat accepts, from Python code (if that is possible). +Then I want to execute a particular command with some flags set. +I googled both things, and it seems I found a solution for the second one, but with multiple ways.
+So, if anyone knows how to do these things in some standard kind of way, it would be much appreciated.","In the context of processes, those flags are called arguments, hence also the argument vector called argv. Their interpretation is 100% up to the called program. In other words, you have to read the man pages or other documentation for the programs you want to call. +There is one caveat though: if you don't invoke a program directly but via a shell, that shell is the actual process being started. It then also interprets wildcards. For example, if you run cat with the argument vector ['*'], it will output the content of the file named * if it exists, or an error if it doesn't. If you run /bin/sh with ['-c', 'cat *'], the shell will first resolve * into all entries in the current directory and then pass these as separate arguments to cat.",1.2,True,1,7157 +2020-12-06 10:45:49.563,Pandas: How to calculate the percentage of one column against another?,"I am just trying to calculate the percentage of one column against another's total, but I am unsure how to do this in Pandas so the calculation gets added into a new column. +Let's say, for argument's sake, my data frame has two attributes: + +Number of Green Marbles +Total Number of Marbles + +Now, how would I calculate the percentage of the Number of Green Marbles out of the Total Number of Marbles in Pandas? +Obviously, I know that the calculation will be something like this: + +(Number of Green Marbles / Total Number of Marbles) * 100 + +Thanks - any help is much appreciated!",df['percentage'] = df['Number of Green Marbles'] / df['Total Number of Marbles'] * 100,0.0,False,1,7158 +2020-12-06 15:58:58.593,int to str in Python removes leading 0s,"Right now, I'm making a sudoku solver. You don't really need to know how it works, but one of the checks I make so the solver doesn't break is to check if the string passed (the sudoku board) is 81 characters (a 9x9 sudoku board). An example of the board would be: ""000000000000000000000000000384000000000000000000000000000000000000000000000000002"" +This is a sudoku that I've wanted to try since it only has 4 numbers. But basically, when converting the number to a string, it removes all the '0's up until the '384'. Does anyone know how I can stop this from happening?","There is no way to prevent it from happening, because that is not what is happening. Integers cannot remember leading zeroes, and something that does not exist cannot be removed. The loss of zeroes does not happen at the conversion of int to string, but at the point where you parse the character sequence into a number in the first place. +The solution: keep the input as a string until you don't need the original formatting any more.",1.2,True,1,7159 +2020-12-06 18:29:12.933,How does urllib3 determine which TLS extensions to use?,"I'd like to modify the extensions that I send in the ClientHello packet with Python. +I've read most of the source code found on GitHub for urllib3, but I still don't know how it determines which TLS extensions to use. +I am aware that it will be quite low level and the creators of urllib3 may just import another package to do this for them. If this is the case, which package do they use? +If not, how is this determined? +Thanks in advance for any assistance.","The HTTPS support in urllib3 uses the ssl package, which uses the OpenSSL C library. ssl does not provide any way to directly fiddle with the TLS extensions, except for setting the hostname in the TLS handshake (i.e. the server_name extension, aka SNI).
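+For illustration, a minimal sketch of that one knob (the hostname is a placeholder):
+import socket, ssl
+ctx = ssl.create_default_context()
+with socket.create_connection(('example.com', 443)) as sock:
+    # server_hostname sets the server_name (SNI) extension in the ClientHello
+    with ctx.wrap_socket(sock, server_hostname='example.com') as tls:
+        print(tls.version())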
",1.2,True,1,7160 +2020-12-07 22:29:46.250,tkinter in PyCharm (Python version 3.8.6),"I'm using PyCharm on Windows 10. +Python version: 3.8.6 +I've checked in CMD whether I have tkinter installed (python -m tkinter). It says I have version 8.6. +Tried: + +import tkinter. +I get ""No module named 'tkinter'"" + +from tkinter import *. +I get ""Unresolved reference 'tkinter'"" + +Installed the future package, but that didn't seem to change the errors. + +Any suggestions on how to fix this issue? +Thank you!","Just verify the interpreter in the project settings; sometimes PyCharm doesn't use the same interpreter.",-0.2012947653214861,False,2,7161 +2020-12-07 22:29:46.250,tkinter in PyCharm (Python version 3.8.6),"I'm using PyCharm on Windows 10. +Python version: 3.8.6 +I've checked in CMD whether I have tkinter installed (python -m tkinter). It says I have version 8.6. +Tried: + +import tkinter. +I get ""No module named 'tkinter'"" + +from tkinter import *. +I get ""Unresolved reference 'tkinter'"" + +Installed the future package, but that didn't seem to change the errors. + +Any suggestions on how to fix this issue? +Thank you!","You can try ""pip install tkinter"" in cmd",-0.2012947653214861,False,2,7161 +2020-12-07 23:17:05.743,How to convert a string to a list?,"I have a string like: string = ""[1, 2, 3]"" +I need to convert it to a list like: [1, 2, 3] +I've tried using regular expressions for this purpose, but to no avail","Try +[int(x) for x in string.strip(""[]"").split("", "")], or if your numbers are floats you can do [float(x) for x in string.strip(""[]"").split("", "")]",0.2655860252697744,False,1,7162 +2020-12-08 14:02:34.340,2D numpy array showing as 1D,"I have a numpy ndarray train_data of length 200, where every row is another ndarray of length 10304. +However, when I print np.shape(train_data), I get (200, 1); when I print np.shape(train_data[0]) I get (1, ); and when I print np.shape(train_data[0][0]) I get (10304, ). +I am quite confused by this behavior, as I supposed the first np.shape(train_data) should return (200, 10304). +Can someone explain to me why this is happening, and how I could get the array into the shape (200, 10304)?","I'm not sure why that's happening; try reshaping the array: +B = np.reshape(A, (-1, 2))",0.0,False,1,7163 +2020-12-08 16:51:13.820,Multiple threads sending over one socket simultaneously?,"I have two Python programs. Program 1 displays videos in a grid with multiple controls on it, and Program 2 performs manipulations on the images and sends them back depending on the control pressed in Program 1. +Each video in the grid is running in its own thread, and each video has a thread in Program 2 for sending results back. +I'm running this on the same machine, though, and I was unable to get multiple socket connections working to and from the same address (localhost). If there's a way of doing that - please stop reading and tell me how! +I currently have one socket sitting independent of all of my video threads in Program 1, and in Program 2 I have multiple threads sending data to the one socket in an array with a flag for which video the data is for. The problem is that when I have multiple threads sending data at the same time, it seems to scramble things and stop working. Any tips on how I can achieve this?","Regarding ""If there's a way of doing that - please stop reading and tell me how!"": +There's a way of doing it, assuming you are on Linux or using WSL on Windows: you could use the hostname -I command, which will output an IP that looks like 192.168.X.X. +You can use that IP in your Python program by binding your server to that IP instead of localhost or 127.0.0.1.
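+As a minimal sketch (the IP below stands in for whatever hostname -I prints on your machine):
+import socket
+server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+server.bind(('192.168.1.20', 5000))  # bind to the LAN IP instead of 127.0.0.1
+server.listen()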
",0.0,False,1,7164 +2020-12-08 20:00:28.467,"Grabbing values (Name, Address, Phone, etc.) from directory websites like TruePeopleSearch.com with Chrome Developer Tools","Good day everybody. I'm still learning to parse data with Python, and I'm now trying to familiarize myself with Chrome Developer Tools. My question is: when inspecting a directory website like TruePeopleSearch.com, how do I copy or view the variables that hold data such as Name, Phone, and Address? I tried browsing the tool, but since I'm new to the Developer Tools, I'm quite lost in all the data. I would appreciate it if the experts here could point me in the right direction. +Thank you all!","Upon further navigating the Developer Console, I learned that these strings are located in the following variables, found by copying the JS paths. +NAME & AGE +document.querySelector(""#personDetails > div:nth-child(1)"").innerText +ADDRESS +document.querySelector(""#personDetails > div:nth-child(4)"").innerText +PHONE NUMBERS +document.querySelector(""#personDetails > div:nth-child(6)"").innerText +STEP 1 +From the website, highlight the area that you need to inspect and click ""Inspect Element"" +STEP 2 +Under Elements, right-click the highlighted part and copy the JS path +STEP 3 +Navigate to the console, paste the JS path, add .innerText and press Enter",0.0,False,1,7165 +2020-12-09 07:30:40.480,Can you plot the accuracy graph of a pre-trained model? Deep Learning,"I am new to Deep Learning. I finished training a model that took 8 hours to run, but I forgot to plot the accuracy graph before closing the Jupyter notebook. +I need to plot the graph, and I did save the model to my hard disk. But how do I plot the accuracy graph of a pre-trained model? I searched online for solutions and came up empty. +Any help would be appreciated! Thanks!","What kind of framework did you use, and which version? For future problems you may face, this information can play a key role in the way we can help you. +Unfortunately, for PyTorch/TensorFlow the model you saved is likely to contain only the weights of the neurons, not the training history. Once the Jupyter notebook is closed, the memory is cleaned (and with it, the data of your training history). +The only thing you can extract is the final loss/accuracy you had. +However, if you regularly saved versions of the model, you can load them and manually compute the accuracy/loss that you need. Then you can use matplotlib to reconstruct the graph. +I understand this is probably not the answer you were looking for. However, if the hardware is yours, I would recommend you restart the training. 8h is not that much to train a model in deep learning.",0.0,False,1,7166 +2020-12-09 13:03:41.490,"How do I handle communication between object instances, or between modules?","I appear to be missing some fundamental Python concept that is so simple that no one ever talks about it. I apologize in advance for likely using an improper description - I probably don't know enough to ask the question correctly. +Here is the conceptual dead end I have arrived at: +I have an instance of class Net, which handles communicating with some things over the internet.
+I have an instance of class Process, which does a bunch of processing and data management. +I have an instance of class Gui, which handles the GUI. +The Gui instance needs access to the Net and Process instances, as the callbacks from its widgets call those methods, among other things. +The Net and Process instances need access to some of the Gui instance's methods, as they occasionally need to display stuff (what they're doing, results of queries, etc.). +How do I manage it so these things talk to each other? Inheritance doesn't work - I need the instance, not the class. Besides, inheritance is one-way, not two-way. +I can obviously instantiate the Gui and then pass it (as an object) to the others when they are instantiated. But the Gui then won't know about the Process and Net instances. I can of course then manually pass the Net and Process instances to the Gui instance after creation, but that seems like a hack, not proper practice. Also, the number of interdependencies I have to manually pass along grows rather quickly (almost factorially?) with the number of objects involved - so I expect this is not the correct strategy. +I arrived at this dead end after trying the same thing with normal functions, where I am more comfortable. Due to their size, the similarly grouped functions lived in separate modules, again Net, Gui, and Process. The problem was exactly the same. A 'parent' module imports 'child' modules and can then call their methods. But how do the child modules call the parent module's methods, and how do they call each other's methods? Having everything import everything seems fraught with peril, and again seems to explode as more objects are involved. +So what am I missing in organizing my code, such that I run into this problem where apparently all other Python users do not?","The answer to this is insanely simple. +Anything that needs to be globally available to other modules can be stored in its own module, e.g. global_param. Every other module can import global_param and then use and modify its contents as needed. This avoids any issues with circular importing as well. +Not sure why it took me so long to figure this out...",0.3869120172231254,False,1,7167 +2020-12-09 18:38:18.553,"On a single GPU, can TensorFlow train a model which is larger than GPU memory?","If I have a single GPU with 8GB RAM and I have a TensorFlow model (excluding training/validation data) that is 10GB, can TensorFlow train the model? +If yes, how does TensorFlow do this? +Notes: + +I'm not looking for distributed GPU training. I want to know about the single GPU case. +I'm not concerned about the training/validation data sizes.","No, you cannot train a model larger than your GPU's memory. (There may be some ways, with dropout or the like, that I am not aware of, but in general it is not advised.) Furthermore, you would need even more memory than the parameters alone, because your GPU needs to retain the parameters along with the derivatives for each step to do backprop. +Not to mention the smaller batch size this would require, as there is less space left for the dataset.",0.0,False,1,7168 +2020-12-09 19:13:03.913,How would I use a bot to send multiple reactions to one message? Discord.py,"This is kind of a dumb question, but how would I make a discord.py event automatically react to a message with a bunch of different default Discord emojis at once? I am new to discord.py.","You have to use the on_message event. It's a built-in discord.py event that fires automatically for every message.
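+A rough sketch of that event (discord.py 1.x style; the emoji list and prefix are illustrative):
+from discord.ext import commands
+bot = commands.Bot(command_prefix='!')
+@bot.event
+async def on_message(message):
+    if message.author == bot.user:  # ignore the bot's own messages
+        return
+    for emoji in ['👍', '🎉', '🔥']:
+        await message.add_reaction(emoji)  # one reaction per emoji, in order
+# bot.run('YOUR-TOKEN')  # token is a placeholder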
",0.0,False,1,7169 +2020-12-10 05:08:39.017,How can I get my server to UDP multicast to clients across the internet? Do I need a special multicast IP address?,"I am creating a multiplayer game, and I would like the communication between my server program (written in Python) and the clients (written in C# - Unity) to happen via UDP sockets. +I recently came across the concept of UDP multicast, and it sounds like it could be much better for my use case than UDP unicast, because my server needs to update all of the clients (players) with the same content every interval. So, rather than sending multiple identical packets to all the clients with UDP unicast, I would like to be able to send only one packet to all the clients using multicast, which sounds much more efficient. +I am new to multicasting and my questions are: +How can I get my server to multicast to clients across the internet? +Do I need my server to have a special public multicast IP address? If so, how do I get one? +Is it even possible to multicast across the internet, or is multicasting available only within my LAN? +And what are the pros and cons of taking the multicast approach? +Thank you all for your help in advance!","You can't multicast on the Internet. Full stop. +Basically, multicast is only designed to work when there's someone in charge of the whole network to set it up. As you noted, that person needs to assign the multicast IP addresses, for example.",1.2,True,1,7170 +2020-12-10 07:37:54.630,Create symlink on a network drive to a file on the same network drive (Win10),"Problem statement: +I have a Python 3.8.5 script running on Windows 10 that processes large files from multiple locations on a network drive and creates .png files containing graphs of the analyzed results. The graphs are all stored in a single destination folder on the same network drive. It looks something like this: +Source files: +\\drive\src1\src1.txt +\\drive\src2\src2.txt +\\drive\src3\src3.txt +Output folder: +\\drive\dest\out1.png +\\drive\dest\out2.png +\\drive\dest\out3.png +Occasionally we need to replot the original source file and examine a portion of the data trace in detail. This involves hunting for the source file in the right folder. The source file names are longish alphanumerical strings, so this process is tedious. In order to make it less tedious, I would like to create symlinks to the original source files and save them side by side with the .png files. The output folder would then look like this: +Output files: +\\drive\dest\out1.png +\\drive\dest\out1_src.txt +\\drive\dest\out2.png +\\drive\dest\out2_src.txt +\\drive\dest\out3.png +\\drive\dest\out3_src.txt +where \\drive\dest\out1_src.txt is a symlink to \\drive\src1\src1.txt, etc. +I am attempting to accomplish this via +os.symlink('//drive/dest/out1_src.txt', '//drive/src1/src1.txt') +However, no matter what I try I get + +PermissionError: [WinError 5] Access is denied + +I have tried running the script from an elevated shell, enabling Developer Mode, and running +fsutil behavior set SymlinkEvaluation R2R:1 +fsutil behavior set SymlinkEvaluation R2L:1 +but nothing seems to work. There is absolutely no problem creating the symlinks on a local drive, e.g. +os.symlink('C:/dest/out1_src.txt', '//drive/src1/src1.txt') +but that does not accomplish my goals. I have also tried creating links on the local drive per the above and then copying them to the network location with
I have also tried creading links on the local drive per above then then copying them to the network location with +shutil.copy(src, dest, follow_symlinks = False) +and it fails with the same error message. Attempts to accomplish the same thing directly in the shell from an elevated shell also fail with the same ""Access is denied"" error message +mklink \\drive\dest\out1_src.txt \\drive\src1\src1.txt +It seems to be some type of a windows permission error. However when I run fsutil behavior query SymlinkEvaluation in the shell I get + +Local to local symbolic links are enabled. +Local to remote symbolic links are enabled. +Remote to local symbolic links are enabled. +Remote to remote symbolic links are enabled. + +Any idea how to resolve this? I have been googling for hours and according to everything I am reading it should work, except that it does not.","Open secpol.msc on PC where the newtork share is hosted, navigate to Local Policies - User Rights Assignment - Create symbolic links and add account you use to connect to the network share. You need to logoff from shared folder (Control Panel - All Control Panel Items - Credential Manager or maybe you have to reboot both computers) and try again.",0.0,False,1,7171 +2020-12-11 11:57:46.063,How to downgrade python from 3.9.0 to 3.6,"I'm trying to install PyAudio but it needs a Python 3.6 installation and I only have Python 3.9 installed. I tried to switch using brew and pyenv but it doesn't work. +Does anyone know how to solve this problem?","You may install multiple versions of the same major python 3.x version, as long as the minor version is different in this case x here refers to the minor version, and you could delete the no longer needed version at anytime since they are kept separate from each other. +so go ahead and install python 3.6 since it's a different minor from 3.9, and you could then delete 3.9 if you would like to since it would be used over 3.6 by the system, unless you are going to specify the version you wanna run.",1.2,True,1,7172 +2020-12-11 16:40:32.080,Running functions siultaneoulsy in python,"I am making a small program in which I need a few functions to check for something in the background. +I used module threading and all those functions indeed run simultaneously and everything works perfectly until I start adding more functions. As the threading module makes new threads, they all stay within the same process, so when I add more, they start slowing each other down. +The problem is not with the CPU as it's utilization never reaches 100% (i5-4460). I also tried the multiprocessing module which creates a new process for function, but then it seems that variables can't be shared between different processes or I don't know how. (newly started process for each function seems to take existing variables with itself, but my main program cannot access any of the changes that function in the separate process makes or even new variables it creates) +I tried using the global keyword but it seems to have no effect in multiprocessing as it does in threading. +How could I solve this problem? +I am pretty sure that I have to create new processes for those background functions but I need to get some feedback from them and that part I don't know to solve.",I ended up using multiprocessing Value,1.2,True,1,7173 +2020-12-11 21:06:25.180,Python not using proper pip,"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. 
Now the python command is unknown to the system, while the commands python3 and python3.7 both use Python 3.7. +All good until now, but I can't seem to get pip working. +The command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. The only pip command that works is pip3. +Now, whatever package I install via pip3 does not seem to install properly. For example, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. The same cannot be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7. +Please tell me how I can use pip with Python 3.7, thank you.","It looks like your python3.7 does not have pip. +Install pip for your specific python by running python3.7 -m easy_install pip. +Then, install packages with python3.7 -m pip install <package-name> +Another option is to create a virtual environment from your python3.7. The venv brings pip into it by default. +You create the venv with python3.7 -m venv <env-directory>",1.2,True,2,7174 +2020-12-11 21:06:25.180,Python not using proper pip,"I'm running CentOS 8, which came with native Python 3.6.8. I needed Python 3.7, so I installed Python 3.7.0 from sources. Now the python command is unknown to the system, while the commands python3 and python3.7 both use Python 3.7. +All good until now, but I can't seem to get pip working. +The command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. The only pip command that works is pip3. +Now, whatever package I install via pip3 does not seem to install properly. For example, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. The same cannot be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7. +Please tell me how I can use pip with Python 3.7, thank you.","I think the packages you install will be installed for the previous version of Python. I think you should update the native OS Python like this: + +Install the python3.7 package using apt-get: +sudo apt-get install python3.7 +Add python3.6 & python3.7 to update-alternatives: +sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1 +sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 2 +Update python3 to point to Python 3.7: +sudo update-alternatives --config python3 +Test the version: +python3 -V",0.0,False,2,7174 +2020-12-13 14:28:27.847,How to communicate with a Cylon BMS controller,"I am trying to communicate with a Cylon device (UC32) via the BACnet protocol (BAC0), but I cannot discover any device. I also tried with Yabe, with no result. +Is there any document describing how to create my communication driver? +Or any technique which can be used to connect to this device?","(Assuming you've set the default gateway address - for it to know where to return its responses - but only if necessary.) +If we start with the assumption that maybe the device is not (by default) listening for broadcasts, or is having some issue sending them - a bug maybe (although probably unlikely) - then you could send a unicast/directed message, e.g. use the Read-Property service to read back the (already known) BOIN (BACnet Object Instance Number) - but you would need a (BACnet) client (application/software) that provides that option, like possibly one of the 'BACnet stack' cmd-line tools, or maybe via the (for the most part) awesome (but advanced-level) 'VTS (Visual Test Shell)' tool.
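+Since the question mentions BAC0, a rough sketch of such a directed read (the address, device instance, and exact read syntax are assumptions - check the BAC0 docs for your version):
+import BAC0
+bacnet = BAC0.lite()  # assumes BAC0 can bind to a suitable local interface
+# unicast Read-Property: '<device address> <object type> <instance> <property>'
+value = bacnet.read('192.168.1.50 device 5 objectName')
+print(value)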
+As much as it might be possible to discover what the device's BOIN (BACnet Object Instance Number) is, it's better if you know it already - as a small few devices might not make it easy to discover - i.e. you might have to resort to a round-robin brute-force approach, firing lots of requests one after the other with only the BOIN incremented by 1, until you receive/see a successful response.",0.3869120172231254,False,1,7175 +2020-12-13 15:07:08.070,Create PM2 Ecosystem File from current processes,"I'm running a few programs (Node.js and Python) on my server (Ubuntu 20.04). I use the PM2 CLI to create and manage processes. Now I want to manage all processes through an ecosystem file. But when I run pm2 ecosystem, it just creates a sample file. I want to save my CURRENT PROCESSES to the ecosystem file and modify it. Does anyone know how to save the current pm2 processes as an ecosystem file?","If you use pm2 save (usually set up together with pm2 startup), pm2 creates a file named ~/.pm2/dump.pm2 with all running processes (though with too many parameters, as it saves the whole environment in the file).
The paths are equal except of '/home/xxx/.local/lib/python3.6/site-packages/IPython/extensions' and '/home/xxx/.ipython'. -Edit 2: I copied the code I used inside of my jupyter and ran it as a normal python file. The model made this way works now inside of jupyter and outside of it. I still wonder why this bug accrued.",It can't be a jupyter problem since jupyter is just an interface to communicate with python. The problem could be that you are using different python environment and different version of lgbm... Check import lightgbm as lgb and lgb.__version__ on both jupyter and your python terminal and make sure there are the same (or check if there has been some major changements between these versions),0.3869120172231254,False,1,6291 -2019-09-08 16:32:01.487,Create Python setup,I have to create a setup screen with tk that starts only at the first boot of the application where you will have to enter names etc ... a sort of setup. Does anyone have any ideas on how to do so that A) is performed only the first time and B) the input can be saved and used in the other scripts? Thanks in advance,"Why not use a file to store the details? You could use a text file or you could use pickle to save a python object then reload it. On starting your application you could check to see if the file exists and contains the necessary information, if it doesn't you can activate your setup screen, if not skip it.",0.3869120172231254,False,1,6292 -2019-09-09 13:09:00.117,What is the best way to combine two data sets that depend on each other?,"I am encountering a task and I am not entirely sure what the best solution is. -I currently have one data set in mongo that I use to display user data on a website, backend is in Python. A different team in the company recently created an API that has additional data that I would let to show along side the user data, and the data from the newly created API is paired to my user data (Shows specific data per user) that I will need to sync up. -I had initially thought of creating a cron job that runs weekly (as the ""other"" API data does not update often) and then taking the information and putting it directly into my data after pairing it up. -A coworker has suggested caching the ""other"" API data and then just returning the ""mixed"" data to display on the website. -What is the best course of action here? Actually adding the data to our data set would allow us to have 1 source of truth and not rely on the other end point, as well as doing less work each time we need the data. Also if we end up needing that information somewhere else in the project, we already have the data in our DB and can just use it directly without needing to re-organize/pair it. -Just looking for general pro's and cons for each solution. Thanks!","Synchronization will always cost more than federation. I would either A) embrace CORS and integrate it in the front-end, or B) create a thin proxy in your Python App. -Which you choose depends on how quickly this API changes, whether you can respond to those changes, and whether you need graceful degradation in case of remote API failure. If it is not mission-critical data, and the API is reliable, just integrate it in the browser. If they support things like HTTP cache-control, all the better, the user's browser will handle it. 
-If the API is not scalable/reliable, then consider putting in a proxy server-side so that you can catch errors and provide graceful degradation.",1.2,True,1,6293 -2019-09-09 20:26:07.763,pandas pd.options.display.max_rows not working as expected,"I’m using pandas 0.25.1 in Jupyter Lab and the maximum number of rows I can display is 10, regardless of what pd.options.display.max_rows is set to. -However, if pd.options.display.max_rows is set to less than 10 it takes effect and if pd.options.display.max_rows = None then all rows show. -Any idea how I can get a pd.options.display.max_rows of more than 10 to take effect?","min_rows displays the number of rows to be displayed from the top (head) and from the bottom (tail) it will be evenly split..despite putting in an odd number. If you only want a set number of rows to be displayed without reading it into the memory, -another way is to use nrows = 'putnumberhere'. -e.g. results = pd.read_csv('ex6.csv', nrows = 5) # display 5 rows from the top 0 - 4 -If the dataframe has about 100 rows and you want to display only the first 5 rows from the top...NO TAIL use .nrows",-0.2012947653214861,False,1,6294 -2019-09-11 00:46:34.683,Using tensorflow object detection for either or detection,"I have used Tensorflow object detection for quite awhile now. I am more of a user, I dont really know how it works. I am wondering is it possible to train it to recognize an object is something and not something? For example, I want to detect cracks on the tiles. Can i use object detection to do so where i show an image of a tile and it can tell me if there is a crack (and also show the location), or it will tell me if there is no crack on the tile? -I have tried to train using pictures with and without defect, using 2 classes (1 for defect and 1 for no defect). But the results keep showing both (if the picture have defect) in 1 picture. Is there a way to show only the one with defect? -Basically i would like to do defect checking. This is a simplistic case of 1 defect. but the actual case will have a few defects. -Thank you.","In case you're only expecting input images of tiles, either with defects or not, you don't need a class for no defect. -The API adds a background class for everything which is not the other classes. -So you simply need to state one class - defect, and tiles which are not detected as such are not defected. -So in your training set - simply give bounding boxes of defects, and no bounding box in case of no defect, and then your model should learn to detect the defects as mentioned above.",1.2,True,1,6295 -2019-09-11 16:52:17.283,How can I find memory leaks without external packages?,"I am writing a data mining script to pull information off of a program called Agisoft PhotoScan for my lab. PhotoScan uses its own Python library (and I'm not sure how to access pip for this particular build), which has caused me a few problems installing other packages. After dragging, dropping, and praying, I've gotten a few packages to work, but I'm still facing a memory leak. If there is no way around it, I can try to install some more packages to weed out the leak, but I'd like to avoid this if possible. -My understanding of Python garbage collection so far is, when an object loses its reference, it should be deleted. I used sys.getrefcount() to check all my variables, but they all stay constant. I have a hunch that the issue could be in the mysql-connector package I installed, or in PhotoScan itself, but I am not sure how to go about testing. 
I will be more than happy to provide code if that will help!","It turns out that the memory leak was indeed with the PhotoScan program. I've worked around it by having a separate script open and close it, running my original script once each time. Thank you all for the help!",0.0,False,1,6296 -2019-09-15 06:56:39.743,Start cmd and run multiple commands in the created cmd instance,"I am trying to start cmd window and then running a chain of cmds in succession one after the other in that cmd window. -something like start cmd /k pipenv shell && py manage.py runserver the start cmd should open a new cmd window, which actually happens, then the pipenv shell should start a virtual environment within that cmd instance, also happens, and the py manage.py runserver should run in the created environment but instead it runs where the script is called. -Any ideas on how I can make this work?","Your py manage.py runserver command calling python executor in your major environment. In your case, you could use pipenv run manage.py runserver that detect your virtual env inside your pipfile and activate it to run your command. An alternative way is to use virtualenv that create virtual env directly inside your project directory and calling envname\Scripts\activate each time you want to run something inside your virtual env.",0.2012947653214861,False,1,6297 -2019-09-15 21:33:55.463,"structured numpy ndarray, how to get values","I have a structured numpy ndarray la = {'val1':0,'val2':1} and I would like to return the vals using the 0 and 1 as keys, so I wish to return val1 when I have 0 and val2 when I have 1 which should have been straightforward however my attempts have failed, as I am not familiar with this structure. -How do I return only the corresponding val, or an array of all vals so that I can read in order?","Just found out that I can use la.tolist() and it returns a dictionary, somehow? when I wanted a list, alas from there on I was able to solve my problem.",0.0,False,1,6298 -2019-09-16 15:19:19.583,impossible to use pip,"I start on python, I try to use mathplotlib on my code but I have an error ""ModuleNotFoundError: No module named 'matplotlib'"" on my cmd. So I have tried to use pip on the cmd: pip install mathplotlib. -But I have an other error ""No python at 'C:...\Microsoft Visual Studio..."" -Actually I don't use microsoft studio anymore so I usinstall it but I think I have to change the path for the pip modul but I don't know how... I add the link of the script of the python folder on the variables environment but it doesn't change anything. How can I use pip ? -Antoine","Your setup seems messed up. A couple of ideas: - -long term solution: Uninstall everything related to Python, make sure your PATH environment variables are clean, and reinstall Python from scratch. -short term solution: Since py seems to work, you could go along with it: py, py -3 -m pip install , and so on. -If you feel comfortable enough you could try to salvage what works by looking at the output of py -0p, this should tell you where are the Python installations that are potentially functional, and you could get rid of the rest.",0.0,False,1,6299 -2019-09-16 16:45:45.577,How to create button based chatbot,"I have created a chatbot using RASA to work with free text and it is working fine. As per my new requirement i need to build button based chatbot which should follow flowchart kind of structure. 
I don't know how to do that what i thought is to convert the flowchart into graph data structure using networkx but i am not sure whether it has that capability. I did search but most of the examples are using dialogue or chat fuel. Can i do it using networkx. -Please help.","Sure, you can. -You just need each button to point to another intent. The payload of each button should point have the /intent_value as its payload and this will cause the NLU to skip evaluation and simply predict the intent. Then you can just bind a trigger to the intent or use the utter_ method. -Hope that helps.",1.2,True,1,6300 -2019-09-16 19:35:35.813,Teradataml: Remove all temporary tables created by Teradata MLE functions,In teradataml how should the user remove temporary tables created by Teradata MLE functions?,At the end of a session call remove_context() to trigger the dropping of tables.,0.0,False,1,6301 -2019-09-17 06:03:09.647,How to inherit controller of a third party module for customization Odoo 12?,"I have a module with a controller and I need to inherit it in a newly created module for some customization. I searched about the controller inheritance in Odoo and I found that we can inherit Odoo's base modules' controllers this way: -from odoo.addons.portal.controllers.portal import CustomerPortal, pager as portal_pager, get_records_pager -but how can I do this for a third party module's controller? In my case, the third party module directory is one step back from my own module's directory. If I should import the class of a third party module controller, how should I do it?","It is not a problem whether you are using a custom module.If the module installed in the database you can import as from odoo.addons. -Eg : from odoo.addons.your_module.controllers.main import MyClass",1.2,True,1,6302 -2019-09-17 13:31:40.087,how to deal with high cardinal categorical feature into numeric for predictive machine learning model?,"I have two columns of having high cardinal categorical values, one column(area_id) has 21878 unique values and other has(page_entry) 800 unique values. I am building a predictive ML model to predict the hits on a webpage. -column information: -area_id: all the locations that were visited during the session. (has location code number of different areas of a webpage) -page_entry: describes the landing page of the session. -how to change these two columns into numerical apart from one_hot encoding? -thank you.","One approach could be to group your categorical levels into smaller buckets using business rules. In your case for the feature area_id you could simply group them based on their geographical location, say all area_ids from a single district (or for that matter any other level of aggregation) will be replaced by a single id. Similarly, for page_entry you could group similar pages based on some attributes like nature of the web page like sports, travel, etc. In this way you could significantly reduce the number dimensions of your variables. -Hope this helps!",0.0,False,1,6303 -2019-09-18 17:09:01.753,How to restrict the maximum size of an element in a list in Python?,"Problem Statement: -There are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones? -The phones can be interchanged along the sockets -What I've tried: -I've made a list with 6 elements whose initial value is 0. I've defined two functions. Switch function, which interchanges the phone one socket to the left. 
Charge function, which adds value 10(charging time assumed) to each element, except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other lower value elements still get added 10 until they attain the value of 60?","In the charge function, add an if condition that checks the value of the element. -I'm not sure what you're add function looks like exactly, but I would define the pseudocode to look something like this: -if element < 60: -add 10 to the element -This way, if an element is greater than or equal to 60, it won't get caught by the if condition and won't get anything added to it.",0.0,False,2,6304 -2019-09-18 17:09:01.753,How to restrict the maximum size of an element in a list in Python?,"Problem Statement: -There are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones? -The phones can be interchanged along the sockets -What I've tried: -I've made a list with 6 elements whose initial value is 0. I've defined two functions. Switch function, which interchanges the phone one socket to the left. Charge function, which adds value 10(charging time assumed) to each element, except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other lower value elements still get added 10 until they attain the value of 60?","You cannot simply restrict the maximum element size. What you can do is check the element size with a if condition and terminate the process. -btw, answer is 6x60/5=72 mins.",0.0,False,2,6304 -2019-09-18 18:44:22.307,how to display plot images outside of jupyter notebook?,"So, this might be an utterly dumb question, but I have just started working with python and it's data science libs, and I would like to see seaborn plots displayed, but I prefer to work with editors I have experience with, like VS Code or PyCharm instead of Jupyter notebook. Of course, when I run the python code, the console does not display the plots as those are images. So how do I get to display and see the plots when not using jupyter?","You can try to run an matplotlib example code with python console or ipython console. They will show you a window with your plot. -Also, you can use Spyder instead of those consoles. It is free, and works well with python libraries for data science. Of course, you can check your plots in Spyder.",0.0,False,1,6305 -2019-09-19 18:35:33.863,Tasks linger in celery amqp when publisher is terminated,"I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoke the tasks where the publisher is not alive anymore? -I experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout","There's nothing built-in to celery to monitor the producer / publisher status -- only the worker / consumer status. There are other alternatives that you can consider, for example by using a redis expiring key that has to be updated periodically by the publisher that can serve as a proxy for whether a publisher is alive. 
And then in the task, checking to see if the flag for a publisher still exists within redis, and if it doesn't, the task returns doing nothing.",0.6730655149877884,False,2,6306 -2019-09-19 18:35:33.863,Tasks linger in celery amqp when publisher is terminated,"I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoke the tasks where the publisher is not alive anymore? -I experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout","Another solution, which works in my case, is to add the next task only if the currently processed ones are finished. In this case the queue doesn't fill up.",1.2,True,2,6306 -2019-09-19 19:03:13.597,"Python ""Magic methods"" are really methods?","I know how to use magic methods in python, but I would like to understand more about them. -For that I would like to consider three examples: -1) __init__: -We use this as a constructor in the beginning of most classes. If this is a method, what is the object associated with it? Is it a basic python object that is used to generate all the other objects? -2) __add__ -We use this to change the behaviour of the operator +. The same question as above. -3) __name__: -The most common use of it is inside this kind of structure: if __name__ == ""__main__"": -This returns True when you are running the module as the main program. -My question is: is __name__ a method or a variable? If it is a variable, what is the method associated with it? If it is a method, what is the object associated with it? -Since I do not understand these methods very well, maybe the questions are not well formulated. I would like to understand how these methods are constructed in Python.","The object is the class that's being instantiated, a.k.a. the Foo in Foo.__init__(actual_instance) -In a + b the object is a, and the expression is equivalent to a.__add__(b) -__name__ is a variable. It can't be a method because then comparisons with a string would always be False, since a function is never equal to a string",0.2012947653214861,False,1,6307 -2019-09-19 21:07:37.810,Python - how to check if user is on the desktop,"I am trying to write a program with python that works a bit like Android folders, but for Windows. I want the user to be able to single click on a desktop icon and then a window will open with the contents of the folder in it. After giving up trying to find a way to allow single click to open a desktop application (for only one application; I am aware that you can allow single click for all files and folders), I decided to check if the user clicked in the location of the file and if they were on the desktop while they were doing that. So what I need to know is how to check if the user is viewing the desktop in python. -Thanks, -Harry -TLDR; how to check if user is viewing the desktop - python","I don't know if ""single clicking"" would work in any way, but you can use Pyautogui to automatically click as many times as you want",0.0,False,1,6308 -2019-09-20 11:50:30.050,How to fine-tune a keras model with existing plus newer classes?,"Good day! -I have a celebrity dataset on which I want to fine-tune a keras built-in model.
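Returning to the magic-methods answer above, a tiny demo of those three points (the class is made up):

class Money:
    def __init__(self, amount):        # called on the class being instantiated
        self.amount = amount
    def __add__(self, other):          # a + b is evaluated as a.__add__(b)
        return Money(self.amount + other.amount)

if __name__ == "__main__":             # __name__ is a plain module-level variable
    total = Money(2) + Money(3)
    print(total.amount)                # 5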
So far, from what I have explored and done: we remove the top layers of the original model (or preferably, pass include_top=False) and add our own layers, and then train our newly added layers while keeping the previous layers frozen. This whole thing is pretty intuitive. -Now what I require is that my model learns to identify the celebrity faces, while also being able to detect all the other objects it has been trained on before. Originally, the models trained on imagenet come with an output layer of 1000 neurons, each representing a separate class. I'm confused about how it should be able to detect the new classes. All the transfer learning and fine-tuning articles and blogs tell us to replace the original 1000-neuron output layer with a different N-neuron layer (N = number of new classes). In my case, I have two celebrities, so if I have a new layer with 2 neurons, I don't know how the model is going to classify the original 1000 imagenet objects. -I need a pointer on this whole thing: how exactly can I have a pre-trained model taught two new celebrity faces while also maintaining its ability to recognize all the 1000 imagenet objects as well? -Thanks!","With transfer learning, you can make the trained model classify among the new classes on which you just trained, using the features learned from the new dataset and the features learned by the model from the dataset on which it was trained in the first place. Unfortunately, you cannot make the model classify between all the classes (original dataset classes + new dataset classes), because when you add the new classes, it keeps weights only for their classification. -But, let's say for experimentation you change the number of output neurons (equal to the number of old + new classes) in the last layer; it will then give random weights to these neurons, which on prediction will not give you a meaningful result. -This whole idea of making the model classify among old + new classes is still an area of research. -However, one way you can achieve it is to train your model from scratch on the whole data (old + new).",0.5457054096481145,False,1,6309 +This file is similar to the output of the command pm2 prettylist",1.2,True,1,7176 +2020-12-13 20:33:50.563,"Git, heroku, pre-receive hook declined","So I was trying to host a simple python script on Heroku.com, but encountered this error. After a little googling, I found this on Heroku's website: git, Heroku: pre-receive hook declined, Make sure you are pushing a repo that contains a proper supported app (Rails, Django etc.) and you are not just pushing some random repo to test it out. +Problem is I have no idea how these work, and the few tutorials I looked up were for more detailed use of those frameworks. What I need to know is how I can use them with a simple 1-file python script. Thanks in advance.","Okay I got it. It was about some unused modules in requirements.txt, I'm an idiot for not reading the output properly",0.0,False,1,7177 +2020-12-13 23:30:31.457,How to get author's Discord Tag shown,"How do I display the user's Name + Discord Tag? As in: +I know that; +f""Hello, <@{ctx.author.id}>"" +will return the user, being pinged. +(@user) +And that; +f""Hello, {ctx.author.name}"" +will return the user's nickname, but WITHOUT the #XXXX after it. +(user) +But how do I get it to display the user's full name and tag?
+(user#XXXX)",To get user#XXXX you can just do str(ctx.author) (or just put it in your f-string and it will automatically be converted to a string). You can also do ctx.author.discriminator to get their tag (XXXX).,0.2012947653214861,False,1,7178 +2020-12-14 15:50:01.883,How to scrape data from multiple unrelated sections of a website (using Scrapy),"I have made a Scrapy web crawler which can scrape Amazon. It can scrape by searching for items using a list of keywords and scrape the data from the resulting pages. +However, I would like to scrape Amazon for a large portion of its product data. I don't have a preferred list of keywords with which to query for items. Rather, I'd like to scrape the website evenly and collect X number of items which is representative of all products listed on Amazon. +Does anyone know how to scrape a website in this fashion? Thanks.","I'm putting my comment as an answer so that others looking for a similar solution can find it easier. +One way to achieve this is to go through each category (furniture, clothes, technology, automotive, etc.) and collect a set number of items there. Amazon has side/top bars with navigation links to different categories, so you can let it run through there. +The process would be as follows (a rough sketch follows below): + +Follow category urls from the initial Amazon.com parse +Use a different parse function for the callback, one that will scrape however many items from that category +Ensure that data is written to a file (it will probably be a lot of data) + +However, such an approach would not be representative of the proportions of each category in the total Amazon products. Try looking for an ""X number of results"" label for each category to compensate for that. Good luck with your project!",1.2,True,1,7179 +2020-12-16 08:12:51.783,How to change colors of pip error messages in windows powershell,"The error messages printed by pip in my Windows PowerShell are dark red on dark blue (default PowerShell background). This is quite hard to read and I'd like to change this, but I couldn't find any hint on how to do this, nor whether this is a default in Python applied to all stderr-like output, or specific to pip. +My configuration: Windows 10, Python 3.9.0, pip 20.2.3. +Thanks for your help!","Coloring in pip is done via ANSI escape sequences. So the solution to this problem would be to either change the way PowerShell displays ANSI colors or the color scheme pip uses. Pip does, though, provide a command-line switch '--no-color' which can be used to deactivate coloring the output.",0.0,False,1,7180 +2020-12-16 12:06:31.327,python api verified number using firebase,"I will create a python api using Django. +Now I am trying to verify a phone number using firebase authentication and send an SMS to the user, but I don't know how to do it.","The phone number authentication in Firebase is only available from its client-side SDKs, so the code that runs directly in your iOS, Android or Web app. It is not possible to trigger sending of the SMS message from the server. +So you can either find another service to send SMS messages, or put the call to send the SMS message into the client-side code and then trigger that after it calls your Django API.",1.2,True,1,7181 +2020-12-16 16:21:38.647,ImportError: No module named 'sklearn.compose' with scikit-learn==0.23.2,"I'm fully aware of the previous post regarding this error. That issue was with scikit-learn < 0.20. But I'm having scikit-learn 0.23.2 and I've tried uninstalling and reinstalling 0.22 and 0.23 and I still have this error.
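The rough sketch promised in the Scrapy answer above; the CSS selectors and the per-category item cap are assumptions, since the real page markup is not given:

import scrapy

class CategorySpider(scrapy.Spider):
    name = "categories"
    start_urls = ["https://www.amazon.com/"]  # assumed entry point

    def parse(self, response):
        # follow each category url found on the initial page
        for href in response.css("a.nav-category::attr(href)").getall():  # hypothetical selector
            yield response.follow(href, callback=self.parse_category)

    def parse_category(self, response):
        # scrape a set number of items from this category
        for item in response.css("div.s-result-item")[:100]:  # hypothetical selector
            yield {"title": item.css("h2 ::text").get()}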
+Followup: Although pip list told me the scikit-learn version is 0.23.2, when I ran sklearn.__version__ the real version was 0.18.1. Why, and how do I resolve this inconsistency? (Uninstalling 0.23.2 didn't work)","[RESOLVED] +It turned out that my Conda environment has a different sys.path than my jupyter environment. The jupyter environment used the system env, which is due to the fact that I installed the ipykernel like this: python -m ipykernel install, without the --user flag. The correct way should be to do so within the Conda env and run pip install jupyter",0.0,False,1,7182 +2020-12-17 08:39:49.780,How can I transform a list to an array quickly in the framework of Mxnet?,"I have a list which has 8 elements and all of those elements are arrays whose shape is (3,480,364). Now I want to transform this list to an array of shape (8,3,480,364). When I use the array=nd.array(list) command, it takes me a lot of time and sometimes it gives an 'out of memory' error. When I try to use the command array=np.stack(list, axis=0) and I debug the code, it stays at this step and can't produce the result. So I wonder how I can transform a list to an array quickly when using the Mxnet framework?","Your method of transforming a list of lists into an array is correct, but an 'out of memory' error means you are running out of memory, which would also explain the slowdown. +How to check how much RAM you have left: +on Linux, you can run free -mh in the terminal. +How to check how much memory a variable takes: +The function sys.getsizeof tells you memory size in bytes. +You haven't said what data type your arrays have, but, say, if they're float64, that's 8 bytes per element, so your array of 8 * 3 * 480 * 364 = 4193280 elements should only take up 4193280 * 8 bytes = about 30 MB. So, unless you have very little RAM left, you probably shouldn't be running out of memory. +So, I'd first check your assumptions: does your list really only have 8 elements, do all the elements have the same shape of (3, 480, 364), what is the data type, do you create this array once or a thousand times? You can also check the size of a list element: sys.getsizeof(list[0]). +Most likely this will clear it up, but what if your array is really just too big for your RAM? +What to do if an array doesn't fit in memory +One solution is to use a smaller data type (dtype=np.float32 for floating point, np.int32 or even np.uint8 for small integer numbers). This will sacrifice some precision for floating point calculations. +If almost all elements in the array are zero, consider using a sparse matrix. +For training a neural net, you can use a batch training algorithm and only load data into memory in small batches.",0.0,False,1,7183 +2020-12-18 05:07:49.150,How do you set up a python project to be able to send to others without them having to manually copy and paste the code into an editor,"I made a cool little project for my friend, basically a timer using tkinter, but I am confused about how to let them access this project without having vscode or pycharm. Is it possible for them to just see the Tkinter window or something like that? Is there an application for this? Sorry if this is a stupid question.","You can just build an .exe (application) of your project. Then just share the application file and anyone can use the application through the .exe. You can use pyinstaller to convert your python code to an exe.
+pip install pyinstaller +then cd to the project folder and run the following command +pyinstaller --onefile YourFileName.py +if you want to make an exe without the console showing up then use this command +pyinstaller --onefile YourFileName.py --noconsole",0.6730655149877884,False,1,7184 +2020-12-18 06:28:45.840,Deploy Python Web Scraping files on Azure cloud (function apps),"I have 2 python files that do web scraping using Selenium and Beautifulsoup and store the results in separate CSV files, say file1.csv and file2.csv. Now, I want to deploy these files on the Azure cloud, and I know Azure function apps would be ideal for this. But I don't know how the Functions app will support the Selenium driver. +Basically, I want to time-trigger my 2 web scraping files and store the results in two separate files file1.csv and file2.csv that will be stored in blob storage on the Azure cloud. Can someone help me with this task? +How can I use the selenium driver on the Azure functions app?","Deploying on virtual machines or EC2 is the only option that one can use to achieve this task. +Also, with Heroku, we will be able to run selenium on the cloud by adding buildpacks. But when it comes to storing the files, we will not be able to store files on heroku as heroku does not persist the files. So, VMs or EC2 instances are the only options for this task.",1.2,True,1,7185 +2020-12-18 19:17:18.420,Do I have to sort dates chronologically to use pandas.DataFrame.ewm?,"I need to calculate the EMA for a set of data from a csv file where dates are in descending order. +When I apply pandas.DataFrame.ewm I get the EMA for the latest (by date) entry equal to its raw value. This is because ewm starts observation from top to bottom in the DataFrame. +So far, I could not find an option to reverse this for ewm. So I guess I will have to reverse my whole dataset. +Maybe somebody knows how to make ewm start from the bottom values? +Or is it recommended to always use a datetimeindex sorted chronologically? From oldest values on top to newest on the bottom?","From pandas' documentation: + +Times corresponding to the observations. Must be monotonically increasing and datetime64[ns] dtype. + +I guess the datetimeindex must be chronological (a minimal sketch follows below).",1.2,True,1,7186 +2020-12-19 15:35:48.737,How should I handle a data set with around 300000 small groups of data tables?,"I have a data science project in Python and I wonder how to manage my data. Some details about my situation: + +My data consists of a somewhat larger number of football matches, currently around 300000, and it is supposed to grow further as time goes on. Attached to each match are a few tables with different numbers of rows/columns (but similar column formats across different matches). +Now obviously I need to iterate through this set of matches to do some computations. So while I don't think that I can hold the whole database in memory, I guess it would make sense to have at least chunks in memory, do computations on that chunk, and release it. +At the moment I have split everything up into single matches, gave each match an ID and created a folder for each match with the ID as folder name. Then I put the corresponding tables as small individual csv files into the folder that belongs to a given match. Additionally, I have an „overview" DataFrame with some „metadata" columns, one row per match. I put this row as a small json into each match folder for convenience as well.
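The sketch promised in the pandas.DataFrame.ewm answer above, sorting chronologically before applying ewm; the file and column names are assumptions:

import pandas as pd

df = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")  # assumed layout
df = df.sort_index()                         # oldest at the top, as ewm expects
df["ema"] = df["close"].ewm(span=10).mean()  # span chosen arbitrarily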
+I guess there would also be other ways to split the whole data set into chunks than match-wise, but for prototyping/testing my code with a small number of matches, it actually turned out to be quite handy to be able to go to a specific match folder in a file manager and look at one of these tables in a spreadsheet program (although similar inspections could obviously also be made from code in appropriate settings). But now I am at the point where this huge number of quite small files/folders slows down the OS so much that I need to do something else. +Just to be able to deal with the data at all right now, I simply created an additional layer of folder hierarchy like „range-0" contains folders 0-9999, „range-1" contains 10000-19999, etc. But I'm not sure if this is the way to go forward. +Maybe I could simply save one chunk - whatever one chunk is - as a json file, but would lose some of the ease of the manual inspection. +At least everything is small enough that I can do my statistical analyses on a single machine, such that I think I can avoid map/reduce-type algorithms. +On another note, I have close to zero knowledge about database frameworks (I have written a few lines of SQL in my life), and I guess I would be the only person making requests to my database, so I am in doubt that this makes sense. But in case it does, what are the advantages of such an approach? + +So, to the people out there having some experience with handling data in such projects - what kind of way to manage my data, on a conceptual level, would you suggest or recommend in such a setting (independent of specific tools/libraries etc.)?","Your arrangement is not bad at all. We are not used to thinking of it this way, but modern filesystems are themselves very efficient (noSQL) databases. +All you have to do is have auxiliary files to work as indexes and metadata so your application can find its way. From your post, it looks like you already have that done to some degree. +Since you don't give more specific details of the specific files and data you are dealing with, we can't suggest specific arrangements. If the data is suitable for an SQL tabular representation, you could get benefits from putting all of it in a database and using some ORM - you'd also have to write adapters to get the Python object data into Pandas for your numeric analysis if you do that, and it might end up being a superfluous layer if you are already getting it to work. +So, just be sure that whatever adaptations you do to make the files easier to deal with by hand (like the extra layer of folders you mention), don't get in the way of your code - i.e., make your code so that it automatically finds its way across this, or any extra layers you happen to create (this can be as simple as having the final game match folder's full path as a column in your ""overview"" dataframe)",1.2,True,1,7187 +2020-12-19 18:54:45.050,pip install a specific version of PyQt5,"I am using spyder & want to install finplot. However when I did this I could not open spyder and had to uninstall & reinstall anaconda. +The problem is to do with PyQt5 as I understand. The developer of finplot said that one solution would be to install PyQt5 version 5.9. + +Error: spyder 4.1.3 has requirement pyqt5<5.13; python_version >= ""3"", but you'll have pyqt5 5.13.0 which is incompatible + +My question is how would I do this?
To install finplot I used the line below, + +pip install finplot + +Is there a way to specify which version of PyQt5 it should install?","As far as I understand you just want to install PyQt5 version 5.9. You can try the command below if you have pip installed on your machine + +pip install PyQt5==5.9 + +Edit: First you need to uninstall your PyQt5 5.13 + +pip uninstall PyQt5",0.6730655149877884,False,1,7188 +2020-12-19 22:58:12.080,Running another script while sharing functions and variables as in jupyter notebook,"I have a notebook that uses %run to run another notebook under JupyterLab. They can call each other's functions back and forth and share some global variables. +I now want to convert the notebooks to py files so they can be executed from the command line. +I followed the advice found on SO and imported the 2nd file into the main one. +However, I found out that they cannot call each other's functions. This is a major problem because the 2nd file is a service to the main one, but it continuously uses functions that are part of the main one. +Essentially, the second program is non-GUI and it is driven by the main one, which is a GUI program. Thus whenever the service program needs to print, it checks to see if a flag is set that tells it that it runs in GUI mode, and then instead of simply printing it calls a function in the main one which knows how to display it on the GUI screen. I want to keep this separation. +How can I achieve it without too much change to the service program?","I ended up collecting all the GUI functions from the main GUI program, and putting them into a 3rd file in a class, including the relevant variables. +In the GUI program, just before calling the non-GUI program (the service one), I created the class and set all the variables, and in the call I passed the class. +Then in the service program I call the functions that are in the class and get the variables needed from the class as well. +The changes to the service program were minor - just reading the variables from the class and changing the calls to the GUI functions to call the class functions instead.",0.0,False,1,7189 +2020-12-19 23:06:07.333,How to evaluate trained model Average Precision and Mean AP with IOU=0.3,"I trained a model using the Tensorflow object detection API using Faster-RCNN with a Resnet architecture. I am using tensorflow 1.13.1, cudnn 7.6.5, protobuf 3.11.4, python 3.7.7, numpy 1.18.1 and I cannot upgrade the versions at the moment. I need to evaluate the accuracy (AP/mAP) of the trained model on the validation set for IOU=0.3. I am using the legacy/eval.py script on purpose since it calculates AP/mAP for IOU=0.5 only (instead of mAP:0.5:0.95) +python legacy/eval.py --logtostderr --pipeline_config_path=training/faster_rcnn_resnet152_coco.config --checkpoint_dir=training/ --eval_dir=eval/ +I tried several things including updating the pipeline config file to have min_score_threshold=0.3: +eval_config: { +num_examples: 60 +min_score_threshold: 0.3 +.. +Updated the default value in the protos/eval.proto file and recompiled the proto file to generate a new version of eval_pb2.py +// Minimum score threshold for a detected object box to be visualized +optional float min_score_threshold = 13 [default = 0.3]; +However, eval.py still calculates/shows AP/mAP with IOU=0.5 +The above configuration helped only to detect objects on the images with confidence level < 0.5 in the eval.py output images, but this is not what I need.
+Does anybody know how to evaluate the model with IOU=0.3?",I finally could solve it by modifying the hardcoded matching_iou_threshold=0.5 argument value in multiple method signatures (especially def __init__) in ../object_detection/utils/object_detection_evaluation.py,1.2,True,1,7190 +2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300? +Can someone please explain?","train_test_split splits arrays or matrices into random train and test subsets. That means that every time you run it without specifying random_state, you will get a different result; this is expected behavior. +When you use random_state=any_value then your code will show exactly the same behaviour every time you run it.",0.0,False,3,7191 +2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300? +Can someone please explain?",Random forests introduce stochasticity by randomly sampling data and features. Running RF on the exact same data may produce different outcomes for each run due to these random samplings. Fixing the seed to a constant (e.g. 1) will eliminate that stochasticity and will produce the same results for each run.,0.0,False,3,7191 +2020-12-20 12:53:05.890,random_state in random forest,"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300? +Can someone please explain?","In addition, most people use the number 42 for random_state. +For example, random_state = 42, and there's a reason for that. +Below is the answer. +The number 42 is, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, the ""Answer to the Ultimate Question of Life, the Universe, and Everything"", calculated by an enormous supercomputer named Deep Thought over a period of 7.5 million years. Unfortunately, no one knows what the question is",0.0,False,3,7191 +2020-12-20 23:15:46.737,Get the number of boosts in a server discord.py,"I am trying to make a server info command and I want it to display the server name, boost count, boost members and some other stuff as well. +The only problem is I have looked at the docs and searched online and I can't find out how to get the boost information. +I don't have any code as I've not found any code to try and use for myself. +Is there any way to get this information?","Guild Name - guild_object.name +Boost count - guild_object.premium_subscription_count +Boosters, the people who boosted the server - guild_object.premium_subscribers +If you're doing this in a command, as I assume, use ctx.guild instead of guild_object.
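A small hypothetical command built from the attributes listed in the boost answer above (bot setup assumed):

from discord.ext import commands

bot = commands.Bot(command_prefix="!")

@bot.command()
async def serverinfo(ctx):
    guild = ctx.guild
    boosters = ", ".join(m.name for m in guild.premium_subscribers) or "none"
    await ctx.send(f"{guild.name}: {guild.premium_subscription_count} boosts (by {boosters})")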
For anything further, you can re-read the docs, as all of the above information is in them under discord.Guild",1.2,True,1,7192 +2020-12-21 17:02:29.590,find frequency of an int appearing in a list of intervals,"I was given a list of intervals, for example [[10,40],[20,60]], and a list of positions [5,15,30]. +We should return how often each position is covered by the intervals; the answer would be [[5,0],[15,1],[30,2]] because 5 wasn't covered by any interval, 15 was covered once, and 30 was covered twice. +If I just do a for loop the time complexity would be O(m*n), where m is the number of intervals and n is the number of positions. +Can I preprocess the intervals and make it faster? I was thinking of sorting the intervals first and using binary search, but I am not sure how to implement it in python. Can someone give me a hint? Or can I use a hashtable to store intervals? What would be the time complexity for that?","You can use a frequency array to preprocess all interval data and then query for any value to get the answer. Specifically, create an array able to hold the min and max possible end-points of all the intervals. Then, for each interval, increment the frequency of the starting interval point and decrease the frequency of the value just after the end interval. At the end, accumulate this data over the array and we will have the frequency of occurrence of each value between the min and max of the intervals. Each query is then just returning the frequency value from this array (a runnable sketch follows below). + +freq[] --> larger than max-min+1 (min: minimum start value, max: maximum end value) +For each [L,R] --> freq[L]++, freq[R+1] = freq[R+1]-1 +freq[i] = freq[i]+freq[i-1] +For any query V, answer is freq[V] + +Do consider tradeoffs when the range is very large compared to the number of queries, where a simple check over all intervals may suffice.",0.0,False,1,7193 +2020-12-22 08:56:10.800,"Convert Json format String to Link{""link"":""https://i.imgur.com/zfxsqlk.png""}","I try to convert this String to only the link: {""link"":""https://i.imgur.com/zfxsqlk.png""} +I'm trying to create a discord bot, which sends random pictures from the API https://some-random-api.ml/img/red_panda. +With imageURL = json.loads(requests.get(redpandaurl).content) I get the json String, but what do I have to do so that I only get the link, like this: https://i.imgur.com/zfxsqlk.png +Sorry if my question is confusingly written, I'm new to programming and don't really know how to describe this problem.","What you get from json.loads() is a Python dict. You can access values in the dict by specifying their keys. +In your case, there is only one key-value pair in the dict: ""link"" is the key and ""https://i.imgur.com/zfxsqlk.png"" is the value. You can get the link and store it in the variable by appending [""link""] to your line of code: +imageURL = json.loads(requests.get(redpandaurl).content)[""link""]",0.0,False,1,7194 +2020-12-23 07:39:41.123,Finding or building a python security profiler,"I want a security profiler for python. Specifically, I want something that will take as input a python program and tell me if the program tries to make system calls, read files, or import libraries. If such a security profiler exists, where can I find it? If no such thing exists and I were to write one myself, where could I have my profiler 'checked' (that is, verified that it works)? +If you don't find this question appropriate for SO, let me know if there is another SE site I can post this on, or if possible, how I can change/rephrase my question.
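The difference-array steps from the interval-coverage answer above, turned into the promised runnable sketch:

def coverage(intervals, positions):
    lo = min(l for l, r in intervals)
    hi = max(r for l, r in intervals)
    freq = [0] * (hi - lo + 2)          # one extra slot for R+1
    for l, r in intervals:
        freq[l - lo] += 1               # coverage starts at L
        freq[r - lo + 1] -= 1           # and ends just after R
    for i in range(1, len(freq)):
        freq[i] += freq[i - 1]          # prefix sums give per-value coverage
    return [[p, freq[p - lo] if lo <= p <= hi else 0] for p in positions]

print(coverage([[10, 40], [20, 60]], [5, 15, 30]))  # [[5, 0], [15, 1], [30, 2]]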
Thanks","Usually, python uses an interpreter called CPython. It is hard to say for python code by itself if it opens files or does something special, due a lot of python libraries and interpreter itself are written in C, and system calls/libc calls can happen only from there. Also python syntax by itself can be very obscure. +So, by answering your suspect: I suspect this would need specific knowledge of the python programming language, it does not look like that, due it is about C language. +You can think it is possible to patch CPython itself. Well it is not correct too as I guess. A lot of shared libraries use C/C++ code as CPython itself. Tensorflow, for example. +Going further, I guess it is possible to do following things: + +patch the compiler which compiles C/C++ code for CPython/modules, which is hard I guess. +just use an usual profiler, and trace which files, directories and calls are used by python itself for operation, and whitelist them, due they are needed, which is the best option by my opinion (AppArmor for example). +maybe you can be interested in the patching of CPython itself, where it is possible to hook needed functions and calls to external C libraries, but it can be annoying due you will have to revise every added library to your project, and also C code is often used for performance (e.g. json module), which doesn't open too much things.",1.2,True,1,7195 +2020-12-23 23:02:35.663,How can I let the user of an Django Admin Page control which list_display fields are visible?,"I have an ModelAdmin with a set of fields in list_display. +I want the user to be able to click a checkbox in order to add or remove these fields. +Is there a straightforward way of doing this? I've looked into Widgets but I'm not sure how they would change the list_display of a ModelAdmin","To do this I had to + +Override an admin template (and change TEMPLATES in settings.py). I added a form with checkboxes so user can set field +Add a new model and endpoint to update it (the model stores the fields to be displayed, the user submits a set of fields in the new admin template) +Update admin.py, overriding get_list_display so it sets fields based on the state of the model object updated",1.2,True,1,7196 +2020-12-24 16:49:26.270,What is the difference between a+=1 and a=+1..?,"how to understand difference between a+=1 and a=+1 in Python? +it seems that they're different. when I debug them in Python IDLE both were having different output.","It really depends on the type of object that a references. +For the case that a is another int: +The += is a single operator, an augmented assignment operator, that invokes a=a.__add__(1), for immutables. It is equivalent to a=a+1 and returns a new int object bound to the variable a. +The =+ is parsed as two operators using the normal order of operations: + ++ is a unary operator working on its right-hand-side argument invoking the special function a.__pos__(), similar to how -a would negate a via the unary a.__neg__() operator. += is the normal assignment operator + +For mutables += invokes __iadd__() for an in-place addition that should return the mutated original object.",0.1016881243684853,False,2,7197 +2020-12-24 16:49:26.270,What is the difference between a+=1 and a=+1..?,"how to understand difference between a+=1 and a=+1 in Python? +it seems that they're different. when I debug them in Python IDLE both were having different output.","a+=1 is a += 1, where += is a single operator meaning the same as a = a + 1. 
+a=+1 is a = + 1, which assigns +1 to the variable without using the original value of a",0.2012947653214861,False,2,7197 +2020-12-24 19:05:39.640,different python files sharing the same variables,"I would like to know, please, how I can define variables in a python file and share these variables, with their values, with multiple python files?","You can create a python module: +Create a py file, define the variables inside that module, and import that module in the required places.",0.0,False,2,7198 +2020-12-24 19:05:39.640,different python files sharing the same variables,"I would like to know, please, how I can define variables in a python file and share these variables, with their values, with multiple python files?","To do this, you can create a new module specifically for storing all the global variables your application might need. For this you can create a function that will initialize any of these globals with a default value; you only need to call this function once from your main class, then you can import the globals file from any other class and use those globals as needed.",1.2,True,2,7198 +2020-12-25 13:44:08.990,How to connect a Python Flask backend to a React front end? How does it work together?,"I am making a website, and I want to know how to connect React js to my Flask back end. I have tried searching online but unfortunately it was not what I am looking for. If you know how to do it, please recommend me some resources. And I also want to know the logic of how Flask and React work together.","Flask is a backend micro-framework and react is a front-end framework. Flask communicates with the database and exposes the desired API endpoints. The backend listens for any API request and sends the corresponding response in JSON format. So using React you can make HTTP requests to the backend. +For testing purposes, have the backend and frontend separated and communicate only using the REST APIs. For production, use the compiled js of React as static files and render only the index.html of the compiled react from the backend. +P.S: I personally recommend Django rest framework over flask if you are planning to do a huge project.",1.2,True,1,7199 +2020-12-26 19:08:35.663,AES 128 bit encryption of bitstream data in python,"I am trying to encrypt bitstream data, basically a list of binary data like this: [1,0,1,1,1,0,0,1,1,0,1,1,0,1], in python using AES encryption with a block size of 128 bits. The problem is that I want the output to be binary data as well, and the same size as the original binary data list. Is that possible? How do I do that?","Yes, there are basically two ways: + +You have a unique value tied to the data (for instance, if the messages are provided in sequence, you can create a sequence number); then you can simply use the unique value as nonce and use AES encryption in counter mode. Counter mode doesn't expand the data, but it is insecure if no nonce is supplied. Note that you do need the nonce when decrypting (a minimal sketch follows below). + +You use format preserving encryption or FPE such as FF1 and FF3 defined by NIST. There are a few problems with this approach: + +there are issues with these algorithms if the amount of input data is minimal (as it seems to be in your case); +the implementations of FF1 and FF3 are generally hard to find; +if you encrypt two identical bit values then they will result in identical ciphertext.
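The sketch promised for option 1 (counter mode with a unique nonce), using PyCryptodome; the key handling and the bit-packing scheme are illustrative assumptions:

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

bits = [1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1]
# pack the bit list into bytes (the final partial byte is zero-padded)
data = bytes(sum(b << (7 - i) for i, b in enumerate(bits[j:j + 8]))
             for j in range(0, len(bits), 8))

key = get_random_bytes(16)   # AES-128
nonce = get_random_bytes(8)  # must be unique per message and kept for decryption
ct = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(data)  # same length as data
pt = AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(ct)
assert pt == data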
+ + +Neither of these schemes provides integrity or authenticity of the data, obviously, and they by definition leak the size of the plaintext.",1.2,True,1,7200 +2020-12-26 21:26:15.483,Running encrypted python code using RSA or AES encryption,"As I was working on a project, the topic of code obfuscation came up; as such, would it be possible to encrypt python code using either RSA or AES and then decode it on the other side and run it? And if it's possible, how would you do it? I know that you can obfuscate code using Base64, or XOR, but using AES or RSA would be an interesting application. This is simply a generic question for anyone that may have an idea on how to do it. I am just looking to encrypt a piece of code at point A, send it to point B, have it decrypted at point B and run there locally, using either AES or RSA. It can be sent by any means, as long as the code itself is encrypted and unreadable.","Yes, this is very possible but would require some setup to work. +First off, Base64 is an encoder for encoding data from binary/bytes to a restricted ascii/utf subset for transmission, usually over http. It's not really an obfuscator, more like a packager for binary data. +So here is what is needed for this to work. + +A pre-shared secret key that both point A and point B have. This key cannot be transmitted along with the code since anyone who gets the encrypted code would also get the key to decrypt it. + +There would need to be an unencrypted code/program that allows you to insert that pre-shared key to use to decrypt the encrypted code that was sent. You can't hardcode the key into the decryptor since, again, anyone with the decryptor can then decrypt the code, and also if the secret key is leaked you would have to resend the decryptor to use a different key. + +Once it's decrypted, the ""decryptor"" could save that code to a file for you to run, or run the code itself using console commands, or if it's a python program you can call eval or use importlib to import that code and call the functions within. +WARNING: eval is known to be dangerous since it will execute whatever code it reads. If you use eval with code you don't trust, it can download a virus or grab info from your computer or anything really. DO NOT RUN UNTRUSTED CODE. + + +Also, there is a difference between AES and RSA. One is a symmetric cipher and the other is asymmetric. Both will work for what you want, but they require different things for encryption and decryption. One uses a single key for both, while the other uses one for encryption and one for decryption. So something to think about.",1.2,True,1,7201 +2020-12-29 07:50:36.320,How to send and receive data (and/or data structures) from a C++ script to a Python script?,"I am working on a project that needs to do the following: + +[C++ Program] Checks a given directory, extracts all the names (full paths) of the found files and records them in a vector. +[C++ Program] ""Sends"" the vector to a Python script. +[Python Script] ""Receives"" the vector and transforms it into a List. +[Python Script] Compares the elements of the List (the paths) against the records of a database and removes the matches from the List (removes the paths already registered). +[Python Script] ""Sends"" the processed List back to the C++ Program. +[C++ Program] ""Receives"" the List, transforms it into a vector and continues its operations with this processed data. + +I would like to know how to send and receive data structures (or data) between a C++ script and a Python script.
+For this case I put the example of a vector transforming into a List; however, I would like to know how to do it for any structure or data in general. +Obviously I am a beginner, and that is why I would like your help on what documentation to read, what concepts I should start with, what technique I should use (maybe there is some implicit standard), and what links I could review to learn how to communicate data between scripts of the languages I just mentioned. +Any help is useful to me.","If the idea is to execute the python script from the c++ process, then the easiest would be to design the python script to accept input_file and output_file as arguments; the c++ program should write the input_file, start the script and read the output_file (a minimal sketch follows below). +For simple structures like a list of strings, you can simply write them as text files and share them, but for more complex types, you can use google-protocolbuffers to do the marshalling/unmarshalling. +If the idea is to send/receive data between two already started processes, then you can use the same protocol buffers to encode the data and send/receive it via sockets between each other. Check gRPC",0.0,False,1,7202 +2020-12-30 17:33:11.363,Unable to get LabJack U3 model loaded into PyCharm properly,I am trying to use a LabJack U3 product using Python and I am using PyCharm for development of my code. I am new to both Python and PyCharm FYI. The LabJack documentation says to run python setup.py install in the directory where I downloaded their Python links for using their device. I did this and when run under a straight Python console I can get import u3 to run and am able to access the U3 device. Yet when I run this in PyCharm I cannot get it to run. It always tells me the module was not found. I have asked LabJack for help but they do not know PyCharm. I have looked on the net but I can't seem to see how to get the module loaded properly under PyCharm. Could I please get some help on how to do this properly?,First you'll install that module inside of pycharm settings; if it's still not working then install the module from the terminal of pycharm and try to run your python script,0.0,False,1,7203 +2020-12-31 05:11:06.240,Hyper-parameter tuning and classification algorithm comparison,"I have a doubt about classification algorithm comparison. +I am doing a project regarding hyperparameter tuning and classification model comparison for a dataset. +The goal is to find out the best-fitted model with the best hyperparameters for my dataset. +For example: I have 2 classification models (SVM and Random Forest), my dataset has 1000 rows and 10 columns (9 columns are features) and the last column is the label. +First of all, I split the dataset into 2 portions (80-20) for training (800 rows) and testing (200 rows) correspondingly. After that, I use Grid Search with CV = 10 to tune the hyperparameters on the training set with these 2 models (SVM and Random Forest). When hyperparameters are identified for each model, I use these hyperparameters of these 2 models to test the Accuracy_score on the training and testing set again in order to find out which model is the best one for my data (conditions: Accuracy_score on training set < Accuracy_score on testing set (not overfitting), and whichever model's Accuracy_score on the testing set is higher, that model is the best model). +However, SVM shows the accuracy_score of the training set is 100 and the accuracy_score of the testing set is 83.56; this means SVM with tuned hyperparameters is overfitting.
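The sketch promised in that answer, showing the Python side of the file-based exchange; the file format (one path per line) and the database helper are assumptions:

import sys

def load_registered_paths():
    # hypothetical database lookup; returns the set of already-registered paths
    return set()

def main():
    input_file, output_file = sys.argv[1], sys.argv[2]
    with open(input_file) as f:
        paths = [line.strip() for line in f if line.strip()]  # the C++ "vector"
    known = load_registered_paths()
    new_paths = [p for p in paths if p not in known]
    with open(output_file, "w") as f:
        f.write("\n".join(new_paths))  # the C++ program reads this back into a vector

if __name__ == "__main__":
    main()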
On the other hand, Random Forest shows the accuracy_score of the training set is 72.36 and the accuracy_score of the testing set is 81.23. It is clear that the accuracy_score of the testing set for SVM is higher than the accuracy_score of the testing set for Random Forest, but SVM is overfitting. +I have some questions as below: +_ Is my method correct when I compare the accuracy_score for the training and testing set as above instead of using Cross-Validation? (If I use Cross-Validation, how do I do it?) +_ It is clear that SVM above is overfitting, but its accuracy_score on the testing set is higher than the accuracy_score on the testing set of Random Forest; could I conclude that SVM is the best model in this case? +Thank you!","I would suggest splitting your data into three sets, rather than two: + +Training +Validation +Testing + +Training is used to train the model, as you have been doing. The validation set is used to evaluate the performance of a model trained with a given set of hyperparameters. The optimal set of hyperparameters is then used to generate predictions on the test set, which wasn't part of either training or hyperparameter selection. You can then compare performance on the test set between your classifiers (a small sketch of such a split follows below). +The large decrease in performance of your SVM model on your validation dataset does suggest overfitting, though it is common for a classifier to perform better on the training dataset than on an evaluation or test dataset.",0.0,False,1,7204 +2020-12-31 06:41:56.733,Equivalent gray value of a color given the LAB values,"I have an RGB image and I converted it to Lab colorspace. Now, I want to convert the image in LAB space to a grayscale one. I know L is NOT Luminance. +So, any idea how to get the equivalent gray value of a specific color in Lab space? +I'm looking for a formula or algorithm to determine the equivalent gray value of a color given the LAB values.","The conversion from Luminance Y to Lightness L* is defined by the CIE 1976 Lightness Function. Put another way, L* transforms linear values into non-linear values that are perceptually uniform for the Human Visual System (HVS). With that in mind, your question now depends on what kind of gray you are looking for: if perceptually uniform and thus non-linear, the Lightness channel from CIE L*a*b* is actually that of CIE 1976 and is appropriate. If you need something linear, you would have to convert back to CIE XYZ tristimulus values and use the Y channel.",0.3869120172231254,False,1,7205 +2020-12-31 13:28:50.363,"Creating a JSON file in python, where they are not separated by commas","I'm looking to create the below JSON file in python.
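The split sketch promised in the model-comparison answer above, using sklearn; the 60/20/20 proportions and the toy data are assumptions:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=9)  # toy stand-in for the dataset
# carve off the test set first, then split the rest into train/validation
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=42)
# 0.25 of the remaining 80% equals 20% of the original data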
I do not understand how I can have multiple dictionaries that are not separated by commas, so that when I use the JSON library to save the dictionary to disk, I get the below JSON; +{""text"": ""Terrible customer service."", ""labels"": [""negative""], ""meta"": {""wikiPageID"": 1}} +{""text"": ""Really great transaction."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 2}} +{""text"": ""Great price."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 3}} +instead of a list of dictionaries like below; +[{""text"": ""Terrible customer service."", ""labels"": [""negative""], ""meta"": {""wikiPageID"": 1}}, +{""text"": ""Really great transaction."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 2}}, +{""text"": ""Great price."", ""labels"": [""positive""], ""meta"": {""wikiPageID"": 3}}] +The difference is, in the first example, each line is a dictionary and they are not in a list or separated by commas. +Whereas the second example, which is what I'm able to come up with, is a list of dictionaries, each dictionary separated by a comma. +I'm sorry if this is a stupid question; I have been breaking my head over this for weeks, and have not been able to come up with a solution. +Any help is appreciated. +And thank you in advance.","The way you want to store the data in one file isn't possible with a single JSON document. +Each JSON file can only contain one object. This means that you can either have one object defined within curly braces, or an array of objects as you mentioned. +If you want to store each object as a JSON object you should use separate files, each containing a single object.",0.0,False,1,7206 +2020-12-31 21:45:40.700,save user input data in kivy and store for later use/analysis python,"I am a kivy n00b, using python, and am not sure if this is the right place to ask. +Can someone please explain how a user can input data in an Android app, and how/where it is stored (SQL table, csv, xml?). I am also confused as to how it can be extended/used for further analysis. +I think it should be held as a SQL table, but I do not understand how to save/set up a SQL table in an android app, nor how to access it. Similarly, how to save/append/access a csv/xml document, nor, if these are made, how they are kept secure from accidental deletion, overwriting, etc. +In essence, I want to save only the timestamp at which a user enters some data, and the corresponding values (max 4). +User input would consist of 4 variables, x1, x2, x3, x4, and I would write a SQL statement along the lines of: insert into data.table timestamp, x1, x2, x3, x4, and then to access the data something along the lines of select * from data.table and then do/show stuff. +Can someone offer suggestions on what resources to read? How to set up a SQL Server table in an android app?","This works basically the same way on android as on the desktop: you have access to the local filesystem to create/edit files (at least within the app directory), so you can read and write whatever data storage format you like. +If you want to use a database, sqlite is the simplest and most obvious option.",1.2,True,1,7207 +2021-01-01 02:54:19.350,"Django: Channels and Web Socket, how to make group chats exclusive","E.g. I have a chat application; however, I realised that for my application, as long as you have the link to the chat, you can enter. How do I prevent that, and make it such that only members of the group chat can access the chat? Something like password-protecting the url to the chat, or perhaps something like whatsapp.
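For what it's worth, the one-object-per-line format shown in that question is commonly written as newline-delimited JSON ("JSON Lines"), a technique the answer above does not mention; a minimal sketch:

import json

records = [
    {"text": "Terrible customer service.", "labels": ["negative"], "meta": {"wikiPageID": 1}},
    {"text": "Great price.", "labels": ["positive"], "meta": {"wikiPageID": 3}},
]
with open("data.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")  # one JSON object per line, no commas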
Does anyone have any suggestion and reference material as to how I should build this and implement the function? Thank you!","I am in the exact same situation as you. What I am thinking of doing +is: +Store the group_url and the respective user_ids (which we get from django's authentication) in a table (with two columns, group_url and allowed_user_ids) or in Redis. +Then when a client connects to a channel, say chat/1234 (where 1234 is the group_url), we get the id of that user using self.scope['user'].id and check it against the table. +If the user_id is in the respective group_url, we accept the connection. Else we reject the connection (a rough sketch follows below). I am new to this too. Let me know if you find a better approach",1.2,True,1,7208 +2021-01-01 21:31:38.310,Discord.py get user with Name#0001,"How do I get the user/member object in discord.py with only the Name#Discriminator? I have searched for a few hours now and didn't find anything. I know how to get the object using the id, but is there a way to convert Name#Discriminator to the id? +The user may not be in the Server.","There's no way to do it if you aren't sure they're in the server. If you are, you can search through the server's members, but otherwise, it wouldn't make sense. Usernames/Discriminators change all the time, while IDs remain unique, so it would become a huge headache trying to implement that. Try doing what you want by ID, or by searching the server.",0.0,False,1,7209 +2021-01-03 12:30:31.743,Get embed footer from reaction message,"I want the person who used the command to be able to delete the result. I have put the user's ID in the footer of the embed, and my question is: how do I get that data from the message the user reacted to? +reaction.message.embed.footer doesn't work. I currently don't have code as I was trying to get that ID first. +Thanks in advance!","The discord.Message object has no attribute embed, but it has embeds. It returns a list of the embeds that the message has. So you can simply do: reaction.message.embeds[0].footer.",1.2,True,1,7210 +2021-01-03 19:40:29.317,How to do auto login in python with sql database?,how can I make a login form that will remember the user so that he does not have to log in next time?,"Some more information would be nice, but if you want to use a database for this then you would have to create an entry for the user information last entered. +And then on reopening the program you would check if there are any entries, and if so, load them. +But I think that writing the login information to a file on your pc would be a lot easier. So you run the steps from above, just writing to a file instead of a database. +I am not sure how you would make this secure, because you can't really encrypt the password: you would need a password or key of some type, and that password or key would be easy to find in the source code, especially in python. It would be harder to find in other compiler-based programming languages, but it is still there somewhere. And if you were to use a database you would have a password for that, but that would also lie on the hard drive if not encrypted otherwise, and then we are back where we started. +So as mentioned above, a database would be quite useless for a task like this because it doesn't improve anything and is a hassle for beginners to set up.",0.0,False,1,7211 +2021-01-04 08:15:55.150,Cloudwatch Alarm for Aurora Data Dump Automation to S3 Bucket,"I need your advice on something that I'm working on as a part of my work. +I'm working on automating the Aurora dump to an S3 bucket every midnight.
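The rough sketch promised in the Django Channels answer above; the membership lookup is a hypothetical helper standing in for the table or Redis check:

from channels.generic.websocket import AsyncWebsocketConsumer

async def is_member(group_url, user_id):
    return True  # hypothetical table/Redis lookup

class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        group_url = self.scope["url_route"]["kwargs"]["group_url"]
        user = self.scope["user"]
        # accept only authenticated users allowed for this group_url
        if user.is_authenticated and await is_member(group_url, user.id):
            await self.accept()
        else:
            await self.close()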
As a part of it, I have created an EC2 instance that generates the dump, and I have written a python script using boto3 which moves the dump to the S3 bucket every night. +I need to notify a list of developers if the data dump doesn't take place for some reason. +As of now, I'm posting a message to an SNS topic which notifies the developers if the backup doesn't happen. But I need to do this with Cloudwatch and I'm not sure how to do it. +Your help will be much appreciated. Thanks!",I have created a custom metric to which I have attached a Cloudwatch alarm and it gets triggered if there's an issue in the data backup process.,0.0,False,1,7212 +2021-01-04 20:54:14.400,Installations on WSL?,"I use Python Anaconda and Visual Studio Code for Data Science and Machine Learning projects. +I want to learn how to use Windows Subsystem for Linux, and I have seen that tools such as Conda or Git can be installed directly there, but I don't quite understand the difference between a common Python Anaconda installation and a Conda installation in WSL. +Is one better than the other? Or should I have both? How should I integrate WSL into my work with Anaconda, Git, and VS Code? What advantages does it have, or what disadvantages? +Help please, I hate not installing my tools properly and then having a mess of folders, environment variables, etc.","If you use conda, it's better to install it directly on Windows rather than in WSL. Think of WSL as a virtual machine in your current PC, but much faster than you would think. +Its most useful use would be as an alternate base for docker. You can run a whole lot of stuff with Windows integration from WSL, which includes VS Code. You can launch VS Code as if it were run from within that OS, with all native extension and app support. +You can also access the entire Windows filesystem from WSL and vice versa, so integrating Git with it won't be a bad idea",1.2,True,1,7213 +2021-01-04 23:27:42.213,discord.py get all permissions a bot has,So I am developing a Bot using discord.py and I want to get all permissions the Bot has in a specific Guild. I already have the Guild Object but I don't know how to get the Permissions the Bot has. I already looked through the documentation but couldn't find anything in that direction...,"From a Member object, like guild.me (a Member object similar to Bot.user, essentially a Member object representing your bot), you can get the permissions that member has from the guild_permissions attribute (a tiny sketch follows below).",1.2,True,1,7214
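The tiny sketch promised in that answer, reading the bot's permissions inside a command (bot setup assumed):

from discord.ext import commands

bot = commands.Bot(command_prefix="!")

@bot.command()
async def perms(ctx):
    p = ctx.guild.me.guild_permissions              # a discord.Permissions object
    granted = [name for name, value in p if value]  # Permissions iterates as (name, bool) pairs
    await ctx.send(", ".join(granted))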