Dataset columns (one value per line in each row below): Q_CreationDate (string, 23 chars), Title (string, 11 to 149 chars), Question (string, 25 to 6.53k chars), Answer (string, 15 to 5.1k chars), Score (float64, -1 to 1.2), Is_accepted (bool, 2 classes), N_answers (int64, 1 to 17), Q_Id (int64, 0 to 6.76k).
2020-03-18 13:18:18.393
'odict_items' object is not subscriptable how to deal with this?
I've tried to run this code on Jupyter notebook python 3: class CSRNet(nn.Module): def __init__(self, load_weights=False): super(CSRNet, self).__init__() self.frontend_feat = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512] self.backend_feat = [512, 512, 512,256,128,64] self.frontend = make_layers(self.frontend_feat) self.backend = make_layers(self.backend_feat,in_channels = 512,dilation = True) self.output_layer = nn.Conv2d(64, 1, kernel_size=1) if not load_weights: mod = models.vgg16(pretrained = True) self._initialize_weights() for i in range(len(self.frontend.state_dict().items())): self.frontend.state_dict().items()[i][1].data[:] = mod.state_dict().items()[i][1].data[:] it displays 'odict_items' object is not subscriptable as an error in the last line of code!!how to deal with this?
In Python 3, items() returns a view object (here an odict_items, since state_dict() is an OrderedDict), which cannot be indexed. You should convert it to a list first: list(self.frontend.state_dict().items())[i][1].data[:] = list(mod.state_dict().items())[i][1].data[:]
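A minimal sketch of that fix in context; this assumes it runs inside the asker's __init__, where self.frontend and the pretrained mod already exist:

```python
# a minimal sketch of the fix, assuming it runs inside __init__ where
# both self.frontend and mod (the pretrained VGG16) already exist
frontend_items = list(self.frontend.state_dict().items())
mod_items = list(mod.state_dict().items())
for i in range(len(frontend_items)):
    # copy the pretrained weights tensor by tensor
    frontend_items[i][1].data[:] = mod_items[i][1].data[:]
```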
0.386912
false
1
6,623
2020-03-18 14:26:59.470
Is there any way that I can insert a python file into a html page?
I am currently trying to create a website that displays a python file (that is in the same folder as the html file) on the website, but I'm not sure how to do so. So I just wanted to ask if anyone could describe the process of doing so (or if its even possible at all).
Displaying "a python file" and displaying "the output" (implied "of a python script's execution) are totally different things. For the second one, you need to configure your server to run Python code. There are many ways to do so, but the two main options are 1/ plain old cgi (slow, outdated and ugly as f..k but rather easy to setup - if your hosting provides support for it at least - and possibly ok for one single script in an otherwise static site) and 2/ a modern web framework (flask comes to mind) - much cleaner, but possibly a bit overkill for one simple script. In both cases you'll have to learn about the HTTP protocol.
0
false
1
6,624
2020-03-19 00:25:35.587
How to append data to specific column Pandas?
I have 2 dataframes: FinalFrame: Time | Monday | Tuesday | Wednesday | ... and df (Where weekday is the current day, whether it be monday tuesday etc): WEEKDAY I want to append the weekday's data to the correct column. I will need to constantly keep appending weekdays data as weeks go by. Any ideas on how to tackle this?
You can use the index of the weekdays instead of their names. For example, weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'] and the columns become Time | 0 | 1 | 2 ....
0
false
2
6,625
2020-03-19 00:25:35.587
How to append data to specific column Pandas?
I have 2 dataframes: FinalFrame: Time | Monday | Tuesday | Wednesday | ... and df (Where weekday is the current day, whether it be monday tuesday etc): WEEKDAY I want to append the weekday's data to the correct column. I will need to constantly keep appending weekdays data as weeks go by. Any ideas on how to tackle this?
One way to do it is to isolate the Series for the day you are looking at, i.e. index the frame by that weekday's column, and then write or append the new values to it (see the sketch below).
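A small hedged example of writing a day's values into the matching column; the frame contents and column names here are made up for illustration:

```python
import pandas as pd

# hypothetical stand-ins for FinalFrame and the daily df
final_frame = pd.DataFrame({"Time": ["09:00", "10:00"],
                            "Monday": [1.0, 2.0],
                            "Tuesday": [float("nan")] * 2})
df = pd.DataFrame({"WEEKDAY": [3.5, 4.0]})

weekday = "Tuesday"                              # the current day
final_frame[weekday] = df["WEEKDAY"].values      # lengths must match
print(final_frame)
```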
0
false
2
6,625
2020-03-20 17:35:30.097
Displaying data from my python script on a webpage
My case: I want to display the meal plan from my University on my own online "Dashboard". I've written my python script to scrape that data and I get the data I need (plain Text). Now I need to put it on my website but I don't know how to start. On my first searching sessions, I have found something with CGI but I have no clue how to use it:( Is there maybe an even easier way to solve my problem? Thanks
I suggest you use Django. If you don't want to use Django, you can format your output as HTML and publish that HTML page directly.
0
false
1
6,626
2020-03-21 17:11:32.527
How to read the documentation of a certain module?
I've just finished my course of Python, so now I can write my own script. So to do that I started to write a script with the module Scapy, but the problem is, the documentation of Scapy is used for the interpreter Scapy, so I don't know how to use it, find the functions, etc. I've found a few tutorials in Internet with a few examples but it's pretty hard. For example, I've found in a script the function "set_payload" to inject some code in the layer but I really don't know where he found this function. What's your suggestion for finding how a module works and how to write correctly with it? Because I don't really like to check and pick through other scripts on Internet.
If I have understood the question correctly, roughly what you are asking is how to find the best source to understand a module. If you are using a built-in Python module, the best source is the Python documentation. Scapy is not a built-in Python module, so you may have some issues with external modules like it (by external I mean the ones you need to explicitly install). For those, if the docs aren't enough, I prefer to look at some of the GitHub projects that use that module one way or the other, and most of the time it works out. If it doesn't, then I go to some blogs or third-party tutorials. There is no single right way to do it; you will have to put in the effort where it's needed.
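When the docs are thin, Python itself can list what a module exposes. A quick interactive sketch, using Scapy only as an example module (it must be installed for this to run):

```python
# quick ways to explore any installed module from the interpreter
import inspect
import scapy.all as scapy      # assumes scapy is installed

print(dir(scapy))                              # every name the module exposes
help(scapy.sniff)                              # docstring of one function
print(inspect.getsourcefile(scapy.sniff))      # where its source lives on disk
```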
1.2
true
1
6,627
2020-03-21 17:12:42.693
Efficient Way to Run Multiple Instances of the Same Discord Bot (Discord)
I have a Discord bot I use on a server with friends. The problem is some commands use web scraping to retrieve the bot response, so until the bot is finished retrieving the answer, the bot is out of commission/can't handle new commands. I want to run multiple instances of the same bot on my host server to handle this, but don't know how to tell my code "if bot 1 is busy with a command, use bot 2 to answer the command" Any help would be appreciated!
async function myFunction () {} - this should fix your problem. Having multiple instances would be possible with threads, but making the command handler asynchronous is a much easier way. (The snippet above is JavaScript; see the Python sketch below.)
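If the bot is written with discord.py, the same idea (keep one bot, make the slow command non-blocking) might look roughly like this. This is a sketch under assumptions: discord.py 1.x, and scrape() is a hypothetical stand-in for the blocking web-scraping code:

```python
# a rough sketch assuming discord.py 1.x; scrape() is a hypothetical blocking helper
from discord.ext import commands

bot = commands.Bot(command_prefix="!")

def scrape(query):
    # the existing blocking web-scraping code would go here
    return f"result for {query}"

@bot.command()
async def lookup(ctx, *, query):
    # run the blocking call in a worker thread so the bot keeps handling commands
    result = await bot.loop.run_in_executor(None, scrape, query)
    await ctx.send(result)

# bot.run(TOKEN)   # token omitted
```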
-0.201295
false
1
6,628
2020-03-21 18:13:00.810
Efficient unions and intersections in 2D plane when y_min = 0
I've got the following problem: Given a series of rectangles defined by {x_min, height and x_max}, I want to efficiently compute their intersection and union, creating a new series. For instance, if I got S1 = [{1,3,3}] and S2 = [{2,3,5}], the union would result in S3 = [{1,3,5}] and intersection in S3 = [{2,3,3}]. This would be a fairly simple case, but when S1 and S2 are lists of rectangles (unordered) it gets a little bit tricky. My idea is trying some divide and conquer strategy, like using a modified mergesort, and in the merge phase trying to also merge those buildings. But I'm a little bit unsure about how to express this. Basically I can't write down how to compare two rectangles with those coordinates and decide if they have to be in S3, or if I have to create a new one (for the intersection). For the union I think the idea has to be fairly similar, but the negation (i.e. if they don't intersect). This has to be O(nlogn) for sure; given this is in a 2D plane I surely have to sort it. Currently my first approach is O(n^2). Any help on how to reduce the complexity? PD: The implementation I'm doing is in Python
I tried to write the whole thing out in pseudo-code, and found that it was too pseudo-y to be useful and too code-y to be readable. Here's the basic idea: You can sort each of your input lists in O(n*log(n)). Because we assume there's no overlap within each series, we can now replace each of those lists with lists of the form {start, height}. We can drop the "end" attribute by having a height-0 element start where the last element should have ended. (or not, if two elements were already abutting.) Now you can walk/recurse/pop your way through both lists in a single pass, building a new list of {start, height} outputs as you go. I see no reason you couldn't be building both your union and intersection lists at the same time. Cleanup (conversion to a minimal representation in the original format) will be another pass, but still O(n). So the problem is O(n*log(n)), and could be O(n) if you could finagle a way to get your input pre-sorted. A sketch of this merge is shown below.
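Here is a hedged sketch of that single-pass merge in Python, assuming (as the answer does) that each input series has no internal overlaps. Union uses the max of the two heights at every x, intersection uses the min:

```python
# rectangles are (x_min, height, x_max) tuples; each input list has no internal overlap
def to_events(rects):
    events = []                              # (x, height-from-x-onward)
    for x_min, h, x_max in sorted(rects):
        events.append((x_min, h))
        events.append((x_max, 0))            # height drops to 0 at the end
    return events

def merge(rects_a, rects_b, combine):
    ea, eb = to_events(rects_a), to_events(rects_b)
    ia = ib = 0
    ha = hb = 0
    out, last = [], None                     # breakpoints (x, combined height)
    while ia < len(ea) or ib < len(eb):
        if ib == len(eb) or (ia < len(ea) and ea[ia][0] <= eb[ib][0]):
            x, ha = ea[ia]; ia += 1
        else:
            x, hb = eb[ib]; ib += 1
        while ia < len(ea) and ea[ia][0] == x:   # absorb remaining events at the same x
            ha = ea[ia][1]; ia += 1
        while ib < len(eb) and eb[ib][0] == x:
            hb = eb[ib][1]; ib += 1
        h = combine(ha, hb)
        if h != last:
            out.append((x, h)); last = h
    # convert breakpoints back to (x_min, height, x_max) rectangles
    return [(x, h, nx) for (x, h), (nx, _) in zip(out, out[1:]) if h > 0]

s1, s2 = [(1, 3, 3)], [(2, 3, 5)]
print(merge(s1, s2, max))   # union        -> [(1, 3, 5)]
print(merge(s1, s2, min))   # intersection -> [(2, 3, 3)]
```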
0
false
1
6,629
2020-03-22 20:43:39.077
Python Methods (where to find additional resources)
I am learning Python by reading books, and I have a question about methods. Basically, all of the books that I am reading touch on methods and act like they just come out of thin air. For example, where can I find a list of all methods that can be applied? I can't find any documentation that lists all methods. The books are using things like .uppercase and .lowercase but is not saying where to find other methods to use, or how to see which ones are available and where. I would just like to know what I am missing. Thanks. Do I need to dig into Python documentation to find all of the methods?
There are a lot of functions in Python's modules. If you want to learn where to find them, start from what you want to do. For example, there is a random module in which you can find functions like random.randint.
0
false
1
6,630
2020-03-23 18:58:48.680
How to reach and add new row to web server database which made in Django framework?
I am trying to create a web server with the Django framework and I am struggling with outer-world access to the server. By "outer world" I mean a python program that is created outside of the Django framework and only connects to it from a local PC which has just an Internet connection. I can't figure out how to do this. I am building this project on my localhost, so I create the "outer world python program" outside of the project folder. I think this simulation is proper. I am very new to this web server/Django field, so maybe I am missing an essential part. If that is the case here I'm sorry, but I need an answer and I think it is possible to do. Thanks in advance...
Django-generated fields in the database are just standard fields. The tables are named like 'applicationname'_'modelname', and you are free to make requests to the database directly, without Django. If you want to do it through Django, your outer program can request a web page from your web server and deal with the response (you may want to take a look at REST frameworks).
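For the second option, the "outer world" program can simply be an HTTP client. A minimal hedged sketch; the URL and form fields are hypothetical, and it assumes the Django dev server is running locally:

```python
# a minimal sketch of the outer program, assuming the Django site runs on the
# local dev server and exposes a view at /api/add-row/ (hypothetical endpoint)
import requests

resp = requests.post("http://127.0.0.1:8000/api/add-row/",
                     data={"name": "example", "value": 42})
print(resp.status_code, resp.text)
```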
1.2
true
1
6,631
2020-03-23 22:03:37.527
Failed to load dynlib/dll (PyInstaller)
After using PyInstaller to convert the py file to an exe file, the exe throws the error: "Failed to load dynlib/dll". Here is the error line: main.PyInstallerImportError: Failed to load dynlib/dll 'C:\Users\YANGYI~1\AppData\Local\Temp\_MEI215362\sklearn\.libs\vcomp140.dll'. Most probably this dynlib/dll was not found when the application was frozen. [1772] Failed to execute script 2 After getting this, I checked the path and did not find a folder called "_MEI215362" in my Temp folder (I have already made all files visible). Also, I re-downloaded the VC and re-converted the file to exe, but it didn't work. Any ideas how to fix the issue? Thank you in advance!
I also encountered a similar problem to Martin's. In my case, however, it was ANSI64.dll that was missing... So, I simply put that particular dll file into the dist directory. Lastly, I keep the exe and related raw data files (e.g. xlsx, csv) inside the "dist" folder and run the compiled program from there. It works well for me.
0
false
1
6,632
2020-03-23 23:50:42.940
How to delete drawn objects with OpenCV in Python?
How to delete drawn objects with OpenCV in Python ? I draw objects on click (cv2.rectangle, cv2.circle) ... Then I would like to delete only drawn objects. I know that i need to make a layer in behind of the real image and to draw on another one. But I do not know how to implement this in code.
Have a method (or similar) that, when executed, replaces the image with the drawings on it with an original, unaltered copy. It's best to create a clone of your original image and draw only on the clone (see the sketch below).
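In code, the layering idea boils down to keeping a clean copy and drawing only on a working copy. A short sketch (the file name is hypothetical):

```python
import cv2

original = cv2.imread("image.png")   # hypothetical input
canvas = original.copy()             # draw only on this working copy

cv2.rectangle(canvas, (50, 50), (150, 150), (0, 255, 0), 2)
cv2.circle(canvas, (200, 200), 30, (0, 0, 255), 2)

# "delete" all drawn objects by restoring the untouched original
canvas = original.copy()
```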
0
false
1
6,633
2020-03-24 23:04:42.153
Explain the necessity of database drivers, libraries, dlls in a python application that interacts with a remote database?
I have written a python script that connects to a remote Oracle database and inserts some data into its tables. In the process I had to first import cx_Oracle package and install Oracle InstantClient on my local computer for the script to execute properly. What I don't understand is why did I have to install InstantClient? I tried to read through the docs but I believe I am missing some fundamental understanding of how databases work and communicate. Why do I need all the external drivers, dlls, libraries for a python script to be able to communicate with a remote db? I believe this makes packaging and distribution of a python executable much harder. Also what is InstantClient anyway? Is it a driver? What is a driver? Is it simply a collection of "programs" that know how to communicate with Oracle databases? If so, why couldn't that be accomplished with a simple import of a python package? This may sound like I did not do my own research beforehand, but I'm sorry, I tried, and like I said, I believe I am missing some underlying fundamental knowledge.
We have a collection of drivers that allow you to communicate with an Oracle Database. Most of these are 'wrappers' of a sort that piggyback on the Oracle Client. Compiled C binaries that use something we call 'Oracle Net' (not to be confused with .NET) to work with Oracle. So our python, php, perl, odbc, etc drivers are small programs written such that they can be used to take advantage of the Oracle Client on your system. The Oracle Client is much more than a driver. It can include user interfaces such as SQL*Plus, SQL*Loader, etc. Or it can be JUST a set of drivers - it depends on which exact package you choose to download and install. And speaking of 'install' - if you grab the Instant Client, there's nothing to install. You just unzip it and update your environment path bits appropriately so the drivers can be loaded.
1.2
true
1
6,634
2020-03-25 10:26:21.650
Module not appearing in jupyter
I'm having issues with importing my modules into jupyter. I did the following: Create virtual env Activate it (everything below is in the context of my venv) install yahoo finance module: pip install yfinance open python console and import it to test if working > OK! open jupyter notebook import yfinance throws ModuleNotFoundError: No module named 'yfinance' Any suggestions on how to fix this?
Try this in a Jupyter notebook cell and then run it: !pip install yfinance
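Because the notebook kernel may not be the same interpreter as the shell's pip (which is likely what happened here with the virtual env), a safer variant installs into the kernel's own environment:

```python
# run in a notebook cell: install into the exact environment the kernel uses
import sys
!{sys.executable} -m pip install yfinance

# or, in recent IPython/Jupyter, the pip magic does the same thing
%pip install yfinance
```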
0
false
1
6,635
2020-03-26 04:29:58.543
How do I create a template to store my HTML file when creating a web app with Python's Flask Framework in the PyCharm IDE?
I am trying to do a tutorial through FreeCodeCamp using Python's Flask Framework to create a web app in PyCharm and I am stuck on a section where it says 'Flask looks for HTML files in a folder called template. You need to create a template folder and put all your HTML files in there.' I am confused on how to make this template folder; is it just a regular folder or are there steps to create it and drag/drop the HTML files to it? Any tips or info would be of great help!!!
As the tutorial asks, you have to create a folder called "templates" (not "template"). In PyCharm you can do this by right-clicking in the left panel and selecting New > Directory. In this folder you can then create your template files (right-click on the newly created folder and select New > File, then enter the name of your file with the .html extension). By default, Flask looks in the "templates" folder to find your template when you call render_template("index.html"). Notice that you don't pass the full path of your file as the first parameter but just the path relative to the "templates" folder.
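A minimal sketch of the layout and the call the tutorial expects (file names are assumed):

```python
# expected layout (names assumed):
#   project/
#   |-- app.py
#   `-- templates/
#       `-- index.html
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    # the path is relative to the templates/ folder, not a full filesystem path
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)
```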
1.2
true
1
6,636
2020-03-26 08:51:13.303
How to implement dct when the input image size is not a multiple of 8?
I learned that if one needs to implement dct on an image of size (H, W), one needs a matrix A that is of size (8, 8), and one needs to use this A to compute with a (8, 8) region F on the image. That means if the image array is m, one needs to compute m[:8, :8] first, and then m[8:16, 8:16], and so on. How could I implement this dct when the input image size is not a multiple of 8? For example, when the image size is (12, 12), which cannot hold two (8, 8) windows, how could I implement dct? I tried opencv and found that opencv can cope with this scenario, but I do not know how it implemented it.
The 8x8 is called a "Minimum Coded Unit" (MCU) in the specification, though video enthusiasts call them "macroblocks". Poorer implementations will pad to fill with zeroes - which can cause nasty effects. Better implementations pad to fill by repeating the previous pixel from the left if padding to the right, or from above if padding downwards. Note that only the right side and bottom of an image can be padded.
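Padding by repeating the edge pixels, as described, is one line with NumPy. A sketch for a single-channel image:

```python
import numpy as np

def pad_to_multiple_of_8(img):
    # img: 2-D grayscale array; pad right/bottom by repeating the edge pixels
    h, w = img.shape
    return np.pad(img, ((0, (-h) % 8), (0, (-w) % 8)), mode="edge")

print(pad_to_multiple_of_8(np.zeros((12, 12))).shape)   # (16, 16)
```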
1.2
true
1
6,637
2020-03-26 10:29:19.013
Re-assign backslash to three dots in Python
Is it possible in Python to re-assign the backslash character to something else, like to the three dots? I hate the backslash character. It looks ugly. There’s a long line in my code I really need to use the \ character. But I’d rather use the ... character. I just need a simple yes/no answer. Is it possible? And in the case of yes, tell me how to re-assign that ugly thing.
Python syntactically uses the backslash as the escape character (as do other languages such as Java and C) and, at the end of a line, as the line-continuation character. As far as I am aware this cannot be overridden unless you want to change the language itself.
0
false
1
6,638
2020-03-26 16:45:21.790
How do I receive a variable from python flask to JavaScript?
I've seen how to make a POST request from JavaScript to get data from the server, but how would I do this flipped? I want to trigger a function in the Flask server that will then dynamically update the variable on the JavaScript side to display. Is there a way of doing this in an efficient manner that does not involve periodic polling? I'm using an API and I only want the API to be called once to update.
There are three basic options for you: Polling - With this method, you would periodically send a request to the server (maybe every 5 seconds for example) and ask for an update. The upside is that it is easy to implement. The downside is that many requests will be unnecessary. It sounds like this isn't a great option for you. Long Polling - This method means you would open a request up with the server and leave the request open for a long period of time. When the server gets new information it will send a response and close the request - after which the client will immediately open up a new "long poll" request. This eliminates some of the unnecessary requests with regular polling, but it is a bit of a hack as HTTP was meant for a reasonably short request response cycle. Some PaaS providers only allow a 30 second window for this to occur for example. Web Sockets - This is somewhat harder to setup, but ultimately is the best solution for real time server to client (and vice versa) communication. A socket connection is opened between the server and client and data is passed back and forth whenever either party would like to do so. Javascript has full web socket support now and Flask has some extensions that can help you get this working. There are even great third party managed solutions like Pusher.com that can give you a working concept very quickly.
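For the third option, here is a hedged server-side sketch using Flask-SocketIO (one of the extensions mentioned); the event name is made up, and the JavaScript client would listen for the same event with socket.on:

```python
# a rough server-side sketch assuming the flask-socketio extension is installed
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def push_update(new_value):
    # call this whenever the API result changes; every connected client receives it
    socketio.emit("data_update", {"value": new_value})

if __name__ == "__main__":
    socketio.run(app)
```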
1.2
true
1
6,639
2020-03-26 20:40:16.497
Display result (image) of computation in website
I have a python script that generates a heightmap depending on parameters, that will be given in HTML forms. How do I display the resulting image on a website? I suppose that the form submit button will hit an endpoint with the given parameters and the script that computes the heightmap runs then, but how do I get the resulting image and display it in the website? Also, the computation takes a few seconds, so I suppose I need some type of task queue to not make the server hang in the meanwhile. Tell me if I'm wrong. It's a bit of a general question because I myself don't know the specifics of what I need to use to accomplish this. I'm using Flask in the backend but it's a framework-agnostic question.
Save the image to a file. Return a webpage that contains an <IMG SRC=...> element. The SRC should be a URL pointing at the file. For example, suppose you save the image to a file called "temp2.png" in a subdirectory called "scratch" under your document root. Then the IMG element would be <IMG SRC="/scratch/temp2.png"> . If you create and save the image in the same program that generates the webpage that refers to it, your server won't return the page until the image has been saved. If that only takes a few seconds, the server is unlikely to hang. Many applications would take that long to calculate a result, so the people who coded the server would make sure it can handle such delays. I've done this under Apache, Tomcat, and GoServe (an OS/2 server), and never had a problem. This method does have the disadvantage that you'll need to arrange for each temporary file to be deleted after an expiry period such as 12 hours or whenever you think the user won't need it any more. On the webpage you return, if the image is something serious that the user might want to keep, you could warn them that this will happen. They can always download it. To delete the old files, write a script that checks when they were last updated, compares that with the current date and time, and deletes those files that are older than your expiry period. You'll need a way to automatically run it repeatedly. On Unix systems, if you have shell access, the "cron" command is one way to do this. Googling "cron job to delete files older than 1 hour on web server" finds a lot of discussion of methods. Be very careful when coding any automatic-deletion script, and test it thoroughly to make sure it deletes the right files! If you make your expiry period a variable, you can set it to e.g. 1 minute or 5 minutes when testing, so that you don't need to wait for ages. There are ways to stream your image back without saving it to a file, but what I'm recommending is (apart possibly from the file deleter) easy to code and debug. I've used it in many different projects.
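A compressed sketch of the save-then-reference approach in Flask; generate_heightmap and the scratch folder are assumptions for illustration, not the asker's code:

```python
# a rough sketch: save the generated image under static/, then return an <img> tag
import os
import uuid
from flask import Flask, request, url_for
from PIL import Image

app = Flask(__name__)
os.makedirs(os.path.join(app.static_folder, "scratch"), exist_ok=True)

def generate_heightmap(params):        # hypothetical stand-in for the slow computation
    return Image.new("L", (64, 64))

@app.route("/heightmap", methods=["POST"])
def heightmap():
    img = generate_heightmap(request.form)
    name = f"{uuid.uuid4().hex}.png"
    img.save(os.path.join(app.static_folder, "scratch", name))
    return f'<img src="{url_for("static", filename="scratch/" + name)}">'
```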
1.2
true
1
6,640
2020-03-26 22:11:19.490
How to create a dynamic website using python connected to a database
I would like to create a website where I show some text but mainly dynamic data in tables and plots. Let us assume that the user can choose whether he wants to see the DAX or the DOW JONES prices for a specific timeframe. I guess these data I have to store in a database. As I am not experienced with creating websites, I have no idea what the most reasonable setup for this website would be. Would it be reasonable for this example to choose a database where every row corresponds of 9 fields, where the first column is the timestamp (lets say data for every minute), the next four columns correspond to the high, low, open, close price of DAX for this timestamp and columns 5 to 9 correspond to high, low, open, close price for DOW JONES? Could this be scaled to hundreds of columns with a reasonable speed of the database? Is this an efficient implementation? When this website is online, you can choose whether you want to see DAX or DOW JONES prices for a specific timeframe. The corresponding data would be chosen via python from the database and plotted in the graph. Is this the general idea how this will be implemented? To get the data, I can run another python script on the webserver to dynamically collect the desired data and write them in the database? As a total beginner with webhosting (is this even the right term?) it is very hard for me to ask precise questions. I would be happy if I could find out whats the general structure I need to create the website, the database and the connection between both. I was thinking about amazon web services.
You could use a database, but that doesn't seem necessary for what you described. It would be reasonable to build the database as you described. Look into SQL for doing so. You can download a package XAMPP that will give you pretty much everything you need for that. This is easily scalable to hundreds of thousands of entries - that's what databases are for. If your example of stock prices is actually what you are trying to show, however, this is completely unnecessary as there are already plenty of databases that have this data and will allow you to query them. What you would really want in this scenario is an API. Alpha Vantage is a free service that will serve you data on stock prices, and has plenty of documentation to help you get it set up with python. I would structure the project like this: Use the python library Flask to set up the back end. In addition to instantiating the Flask app, instantiate the Alpha Vantage class as well (you will need to pip install both of these). In one of the routes you declare under Flask, use the Alpha Vantage api to get the data you need and simply display it to the screen. If I am assuming you are a complete beginner, one or more of those steps may not make sense to you, in which case take them one at a time. Start by learning how to build a basic Flask app, then look at the API. YouTube is your friend for both of these things.
0
false
1
6,641
2020-03-27 01:17:46.150
Python3: Does the built-in function "map" have a bug?
The following I had with Python 3.8.1 (on macOS Mojave, 10.14.6, as well as Python 3.7 (or some older) on some other platforms). I'm new to computing and don't know how to request an improvement of a language, but I think I've found a strange behaviour of the built-in function map. As the code next(iter(())) raises StopIteration, I expected to get StopIteration from the following code: tuple(map(next, [iter(())])) To my surprise, this silently returned the tuple ()! So it appears the unpacking of the map object stopped when StopIteration came from next hitting the "empty" iterator returned by iter(()). However, I don't think the exception was handled right, as StopIteration was not raised before the "empty" iterator was picked from the list (to be hit by next). Did I understand the behaviour correctly? Is this behaviour somehow intended? Will this be changed in a near future? Or how can I get it? Edit: The behaviour is similar if I unpack the map object in different ways, such as by list, for for-loop, unpacking within a list, unpacking for function arguments, by set, dict. So I believe it's not tuple but map that's wrong. Edit: Actually, in Python 2 (2.7.10), the "same" code raises StopIteration. I think this is the desirable result (except that map in this case does not return an iterator).
Did I understand the behavior correctly? Not quite. map takes its first argument, a function, and applies it to every item in some iterable, its second argument, until a StopIteration exception is raised. This is an internal exception used to signal that the end of an iterator has been reached. If you're manually raising StopIteration (here via next on an exhausted iterator), it is treated as that end-of-iteration signal, so iteration stops before processing any of the (nonexistent) remaining objects inside the list.
0.135221
false
1
6,642
2020-03-27 03:41:18.327
SocketIO + Flask Detect Disconnect
I had a different question here, but realized it simplifies to this: How do you detect when a client disconnects (closes their page or clicks a link) from a page (in other words, the socket connection closes)? I want to make a chat app with an updating user list, and I’m using Flask on Python. When the user connects, the browser sends a socket.emit() with an event and username passed in order to tell the server a new user exists, after which the server will message all clients with socket.emit(), so that all clients will append this new user to their user list. However, I want the clients to also send a message containing their username to the server on Disconnect. I couldn’t figure out how to get the triggers right. Note: I’m just using a simple html file with script tags for the page, I’m not sure how to add a JS file to go along with the page, though I can figure it out if it’s necessary for this.
Figured it out. socket.on('disconnect') did turn out to be right, however by default it pings each user only once a minute or so, meaning it took a long time to see the event.
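For reference, the handler itself is small in Flask-SocketIO; a sketch with the usual setup included:

```python
# a short sketch assuming the flask-socketio extension
from flask import Flask, request
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("disconnect")
def handle_disconnect():
    # request.sid identifies the client that just left; drop it from the user list
    print("client disconnected:", request.sid)
```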
1.2
true
1
6,643
2020-03-27 05:09:55.327
Is it possible to create labels.txt manually?
I recently convert my model to tensorflow lite but I only got the .tflite file and not a labels.txt for my Android project. So is it possible to create my own labels.txt using the classes that I used to classify? If not, then how to generate labels.txt?
You should be able to generate and use your own labels.txt. The file needs to include the label names in the order you provided them in training, with one name per line.
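Writing the file by hand (or from a list of class names) is trivial; a tiny sketch with made-up class names:

```python
# class names must be in the same order used during training (names are examples)
class_names = ["cat", "dog", "bird"]
with open("labels.txt", "w") as f:
    f.write("\n".join(class_names))
```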
1.2
true
1
6,644
2020-03-27 17:12:09.890
Python 3.7 pip install - The system cannot find the path specified
I am using Python 3.7 (Activestate) on a windows 10 laptop. All works well until I try to use pip to install a package (any package). From command prompt, when entering "pip install anyPackage" I get an error - "The system cannot find the path specified." no other explanation or detail. Python is installed in "C:\Python37" and this location is listed in the Control Panel > System > Environment Variables > User Variables. In the Environment Variables > System Variables I have: C:\Python37\ C:\Python37\DLLs\ C:\Python37\Script\ C:\Python37\Tools\ C:\Python37\Tools\ninja\ Any suggestions on how to get rid of that error, and make pip work? Many thanks to all
Short: make sure that pip.exe and python.exe are running from the same location. If they aren't (perhaps due to the PATH environment variable), just delete the one that you don't need. Longer: when running pip install, check where it tries to get python from. For instance, on my own computer, it was: pip install Fatal error in launcher: Unable to create process using '"c:\program files\python39\python.exe" "C:\Program Files\Python39\Scripts\pip.exe" ': The system cannot find the file specified. Then I ran: 'where python.exe' // got several paths. 'where pip.exe' // got different paths. Then I removed the one that I don't use. Voila.
0.995055
false
1
6,645
2020-03-28 04:17:15.583
Change which python im using in terminal MacOs Catalina
First of all, I'm really new at Machine Learning and Anaconda. Recently I've installed Anaconda for Machine Learning, but now when I try to run my old scripts from my terminal, all my packages are missing, even pip, numpy and pygame. I don't know how to change back to my old python directory; I really don't know how this works, please help me. I'm on macOS Catalina
First of all, Python 3 is integrated in macOS X Catalina, just type python3. For pip, you can use pip3. Personally, I would prefer native over conda when using mac. Next, you need to get all the modules up from your previous machine by pip freeze > requirements.txt or pip3 freeze > requirements.txt If you have the list already, either it's from your previous machine or from a GitHub project repo, just install it via pip3 in your terminal: pip3 install -r requirements.txt If not, you have to manually install via pip3, for example: pip3 install pygame etc. After all dependencies are done installed, just run your .py file as usual. Last, but not least, welcome to the macOS X family!
0.545705
false
1
6,646
2020-03-28 05:07:30.247
How to locate module inside PyCharm?
I am a beginner in python 3. I want to locate where the time module is in PyCharm to study it's aspects/functions further. I can't seem to find it in the library. Can someone show me an example on how to find it ? I know there are commands to find files, but I am not advanced enough to use them.
I think you may have a misconception - the time module is part of your Python installation, which PyCharm makes use of when you run files. Depending on your setup, you may be able to view the Python files under "external libraries" in your project viewer, but you could also view them from your file system, wherever Python is installed.
0
false
1
6,647
2020-03-28 05:40:42.997
How to emit different messages to different users based on certain criteria
I am building a chat application using flask socketio and I want to send to a specific singular client and I'm wondering how to go about this. I get that emit has broadcast and include_self arguments to send to all and avoid sending oneself, but how exactly would I go about maybe emitting to a single sid? I've built this application using standard TCP/UDP socket connection where upon client connecting, there socket info was stored in a dictionary mapped to their user object with attributes that determined what would be sent and when I wanted to emit something to the clients I would iterate through this and be able to control what was being sent. I'm hoping some mastermind could help me figure out how to do this in flask socket io
I ended up figuring it out. Using Flask's request object, you can obtain the user's sid with request.sid, which can be stored and later emitted to by passing it as the room parameter: emit(..., room=user_sid).
0.201295
false
1
6,648
2020-03-28 11:38:23.503
How to populate module internal progress status to another module?
let us say I have a python 3.7+ module/script A which does extensive computations. Furthermore, module A is being used inside another module B, where module A is fully encapsulated and module B doesn't know a lot of module's A internal implementation, just that it somehow outputs the progress of its computation. Consider me as the responsible person (person A) for module A, and anyone else (person B) that doesn't know me, is writing module B. So person A is writing basically an open library. What would be the best way of module A to output its progress? I guess it's a design decision. Would a getter in module A make sense so that module B has to always call this getter (maybe in a loop) in order to retrieve the progress of A? Would it possible to somehow use a callback function which is implemented in module A in such a way that this function is called every time the progress updates? So that this callback returns the current progress of A. Is there maybe any other approach to this that could be used? Pointing me towards already existing solutions would be very helpful!
Essentially module B wants to observe module A as it goes through extensive computation steps, and it is up to module A to decide how to compute progress and share this with module B. Module B can't compute progress as it doesn't know the details of the computation. So it is a good use of the observer pattern: module A keeps notifying B about its progress. The form of the progress update is also important. It can be in terms of a percentage, or "step 5 of 10", or time. That choice will actually define the notification payload structure with which module A notifies module B.
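The callback variant is often the lightest way to implement this observer idea; a minimal sketch:

```python
# a minimal sketch of the callback/observer idea: module A accepts a callable
# and invokes it as work progresses; module B decides what to do with the update
def long_computation(on_progress):
    total_steps = 10
    for step in range(1, total_steps + 1):
        pass                                   # the expensive work goes here
        on_progress(step, total_steps)         # notify the observer

# module B side: print a "step i of n" style update
long_computation(lambda step, total: print(f"step {step} of {total}"))
```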
0
false
1
6,649
2020-03-28 15:31:06.740
Where does kivy.storage.jsonstore saves its files?
I have a kivy app, where I use JsonStorage. Where does kivy save the json files, so how can I find it?
I just found out the json file is on the same level as the kivy_venv folder
1.2
true
1
6,650
2020-03-30 13:27:37.467
Python wait for request to be processed by queue and continue processing based on response
I have the following setup: One thread which runs a directory crawler and parses documents Another thread which processes database requests it gets in a queue - there are two basic database requests that come through - mark document processed (write operation) and is document already processed (select operation) I understand that an sqlite connection object cannot be shared across threads, so the connection is maintained in the database thread. I am new to threading though and in my parser thread I want to check first if a document has been processed which means a database call, but obviously cannot do this call directly and have to send the request to the database thread which is fine. However, where I am stuck is I am not sure how to make the parser thread wait for the result of the "has document been processed" request in the database thread. Is this where a threading event would come in? Thanks in advance for your help!
Thanks to stovfl, I used a threading Event to realise this (a rough sketch is below). Thanks again!
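For readers hitting the same problem, a rough sketch of the request/response pattern with a threading.Event; the queue and field names are made up for illustration:

```python
import queue
import threading

db_queue = queue.Queue()

def is_document_processed(doc_id):          # called from the parser thread
    req = {"op": "is_processed", "doc_id": doc_id,
           "done": threading.Event(), "result": None}
    db_queue.put(req)
    req["done"].wait()                      # parser thread blocks until the DB thread answers
    return req["result"]

# inside the database thread's loop (which owns the sqlite connection):
#   req = db_queue.get()
#   req["result"] = run_query(req)          # hypothetical helper doing the SELECT
#   req["done"].set()
```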
0
false
1
6,651
2020-03-30 18:29:23.523
How to access a python virtual environment when the command prompt is accidentally closed?
I opened a virtual environment and accidentally closed the command prompt window in Windows. I wanted to delete the virtual environment folder, but when I tried, it says a program is still running which uses the files. So how do I get back to the virtual environment, without opening a new one?
Just kill the daemon process from the Ctrl+Alt+Del (Task Manager) interface. Then you can delete the folder.
0
false
1
6,652
2020-03-30 19:49:43.310
Recommended python scientific workflow management tool that defines dependency completeness on parameter state rather than time?
It's past time for me to move from my custom scientific workflow management (python) to some group effort. In brief, my workflow involves long running (days) processes with a large number of shared parameters. As a dependency graph, nodes are tasks that produce output or do some other work. That seems fairly universal in workflow tools. However, key to my needs is that each task is defined by the parameters it requires. Tasks are instantiated with respect to the state of those parameters and all parameters of its dependencies. Thus if a task has completed its job according to a given parameter state, it is complete and not rerun. This parameter state is NOT the global parameter state but only what is relevant to that part of the DAG. This reliance on parameter state rather than time completed appears to be the essential difference between my needs and existing tools (at least what I have gathered from a quick look at Luigi and Airflow). Time completed might be one such parameter, but in general it is not the time that determines a (re)run of the DAG, but whether the parameter state is congruent with the parameter state of the calling task. There are non-trivial issues (to me) with 'parameter explosion' and the relationship to parameter state and the DAG, but those are not my question here. My question -- which existing python tool would more readily allow defining 'complete' with respect to this parameter state? It's been suggested that Luigi is compatible with my needs by writing a custom complete method that would compare the metadata of built data ('targets') with the needed parameter state. How about Airflow? I don't see any mention of this issue but have only briefly perused the docs. Since adding this functionality is a significant effort that takes away from my 'scientific' work, I would like to start out with the better tool. Airflow definitely has momentum but my needs may be too far from its purpose. Defining the complete parameter state is needed for two reasons -- 1) with complex, long running tasks, I can't just re-run the DAG every time I change some parameter in the very large global parameter state, and 2) I need to know how the intermediate and final results have been produced for scientific and data integrity reasons.
I looked further into Luigi and Airflow and as far as I could discern neither of these is suitable for modification for my needs. The primary incompatibility is that these tools are fundamentally based on predetermined DAGs/workflows. My existing framework operates on instantiated and fully specified DAGs that are discovered at run-time rather than concisely described externally -- necessary because knowing whether each task is complete, for a given request, is dependent on many combinations of parameter values that define the output of that task and the utilized output of all upstream tasks. By instantiated, I mean the 'intermediate' results of individual runs each described by the full parameter state (variable values) necessary to reproduce (withstanding any stochastic element) identical output from that task. So a 'Scheduler' that operates on a DAG ahead of time is not useful. In general, most existing workflow frameworks, at least in python, that I've glanced at appear more to be designed to automate many relatively simple tasks in an easily scalable and robust manner with parallelization, with little emphasis put on the incremental building up of more complex analyses with results that must be reused when possible designed to link complex and expensive computational tasks the output of which may likely in turn be used as input for an additional unforeseen analysis. I just discovered the 'Prefect' workflow this morning, and am intrigued to learn more -- at least it is clearly well funded ;-). My initial sense is that it may be less reliant on pre-scheduling and thus more fluid and more readily adapted to my needs, but that's just a hunch. In many ways some of my more complex 'single' tasks might be well suited to wrap an entire Prefect Flow if they played nicely together. It seems my needs are on the far end of the spectrum of deep complicated DAGs (I will not try to write mine out!) with never ending downstream additions. I'm going to look into Prefect and Luigi more closely and see what I can borrow to make my framework more robust and less baroque. Or maybe I can add a layer of full data description to Prefect... UPDATE -- discussing with Prefect folks, clear that I need to start with the underlying Dask and see if it is flexible enough -- perhaps using Dask delayed or futures. Clearly Dask is extraordinary. Graphchain built on top of Dask is a move in the right direction by facilitating permanent storage of 'intermediate' output computed over a dependency 'chain' identified by hash of code base and parameters. Pretty close to what I need, though with more explicit handling of those parameters that deterministically define the outputs.
0.386912
false
1
6,653
2020-04-01 00:12:57.697
Is it possible to "customize" python?
Can I change the core functionality of Python, for example, rewrite it to use say("Hello world") instead of print("Hello world")? If this is possible, how can this be done?
Yes, you can: just write say = print and then call say("hello").
0
false
1
6,654
2020-04-01 14:28:25.297
Python Arcsin Arccos radian and degree
I am working on wind power, and u and v are the zonal and meridional wind components (I have the values of these 2 vectors). The wind speed is calculated by V = np.sqrt(u**2 + v**2). Wind direction is given by α between 0 and 360 degrees, and I know these relations hold: -u / V = sin(abs(α)) and -v / V = cos(abs(α)). In python I am using np.arccos and np.arcsin to try to find α between 0 and 360 with the 2 equations above. For the first one, it returns radians, so I convert with np.rad2deg(...), but it gives me a value between 0 and 180 degrees; for the second one, I also try to convert but it returns a value between 0 and 90. Does anyone know how to code it? I am lost :(
The underlying problem is mathematics: cos(-x) == cos(x), so the function acos only takes values in the [0, pi] interval, and for equivalent reasons asin only takes values in the [-pi/2, pi/2] interval. But trigonometric library designers know about that, and provide a special function (atan2) which uses both coordinates (and not a ratio) to give a value in the [-pi, pi] interval. That being said, be careful when processing wind values: a 360 wind is a wind coming from the North, and 90 is a wind coming from the East, which is not the way mathematicians count angles...
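With NumPy that means using np.arctan2 on both components at once. A short sketch consistent with the relations in the question (direction measured as "coming from", 0 = North, 90 = East); the sample values are made up:

```python
import numpy as np

# hypothetical samples: wind from the North, from the East, and a south-westerly
u = np.array([0.0, -5.0, 3.0])   # zonal (eastward positive)
v = np.array([-5.0, 0.0, 3.0])   # meridional (northward positive)

speed = np.sqrt(u**2 + v**2)
direction = np.degrees(np.arctan2(-u, -v)) % 360
print(direction)   # [  0.  90. 225.]
```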
0.386912
false
1
6,655
2020-04-01 19:04:15.633
How to play ogg files on python
I looked everywhere, I don't find a way to properly play Ogg files, they all play wav! My question is: Does somebody knows how to play Ogg files in python? If somebody knows how I'll be very thankful :) (I am on windows)
The easiest way is probably to start a media player application to play the file using subprocess.Popen. If you already have a media player associated with Ogg files installed, using the start command should work.
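A minimal Windows-only sketch of that approach (it assumes a player is already associated with .ogg files, and the file name is hypothetical):

```python
# Windows-only: hand the file to whatever player is associated with .ogg
import subprocess

subprocess.Popen(["cmd", "/c", "start", "", "song.ogg"])
```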
0
false
1
6,656
2020-04-02 14:27:00.470
Restore file tabs above main editor in Spyder
I was modifying the layout in Spyder 4.1.1 and somehow lost the filename tabs (names of opened .py files) that used to appear above the central editor window. These were the tabs that had the 'X' button in them that allowed you to quickly close them. I've been toggling options in the View and Tools menus but can't seem to get it back. Anyone know how to restore this?
Try this: from the menu, View --> Panes --> Editor. Clicking on Editor and putting a tick there should bring it back, if I understand your question properly.
0.201295
false
2
6,657
2020-04-02 14:27:00.470
Restore file tabs above main editor in Spyder
I was modifying the layout in Spyder 4.1.1 and somehow lost the filename tabs (names of opened .py files) that used to appear above the central editor window. These were the tabs that had the 'X' button in them that allowed you to quickly close them. I've been toggling options in the View and Tools menus but can't seem to get it back. Anyone know how to restore this?
(Spyder maintainer here) You can restore the tab bar in our editor by going to the menu Tools > Preferences > Editor > Display and selecting the option called Show tab bar.
1.2
true
2
6,657
2020-04-02 15:13:40.530
In a child widget, how do I get the instance of a parent widget in kivy
How do I get the instance of a parent widget from within the child widget in kivy? This is so that I can remove the child widget from within the child widget class from the parent widget.
use parent.<attribute> or root.ids.<id-of-the-widget-you-need>
0
false
1
6,658
2020-04-02 23:53:56.067
Python version in Visual Studio console
I have set the interpreter to 3.8.2 but when I type in the console python --version it gives me the python 2.7.2. Why is that and how to change the console version so I can run my files with Python 3? In windows console I have of course python 3 when I type the --version.
The console displayed by VSCode is basically an ordinary terminal. When you run the python file from VSCode using the green arrow at the top, it will call the python version displayed at the bottom of the VSCode window. You can also see which python it is pointing to by looking at the command VSCode executes in the terminal.
0.201295
false
2
6,659
2020-04-02 23:53:56.067
Python version in Visual Studio console
I have set the interpreter to 3.8.2 but when I type in the console python --version it gives me the python 2.7.2. Why is that and how to change the console version so I can run my files with Python 3? In windows console I have of course python 3 when I type the --version.
(Assuming you use Visual Studio Code with the Python extension.) The interpreter set in Visual Studio has nothing to do with the python version the terminal reports when you run python --version. python --version reports whichever Python the name 'python' is bound to in your environment variables (PATH). Try python3 --version in the Visual Studio console to see what version is bound to python3. If that is the right version, use python3 in the Visual Studio console from now on.
0
false
2
6,659
2020-04-03 15:17:33.063
Print an UTF8-encoded smiley
I am writing a ReactionRoles Discord bot in Python (discord.py). This bot saves the ReactionRoles smileys UTF-8-encoded. The type of the encoded value is bytes, but it's converted to str to save it. The string looks something like "b'\\xf0\\x9f\\x98\\x82'". I am using EMOJI_ENCODED = str(EMOJI.encode('utf8')) to encode it, but bytes(EMOJI_ENCODED).decode('utf8') isn't working. Do you know how to decode it, or how to save it in a better way?
The output of str() is a Unicode string. EMOJI is a Unicode string. str(EMOJI.encode('utf8')) just makes a mangled Unicode string. The purpose of encoding is to make a byte string that can be saved to a file/database/socket. Simply do b = EMOJI.encode() (default is UTF-8) to get a byte string and s = b.decode() to get the Unicode string back.
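A short round-trip demonstration, plus one way to recover a value that was already mangled with str(); the recovery step is my addition, not part of the answer above:

```python
import ast

emoji = "😂"

stored = emoji.encode("utf-8")        # b'\xf0\x9f\x98\x82'  -> save these bytes
back = stored.decode("utf-8")         # "😂" again

mangled = str(emoji.encode("utf-8"))  # "b'\\xf0\\x9f\\x98\\x82'" (what the bot saved)
recovered = ast.literal_eval(mangled).decode("utf-8")   # parse the bytes literal back
print(back, recovered)
```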
0
false
1
6,660
2020-04-06 15:12:45.273
How can I see the source code for a python library?
I currently find myself using the bs4/BeautifulSoup library a lot in python, and have recently been wondering how it works. I would love to see the source code for the library and don't know how. Does anyone know how to do this? Thanks.
If you are using an IDE, you can right-click on the imported name and go to its implementation/declaration. Otherwise you can find the source code in the <python_installation_path>\Lib\site-packages directory.
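Python can also tell you directly where an installed package lives; a quick sketch using bs4 (it must be installed):

```python
# ask Python itself where the installed package's source is
import inspect
import bs4

print(bs4.__file__)                              # path to the package's __init__.py
print(inspect.getsourcefile(bs4.BeautifulSoup))  # file defining the class
```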
0.135221
false
2
6,661
2020-04-06 15:12:45.273
How can I see the source code for a python library?
I currently find myself using the bs4/BeautifulSoup library a lot in python, and have recently been wondering how it works. I would love to see the source code for the library and don't know how. Does anyone know how to do this? Thanks.
Go to the location where Python is installed. Inside the Python folder you will find a folder called Lib; all the packages are there. Open the required python file and you will see the code. Example location: C:\Python38\Lib
0
false
2
6,661
2020-04-08 01:37:55.407
Identifying positive pixels after color deconvolution ignoring boundaries
I am analyzing histology tissue images stained with a specific protein marker which I would like to identify the positive pixels for that marker. My problem is that thresholding on the image gives too much false positives which I'd like to exclude. I am using color deconvolution (separate_stains from skimage.color) to get the AEC channel (corresponding to the red marker), separating it from the background (Hematoxylin blue color) and applying cv2 Otsu thresholding to identify the positive pixels using cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU), but it is also picking up the tissue boundaries (see white lines in the example picture, sometimes it even has random colors other than white) and sometimes even non positive cells (blue regions in the example picture). It's also missing some faint positive pixels which I'd like to capture. Overall: (1) how do I filter the false positive tissue boundaries and blue pixels? and (2) how do I adjust the Otsu thresholding to capture the faint red positives? Adding a revised example image - top left the original image after using HistoQC to identify tissue regions and apply the mask it identified on the tissue such that all of the non-tissue regions are black. I should tru to adjust its parameters to exclude the folded tissue regions which appear more dark (towards the bottom left of this image). Suggestions for other tools to identify tissue regions are welcome. top right hematoxylin after the deconvolution bottom left AEC after the deconvolution bottom right Otsu thresholding applied not the original RGB image trying to capture only the AEC positives pixels but showing also false positives and false negatives Thanks
I ended up incorporating some of the feedback given above by Chris into the following possible unconventional solution for which I would appreciate getting feedback (to the specific questions below but also general suggestions for improvement or more effective/accurate tools or strategy): Define (but not apply yet) tissue mask (HistoQC) after optimizing HistoQC script to remove as much of the tissue folds as possible without removing normal tissue area Apply deconvolution on the original RGB image using hax_from_rgb Using the second channel which should correspond to the red stain pixels, and subtract from it the third channel which as far as I see corresponds to the background non-red/blue pixels of the image. This step removes the high values in the second channel that which up because of tissue folds or other artifacts that weren't removed in the first step (what does the third channel correspond to? The Green element of RGB?) Blur the adjusted image and threshold based on the median of the image plus 20 (Semi-arbitrary but it works. Are there better alternatives? Otsu doesn't work here at all) Apply the tissue regions mask on the thresholded image yielding only positive red/red-ish pixels without the non-tissue areas Count the % of positive pixels relative to the tissue mask area I have been trying to apply, as suggested above, the tissue mask on the deconvolution red channel output and then use Otsu thresholding. But it failed since the black background generated by the applying the tissue regions mask makes the Otsu threshold detect the entire tissue as positive. So I have proceeded instead to apply the threshold on the adjusted red channel and then apply the tissue mask before counting positive pixels. I am interested in learning what am I doing wrong here. Other than that, the LoG transformation didn't seem to work well because it produced a lot of stretched bright segments rather than just circular blobs where cells are located. I'm not sure why this is happening.
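A rough sketch of steps 2-5 as described above; the input image and tissue mask are placeholders, and the 0-255 rescaling before the "median + 20" threshold is my assumption, not necessarily what the author did:

```python
# a rough sketch of steps 2-5; img is an RGB tile, tissue_mask a boolean HistoQC mask
import numpy as np
import cv2
from skimage.color import separate_stains, hax_from_rgb

img = cv2.cvtColor(cv2.imread("slide_tile.png"), cv2.COLOR_BGR2RGB)  # hypothetical tile
tissue_mask = np.ones(img.shape[:2], dtype=bool)                     # stand-in mask

stains = separate_stains(img, hax_from_rgb)     # channels: hematoxylin, AEC, residual
aec = stains[:, :, 1] - stains[:, :, 2]         # subtract the residual channel

aec = cv2.GaussianBlur(aec.astype(np.float32), (5, 5), 0)
aec8 = cv2.normalize(aec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

positives = (aec8 > np.median(aec8) + 20) & tissue_mask     # threshold, then apply mask
pct_positive = 100.0 * positives.sum() / tissue_mask.sum()
print(pct_positive)
```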
0.101688
false
1
6,662
2020-04-09 08:59:03.497
Python: how to write a wrapper to make all variables declared inside a function global?
I want a function to be run as if it was written in the main program, i.e. all the variables defined therein can be accessed from the main program. I don't know if there's a way to do that, but I thought a wrapper that gives this behaviour would be cool. It's just hacky and I don't know how to start writing it.
I have pieces of code written inside functions, and I really want to run them and have all the variables defined therein after run without having to write the lengthy return statements. How can I do that? That's what classes are for. Write a class with all your functions as methods, and use instance attributes to store the shared state. Problem solved, no global required.
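A tiny sketch of what that class-based alternative looks like in practice:

```python
# a minimal sketch: shared state lives on the instance instead of in globals
class Analysis:
    def load(self):
        self.data = [1, 2, 3]            # no return statement needed

    def process(self):
        self.total = sum(self.data)      # reads earlier state, stores new state

a = Analysis()
a.load()
a.process()
print(a.total)    # every intermediate value is still reachable via `a`
```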
1.2
true
1
6,663
2020-04-09 15:52:14.077
Python time series using FB Prophet with covid-19
I have prepared a time series model using FB Prophet for making forecasts. The model forecasts for the coming 30 days and my data ranges from Jan 2019 until Mar 2020 both months inclusive with all the dates filled in. The model has been built specifically for the UK market I have already taken care of the following: Seasonality Holidaying Effect My question is, that how do I take care of the current COVID-19 situation into the same model? The cases that I am trying to forecast are also dependent on the previous data at least from Jan 2020. So in order to forecast I need to take into account the current coronavirus situation as well that would impact my forecasts apart from seasonality and holidaying effect. How should I achieve this?
I have had the same issue with COVID at my work with sales forecasting. The easy solution for me was to make an additional regressor which indicates the COVID period, and use that in my model. Then my future is not affected by COVID, unless I tell it that it should be.
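A hedged sketch of that extra-regressor approach; the data frame, column name, and cut-off date are assumptions for illustration (the import is fbprophet in releases from that era, prophet in newer ones):

```python
import pandas as pd
from fbprophet import Prophet   # "from prophet import Prophet" in newer releases

# stand-in history with Prophet's usual columns ds (date) and y (value)
df = pd.DataFrame({"ds": pd.date_range("2019-01-01", "2020-03-31", freq="D")})
df["y"] = range(len(df))

df["covid"] = (df["ds"] >= "2020-03-01").astype(int)   # 1 during the COVID period

m = Prophet()
m.add_regressor("covid")
m.fit(df)

future = m.make_future_dataframe(periods=30)
# keep the historical flag, assume no COVID effect over the forecast horizon
future = future.merge(df[["ds", "covid"]], on="ds", how="left").fillna({"covid": 0})
forecast = m.predict(future)
```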
0
false
1
6,664
2020-04-10 21:53:54.647
How to get auto completion on jupyter notebook?
I am new to Python language programming. I found that we can have auto completion on Jupyter notebook. I found this suggestion: "The auto-completion with Jupyter Notebook is so weak, even with hinterland extension. Thanks for the idea of deep-learning-based code auto-completion. I developed a Jupyter Notebook Extension based on TabNine which provides code auto-completion based on Deep Learning. Here's the Github link of my work: jupyter-tabnine. It's available on pypi index now. Simply issue following commands, then enjoy it:) pip3 install jupyter-tabnine, jupyter nbextension install --py jupyter_tabnine, jupyter nbextension enable --py jupyter_tabnine, jupyter serverextension enable --py jupyter_tabnine" I did 4 steps installation and it looked installed well. However, when I tried using Jupyter notebook its auto completion didn't work. Basically my question is please help how to get auto completion on Jupiter notebook? Thank you very much.
Press tab twice while you are writing your code and the autocomplete tab will show for you. Just select one and press enter
0.545705
false
2
6,665
2020-04-10 21:53:54.647
How to get auto completion on jupyter notebook?
I am new to Python language programming. I found that we can have auto completion on Jupyter notebook. I found this suggestion: "The auto-completion with Jupyter Notebook is so weak, even with hinterland extension. Thanks for the idea of deep-learning-based code auto-completion. I developed a Jupyter Notebook Extension based on TabNine which provides code auto-completion based on Deep Learning. Here's the Github link of my work: jupyter-tabnine. It's available on pypi index now. Simply issue following commands, then enjoy it:) pip3 install jupyter-tabnine, jupyter nbextension install --py jupyter_tabnine, jupyter nbextension enable --py jupyter_tabnine, jupyter serverextension enable --py jupyter_tabnine" I did 4 steps installation and it looked installed well. However, when I tried using Jupyter notebook its auto completion didn't work. Basically my question is please help how to get auto completion on Jupiter notebook? Thank you very much.
After installing Nbextensions, go to Nbextensions in jupyter notebook, tick on Hinterland. Then reopen your jupyter notebook.
1.2
true
2
6,665
2020-04-10 22:10:08.740
Get pixel boundary coordinates from binary image in Python (not edges)
I have a binary image containing a single contiguous blob, with no holes. I would like create a polygon object based on the exterior edges of the edge pixels. I know how to get the edge pixels themselves, but I want the actual coordinates of the pixel boundaries, sorted clockwise or counter-clockwise. All of the pixels have integer coordinates. For example, say I have a single pixel at (2,2). The vertices of the polygon would be: (2.5, 2.5) (2.5, 1.5) (1.5, 1.5) (1.5, 2.5) (2.5, 2.5) Is there an exact, non-approximate way to do this? Preferably in Python?
Based on the comments, here is the approach that I implemented: multiply all pixel coordinates by 10, so that we'll only deal with integers. For each pixel, generate the 4 corners by adding +/- 5. For example, for (20,20), the corners are (25, 25) (25, 15) (15, 15) (15, 25) (25, 25). And store all the corners in a list. Count the occurrences of each corner. If the count is odd, it is a corner to the blob. Making the coordinates integers makes this step easy. Counting floats has issues. Divide the blob corner coordinates by 10, getting back the original resolution. Sort the corners clockwise using a standard algorithm.
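A small sketch of that corner-counting idea in plain Python (the final clockwise sorting step is left out, and the single-pixel input is just for illustration):

    from collections import Counter

    def blob_corners(pixels):
        counts = Counter()
        for x, y in pixels:
            cx, cy = x * 10, y * 10              # scale up so every corner is an integer
            for dx in (-5, 5):
                for dy in (-5, 5):
                    counts[(cx + dx, cy + dy)] += 1
        # a corner touched by an odd number of pixels lies on the blob boundary
        return [(x / 10, y / 10) for (x, y), n in counts.items() if n % 2 == 1]

    print(blob_corners([(2, 2)]))   # the four corners around pixel (2, 2)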
1.2
true
1
6,666
2020-04-11 11:45:48.813
Packaging Kivy application to Android - Windows
I finished writing the code for a simple game using Kivy. I am having a problem converting it to Android APK, since I am using a windows computer. From some earlier research I got to know that using a Virtual machine is recommended, but I have no idea on how to download and use one :(, and if my slow PC can handle it... please help me. If possible, kindly recommend another way to convert to APK. I am a beginner at coding as a whole, please excuse me if my question is stupid.
You could try downloading VirtualBox and installing a Linux operating system in it, or you could install Linux directly on a separate drive (for example F: or E:), then install Python and all the required packages there and start the build with Buildozer, since Buildozer is not available for Windows. There are also a lot of people on YouTube with tutorials who can help you through that setup.
0
false
1
6,667
2020-04-11 14:36:32.187
How to stop vscode python terminal without deleting the log?
when clicking "Run Python file in terminal" how do I stop the script? the only way that I found is by clicking the trashcan which deletes the log in the terminal.
When a Python file is running in the terminal you can hit Ctrl-C to try and interrupt it.
1.2
true
1
6,668
2020-04-11 21:09:01.457
python equivalent to matlab mvnrnd?
I was just wondering how to go from mvnrnd([4 3], [.4 1.2], 300); in MATLAB code to np.random.multivariate_normal([4,3], [[x_1 x_2],[x_3 x_4]], 300) in Python. My doubt namely lays on the sigma parameter, since, in MATLAB, a 2D vector is used to specify the covariance; whereas, in Python, a matrix must be used. What is the theoretical meaning on that and what is the practical approach to go from one to another, for instance, in this case? Also, is there a rapid, mechanical way? Thanks for reading.
Although python expects a matrix, it is essentially a symmetric covariance matrix. So it has to be a square matrix. In 2x2 case, a symmetric matrix will have mirrored non diagonal elements. I believe in python, it should look like [[.4 1.2],[1.2 .4]]
0
false
1
6,669
2020-04-12 14:40:25.627
how to get value of decision variable after maximum iteration limit in gekko
I have written my code in python3 and solved it using Gekko solver. After 10000 iterations, I am getting the error maximum iteration reached and solution not found. So can I get the value of decision variables after the 10000th iteration? I mean even when the maximum iteration is reached the solver must have a value of decision variable in the last iteration. so I want to access that values. how can I do that?
Question 1) I am solving an MINLP problem with the APOPT solver, and my decision variables are defined as integers. I have retrieved the result of the 10,000th iteration as you suggested, but the decision variable values are non-integer. Why is the APOPT solver returning a non-integer solution? Answer: There is an option for what is classified as an integer. The default tolerance accepts any number within 0.05 of an integer value; you can change this with m.solver_options = ['minlp_integer_tol 1']. Question 2) I am running the code with m.options.MAX_ITER=100 and using m = GEKKO(), i.e. the remote server, but my code still runs for 10,000 iterations. Answer: You can set it alternatively with m.solver_options = ['minlp_maximum_iterations 100']. Thanks a lot to Prof. John Hedengren for the prompt replies.
0
false
1
6,670
2020-04-12 17:33:33.530
How to find the index of each leaf or node in a Decision Tree?
The main question is to find which leaf node each sample is classified. There are thousands of posts on using tree.apply. I am well aware of this function, which returns the index of the leaf node. Now, I would like to add the leaf index in the nodes of the graph (which I generate by using Graphviz). Drawing the enumeration technique used for the indexes won't work. The decision tree that I am developing is quite big. Therefore, I need to be able to print the leaf index in the graph. Another option that I am open to is to generate an array with all the leaf indexes (in the same order) of the leaf nodes of the decision tree. Any hint on how to do this?
There is a parameter node_ids of the command export_graphviz. When this parameter is set to True, then the indexes are added on the label of the decision tree.
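For example, a sketch with a toy dataset (scikit-learn's export_graphviz and apply are the relevant calls):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_graphviz

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

    # node_ids=True writes each node's index into its label in the rendered graph
    dot_source = export_graphviz(clf, out_file=None, node_ids=True, filled=True)

    # these leaf indexes now match the "node #" labels shown by Graphviz
    leaf_of_each_sample = clf.apply(X)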
1.2
true
1
6,671
2020-04-12 18:23:13.403
Is it possible to reuse a widget in Tkinter? If so, how can I do it using classes?
I'm using classes and such to make a calculator in Tkinter, however I want to be able to be able to reuse widgets for multiple windows. How can I do this if this is possible?
A widget may only exist in one window at a time, and cannot be moved between windows (the root window and instances of Toplevel).
0.201295
false
2
6,672
2020-04-12 18:23:13.403
Is it possible to reuse a widget in Tkinter? If so, how can I do it using classes?
I'm using classes and such to make a calculator in Tkinter, however I want to be able to be able to reuse widgets for multiple windows. How can I do this if this is possible?
As you commented: I'm making a calculator, as mentioned and I want to have a drop down menu on the window, that when selected it closes the root window and opens another, and I want to have the drop down menu on all the different pages, 5 or 6 in all In this case, just write a function that creates the menu. Then call that function when creating each of the windows.
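A small sketch of that pattern (the menu content is just an example):

    import tkinter as tk

    def add_menu(window):
        # the same drop-down menu can be rebuilt on any window passed in
        menubar = tk.Menu(window)
        pages = tk.Menu(menubar, tearoff=0)
        pages.add_command(label="Close this window", command=window.destroy)
        menubar.add_cascade(label="Pages", menu=pages)
        window.config(menu=menubar)

    root = tk.Tk()
    add_menu(root)                 # menu on the main window

    second = tk.Toplevel(root)
    add_menu(second)               # the same menu, recreated on another window

    root.mainloop()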
1.2
true
2
6,672
2020-04-12 23:48:11.210
Distributed computing for multiplying numbers
Can you show me how I can multiply two integers which are M bits long using at most O(N^1.63) processors in O(N) parallel time in python. I think that karatsuba algorithm would work but I don't understand how can I implement it parallely.
Yes, it is the parallel Karatsuba algorithm.
-0.386912
false
1
6,673
2020-04-13 08:07:07.280
how can i find IDLE in my mac though i installed my python3 with pip?
I entered '''idle''' on terminal and it only shows me python2 that has already been here. How can i see python3 idle on my mac while i installed python3 with pip?
You can specify the version by running idle3 instead of idle.
0
false
1
6,674
2020-04-13 10:32:47.843
how i ask for input in my telegram chat bot python telebot
I am trying to get input from the user and send this input to all bot subscribers. so I need to save his input in variable and use it after this in send_message method but I don't know how to make my bot wait for user input and what method I should use to receive user input thanks :]
If you want to get user input, the logic is a bit different. I suppose you are using long polling. When the bot asks the user for input, you can just save a boolean/string in a global variable; let's suppose the variable is user_input. You receive the update and ask the user for input, then you set user_input[user_id]['input'] = True. When you receive another update, you just check that variable with an if (if user_input[user_id]['input']: # do something). If your problem is 403 Forbidden for "user has blocked the bot", you can't do anything about it.
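A rough sketch of that approach with pyTelegramBotAPI (the token and the /broadcast command name are placeholders):

    import telebot

    bot = telebot.TeleBot("YOUR_TOKEN")
    waiting = {}                                   # user id -> True while we expect text

    @bot.message_handler(commands=["broadcast"])
    def ask(message):
        waiting[message.from_user.id] = True
        bot.send_message(message.chat.id, "Send me the text to broadcast:")

    @bot.message_handler(func=lambda m: waiting.get(m.from_user.id, False))
    def receive(message):
        waiting[message.from_user.id] = False
        user_text = message.text                   # this is the user's input, ready to forward
        bot.send_message(message.chat.id, "Got it: " + user_text)

    bot.polling()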
0
false
1
6,675
2020-04-13 10:39:34.980
How to monitor key presses in Python 3.7 using IDLE on Mac OSX
Using Python 3.7 with IDLE, on Mac. I want to be able to monitor key presses and immediately return either the character or its ASCII or Unicode value. I can't see how to use pynput with Idle. Any ideas please?
You can't. IDLE uses the tkinter interface to the tcl/tk GUI framework. The IDLE doc has a section currently titled 'Running user code' with this paragraph: When Shell has the focus, it controls the keyboard and screen. This is normally transparent, but functions that directly access the keyboard and screen will not work. These include system-specific functions that determine whether a key has been pressed and if so, which.
1.2
true
1
6,676
2020-04-13 14:37:13.480
OpenCV built from source: Pycharm doesn't get autocomplete information
I'm trying to install OpenCV into my python environment (Windows), and I'm almost all of the way there, but still having some issues with autocomplete and Pycharm itself importing the library. I've been through countless other related threads, but it seems like most of them are either outdated, for prebuilt versions, or unanswered. I'm using Anaconda and have several environments, and unfortunately installing it through pip install opencv-contrib-python doesn't include everything I need. So, I've built it from source, and the library itself seem to be working fine. The build process installed some things into ./Anaconda3/envs/cv/Lib/site-packages/cv2/: __init__.py, some config py files, and .../cv2/python-3.8/cv2.cp38-win_amd64.pyd. I'm not sure if it did anything else. But here's where I'm at: In a separate environment, a pip install opencv-contrib-python both runs and has autocomplete working In this environment, OpenCV actually runs just fine, but the autocomplete doesn't work and Pycharm complains about everything, eg: Cannot find reference 'imread' in '__init__.py' Invalidate Caches / Restart doesn't help Removing and re-adding the environment doesn't help Deleting the user preferences folder for Pycharm doesn't help Rebuilding/Installing OpenCV doesn't help File->Settings->Project->Project Interpreter is set correctly Run->Edit Configuration->Python Interpreter is set correctly So my question is: how does Pycharm get or generate that autocomplete information? It looks like the pyd file is just a dll in disguise, and looking through the other environment's site-packages/cv2 folder, I don't see anything interesting. I've read that __init__.py has something to do with it, but again the pip version doesn't contain anything (except there's a from .cv2 import *, but I'm not sure how that factors in). The .whl file you can download is a zip that only contains the same as what 'pip install' gets. Where does the autocomplete information get stored? Maybe there's some way to copy it from one environment to another? It would get me almost all the way there, which at this point would be good enough I think. Maybe I need to rebuild it with another flag I missed?
Got it finally! Figures that would happen just after posting the question... Turns out .../envs/cv/site-packages/cv2/python-3.8/cv2.cp38-win_amd64.pyd needed to be copied to .../envs/cv/DLLs/. Then PyCharm did it's magic and is now all good.
0.673066
false
2
6,677
2020-04-13 14:37:13.480
OpenCV built from source: Pycharm doesn't get autocomplete information
I'm trying to install OpenCV into my python environment (Windows), and I'm almost all of the way there, but still having some issues with autocomplete and Pycharm itself importing the library. I've been through countless other related threads, but it seems like most of them are either outdated, for prebuilt versions, or unanswered. I'm using Anaconda and have several environments, and unfortunately installing it through pip install opencv-contrib-python doesn't include everything I need. So, I've built it from source, and the library itself seem to be working fine. The build process installed some things into ./Anaconda3/envs/cv/Lib/site-packages/cv2/: __init__.py, some config py files, and .../cv2/python-3.8/cv2.cp38-win_amd64.pyd. I'm not sure if it did anything else. But here's where I'm at: In a separate environment, a pip install opencv-contrib-python both runs and has autocomplete working In this environment, OpenCV actually runs just fine, but the autocomplete doesn't work and Pycharm complains about everything, eg: Cannot find reference 'imread' in '__init__.py' Invalidate Caches / Restart doesn't help Removing and re-adding the environment doesn't help Deleting the user preferences folder for Pycharm doesn't help Rebuilding/Installing OpenCV doesn't help File->Settings->Project->Project Interpreter is set correctly Run->Edit Configuration->Python Interpreter is set correctly So my question is: how does Pycharm get or generate that autocomplete information? It looks like the pyd file is just a dll in disguise, and looking through the other environment's site-packages/cv2 folder, I don't see anything interesting. I've read that __init__.py has something to do with it, but again the pip version doesn't contain anything (except there's a from .cv2 import *, but I'm not sure how that factors in). The .whl file you can download is a zip that only contains the same as what 'pip install' gets. Where does the autocomplete information get stored? Maybe there's some way to copy it from one environment to another? It would get me almost all the way there, which at this point would be good enough I think. Maybe I need to rebuild it with another flag I missed?
Alternatively add the directory containing the .pyd file to the interpreter paths. I had exactly this problem with OpenCV 4.2.0 compiled from sources, installed in my Conda environment and PyCharm 2020.1. I solved this way: Select project interpreter Click on the settings button next to it and then clicking on the Show paths for selected interpreter adding the directory containing the cv2 library (in my case in the Conda Python library path - e.g. miniconda3/lib/python3.7/site-packages/cv2/python-3.7). In general check the site-packages/cv2/python-X.X directory)
0.673066
false
2
6,677
2020-04-14 08:45:43.027
How to classify English words according to topics with python?
How to classify English words according to topics with python? Such as THE COUNTRY AND GOVERNMENT: regime, politically, politician, official, democracy......besides, there are other topics: education/family/economy/subjects and so on. I want to sort out The Economist magazine vocabularies and classify these according to frequency and topic. At present, I have completed the words frequency statistics, the next step is how to classify these words automatically with python?
This is not a simple task. If I were you, I would consider two ways to do what you ask. 1) Make your own rules. Once you have finished counting the words, you need to match those words to topics, and there is no free lunch: you write the rules for each category yourself. E.g. Entertainment texts contain many words like "TV" and "drama", so if a text has them we can guess it belongs to Entertainment. 2) Machine learning. If you can't afford to write rules, let the machine do it. But even in this case you have to label the articles with your desired classes (topics). Unsupervised pre-training (e.g. clustering) can also help, but in the end you need a supervised data set with topics. You should decide on a taxonomy of topics first. Welcome to the ML world; I hope this gives you the right starting point.
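A toy version of the rule-based option (the topics and keyword lists are made up for illustration):

    TOPIC_KEYWORDS = {
        "politics":  {"regime", "politician", "democracy", "government"},
        "economy":   {"market", "inflation", "trade", "tax"},
        "education": {"school", "student", "university", "teacher"},
    }

    def classify(word_counts):
        # word_counts: dict of word -> frequency, as already produced by the asker
        scores = {topic: 0 for topic in TOPIC_KEYWORDS}
        for word, count in word_counts.items():
            for topic, keywords in TOPIC_KEYWORDS.items():
                if word in keywords:
                    scores[topic] += count
        return max(scores, key=scores.get)

    print(classify({"regime": 3, "market": 1}))    # -> "politics"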
0
false
1
6,678
2020-04-15 06:06:44.530
Can I use JWTAuthentication twice for a login authentication?
In my login First place I wanted to send OTP and second place I wanted to verify the OTP and then return the token. I am using rest_framework_simplejwt JWTAuthentication. First place I am verifying the user and sending the OTP, not returning the token and second place I am verifying the OTP and returning the token. Let me know If this is the correct way to use? If not how can I implement this using JWTAuthentication. OR If this is not correct way to use, can I implement like first place use Basic authentication to verify the user and second place jwt authentication to verify the OTP and send the tokens. Let me know your solution.
What I understood? You need to send an OTP to the current user who is hitting your send_otp route after checking if the user exists or not in your system and then verify_otp route which will verify the OTP that the user has sent in the API alongwith it's corresponding mobile_number/email_id. How to do it? send_otp - Keep this route open, you don't need an authentication for this, not even Basic Auth (that's how it works in industry), just get the mobile_number from the user in the request, check whether it exists in the DB, and send the OTP to this number, and set the OTP to the corresponding user in your cache maybe for rechecking (redis/memcache). Use throttling for this route so that nobody will be able to exploit this API of yours. verify_otp - This route will also be open (no authentication_class/permission_classes), get the mobile_number/email id + OTP from the user, verify it in cache, if verified, generate the token using TokenObtainPairSerializer and send the refresh + access token in the response, if the OTP is incorrect, send 401.
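A minimal sketch of the verify_otp step with djangorestframework-simplejwt (check_otp_and_get_user is a hypothetical helper standing in for your own cache lookup):

    from rest_framework.views import APIView
    from rest_framework.response import Response
    from rest_framework_simplejwt.tokens import RefreshToken

    class VerifyOTP(APIView):
        authentication_classes = []          # route stays open
        permission_classes = []

        def post(self, request):
            user = check_otp_and_get_user(request.data)   # hypothetical: your cache lookup
            if user is None:
                return Response({"detail": "invalid OTP"}, status=401)
            refresh = RefreshToken.for_user(user)
            return Response({"refresh": str(refresh), "access": str(refresh.access_token)})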
0
false
1
6,679
2020-04-15 11:29:23.330
send request in selenium without clicking on the send button in python
I have python script that use selenium to login website, you should insert the username and password and captcha for submit button to login.after this login webpage have send button for send form information with post request, how can i bypass this clicking in button and send the post request without clicking on the button ?
If you mean that you want to try and bypass the captcha and go straight to the send button, I doubt that's possible. If you need to solve recaptchas, check out 2captcha.com and use their API to solve it - which will unlock the send button, theoretically.
0
false
1
6,680
2020-04-15 23:29:13.073
What is the use of Celery in python?
I am confused in celery.Example i want to load a data file and it takes 10 seconds to load without celery.With celery how will the user be benefited? Will it take same time to load data?
Celery, and similar systems like Huey are made to help us distribute (offload) the amount of processes that normally can't execute concurrently on a single machine, or it would lead to significant performance degradation if you do so. The key word here is DISTRIBUTED. You mentioned downloading of a file. If it is a single file you need to download, and that is all, then you do not need Celery. How about more complex scenario - you need to download 100000 files? How about even more complex - these 100000 files need to be parsed and the parsing process is CPU intensive? Moreover, Celery will help you with retrying of failed tasks, logging, monitoring, etc.
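A minimal sketch of offloading the slow load to a worker (the broker URL and file path are placeholders):

    # tasks.py
    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def load_data_file(path):
        with open(path) as f:
            return len(f.read())

    # in your web/view code you only enqueue the work and return immediately;
    # a separate worker process ("celery -A tasks worker") does the 10 seconds of loading
    result = load_data_file.delay("/tmp/big_file.csv")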
1.2
true
2
6,681
2020-04-15 23:29:13.073
What is the use of Celery in python?
I am confused in celery.Example i want to load a data file and it takes 10 seconds to load without celery.With celery how will the user be benefited? Will it take same time to load data?
Normally, the user has to wait to load the data file to be done on the server. But with the help of celery, the operation will be performed on the server and the user will not be involved. Even if the app crashes, that task will be queued. Celery will keep track of the work you send to it in a database back-end such as Redis or RabbitMQ. This keeps the state out of your app server's process which means even if your app server crashes your job queue will still remain. Celery also allows you to track tasks that fail.
0
false
2
6,681
2020-04-16 12:20:31.583
Set Custom Discord Status when running/starting a Program
I am working on a application, where it would be cool to change the Status of your Discord User you are currently logged in to. For example when i start the appplication then the Status should change to something like "Playing Program" and when you click on the User's Status then it should display the Image of the Program. Now i wanted to ask if this is somehow possible to make and in which programming Languages is it makeable? EDIT: Solved the Problem with pypresence
In your startup, where DiscordSocketClient is available, you can use SetGameAsync(). This is for C# using Discord.NET. To answer your question, I think any wrapper for Discord's API allows you to set the current playing game.
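If the application itself is in Python, the pypresence route the asker mentions looks roughly like this (the client id and texts are placeholders):

    import time
    from pypresence import Presence

    rpc = Presence("YOUR_APPLICATION_CLIENT_ID")   # from the Discord developer portal
    rpc.connect()
    rpc.update(details="Playing Program", state="In the menu")
    time.sleep(60)                                 # the status stays up while the script runs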
0
false
1
6,682
2020-04-16 23:05:48.100
Save periodically gathered data with python
I periodically receive data (every 15 minutes) and have them in an array (numpy array to be precise) in python, that is roughly 50 columns, the number of rows varies, usually is somewhere around 100-200. Before, I only analyzed this data and tossed it, but now I'd like to start saving it, so that I can create statistics later. I have considered saving it in a csv file, but it did not seem right to me to save high amounts of such big 2D arrays to a csv file. I've looked at serialization options, particularly pickle and numpy's .tobytes(), but in both cases I run into an issue - I have to track the amount of arrays stored. I've seen people write the number as the first thing in the file, but I don't know how I would be able to keep incrementing the number while having the file still opened (the program that gathers the data runs practically non-stop). Constantly opening the file, reading the number, rewriting it, seeking to the end to write new data and closing the file again doesn't seem very efficient. I feel like I'm missing some vital information and have not been able to find it. I'd love it if someone could show me something I can not see and help me solve the problem.
Saving to a CSV file might not be a good idea in this case; think about the accessibility and availability of your data. Using a database will be better: you can easily update your data and control the amount of data you store.
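One possible sketch with the standard-library sqlite3 module (the table layout and file name are assumptions):

    import sqlite3
    import numpy as np

    data = np.random.rand(120, 50)        # stand-in for one 15-minute batch

    conn = sqlite3.connect("measurements.db")
    conn.execute("CREATE TABLE IF NOT EXISTS batches (batch_id INTEGER, row_csv TEXT)")
    next_id = conn.execute("SELECT COALESCE(MAX(batch_id), 0) + 1 FROM batches").fetchone()[0]
    conn.executemany(
        "INSERT INTO batches VALUES (?, ?)",
        [(next_id, ",".join(map(str, row))) for row in data],
    )
    conn.commit()
    conn.close()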
0.386912
false
1
6,683
2020-04-17 01:04:58.020
Tkinter - how to prevent user window resize from disabling autoresize?
I have a question related to an annoying behavior I have observed recently in tkinter. When there's no fixed window size defined, the main window is expanded when adding new frames, which is great. However, if prior to adding a new widget the user only so much as touches the resizing handles, resizing the main window manually, then the window does not expand to fit the new widget. Why is that so and is there a way to prevent this behavior? Thanks in advance!
The why is because tkinter was designed to let the user ultimately control the size of the window. If the user sets the window size, tkinter assumes it was for a reason and thus honors the requested size. To get the resize behavior back, pass an empty string to the geometry method of the window.
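In code that is a one-liner, assuming root is the main window:

    import tkinter as tk

    root = tk.Tk()
    root.geometry("")   # clear any user-forced size; the window resizes to fit its widgets again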
0.386912
false
1
6,684
2020-04-17 06:16:21.087
how to get s3 object key by object url when I use aws lambda python?or How to get object by url?
I use python boto3 when I upload file to s3,aws lambda will move the file to other bucket,I can get object url by lambda event,like https://xxx.s3.amazonaws.com/xxx/xxx/xxxx/xxxx/diamond+white.side.jpg The object key is xxx/xxx/xxxx/xxxx/diamond+white.side.jpg This is a simple example,I can replace "+" get object key, there are other complicated situations,I need to get object key by object url,How can I do it? thanks!!
You should use urllib.parse.unquote and then replace + with space. From my knowledge, + is the only exception from URL parsing, so you should be safe if you do that by hand.
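A small sketch of that decode step (the key string is the example from the question):

    from urllib.parse import unquote

    raw_key = "xxx/xxx/xxxx/xxxx/diamond+white.side.jpg"   # taken from the event URL
    key = unquote(raw_key).replace("+", " ")               # percent-decode, then '+' -> space
    print(key)   # xxx/xxx/xxxx/xxxx/diamond white.side.jpg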
0.201295
false
1
6,685
2020-04-17 17:10:44.803
Django REST Cache Invalidation
I have a Django project and API view implemented with the Rest framework. I'm caching it using the @cache_page decorator but I need to implement a cache invalidation and I'm not seeing how to do that - do I need a custom decorator? The problem: The view checks the access of the API KEY and it caches it from the previous access check but, if the user changes the API KEY before the cache expires, the view will return an OK status of the key that no longer exists.
Yes, you'll need a cache decorator that takes the authentication/user context into account. cache_page() only works for GET requests, and keys based on the URL alone. Better yet, though, Don't use a cache until you're sure you need one If you do need it (think about why; cache invalidation is one of the two hard things), use a more granular cache within your view, not cache_page().
1.2
true
1
6,686
2020-04-17 17:23:20.373
Python3: can't open file 'sherlock.py' [Errno 2] No such file or directory
So I am new to Kali Linux and I have installed the infamous Sherlock, nonetheless when I used the command to search for usernames it didn't work (Python3: can't open file 'sherlock.py' [Errno 2] No such file or directory). Naturally I tried to look up at similiar problems and have found that maybe the problem is located on my python path. Which is currently located in /usr/bin/python/ and my pip is in /usr/local/bin/pip. Is my python and pip installed correctly in the path? If not, how do I set a correct path? However if it is right and has no correlation with the issue, then what is the problem?
You have to change directory into the sherlock folder twice; the repository contains another sherlock directory with sherlock.py inside (it works for me).
0.201295
false
1
6,687
2020-04-17 22:40:16.480
How to create individual node sets using abaqus python scripting?
I am new to Python scripting in Abaqus. I am aware how to use the GUI but not really familiar with the scripting interface. However, I would like to know one specific thing. I would like to know how to assign a set to each individual node on a geometry's edges. I have thought about referencing the node numbers assigned to the geometry edges but don't know how I will do it. The reason for creating a set for each node is that I would like to apply Periodic Boundary Conditions (PBC). Currently my model is a 2D Repeating Unit Cell (RUC) and I would like to apply a constraint equation between the opposite nodes on the opposite edges. To do that, I need to create a set for each node and then apply an equation on the corresponding set of nodes. Just to add that the reason why I would like to use the Python scripting interface is because through the GUI, I can only make sets of nodes and create constraint equations for a simple mesh. But for a refined mesh, there will be a lot more constraint equations and a whole lot more sets. Any suggestion of any kind would be really helpful.
One way would be with the help of the getByBoundingBox(...) method, available for selecting nodes inside a particular bounding box: allNodes = mdb.models[name].parts[name].nodes; allNodes.getByBoundingBox(xMin, yMin, zMin, xMax, yMax, zMax); mdb.models[name].parts[name].Set(name=<name_i>, region=<regionObject_corresponding_to_node_i>). One can always look for pointers in the replay file *.rpy of the current session, which is mostly machine-generated Python code of the manual steps done in CAE. Abaqus > Scripting Reference > Python commands > Mesh commands > MeshNodeArray object and Abaqus > Scripting Reference > Python commands > Region commands > Set object contain the relevant information.
1.2
true
1
6,688
2020-04-19 16:28:41.883
Python Seperate thread for list which automatically removes after time limit
I want to have a list which my main process will add data to; a separate thread should see the data added, wait a set amount of time (e.g. 1 minute), then remove it from the list. I'm not very experienced with multi-threading in Python so I don't know how to do this.
The way you could achieve this is by using a global variable as your list, as your thread will be able to access data from it. You can use a deque from the collections library, and each time you add something in the queue, you spawn a new thread that will just pop from the front after waiting that set amount of time. Although, you have to be careful with the race conditions. It may happen that you try to write something at one end in your main thread and at the same time erase something from the beginning in one of your new threads, and this will cause unexpected behavior. Best way to avoid this is by using a lock.
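A rough sketch of that deque-plus-background-thread idea (the 60-second delay and the data are placeholders):

    import threading
    import time
    from collections import deque

    items = deque()                  # each entry is (data, time it was added)
    lock = threading.Lock()

    def expire(delay=60):
        while True:
            with lock:
                if items and time.time() - items[0][1] >= delay:
                    items.popleft()  # oldest entry has lived long enough
            time.sleep(1)

    threading.Thread(target=expire, daemon=True).start()

    with lock:                       # the main thread adds data under the same lock
        items.append(("some data", time.time()))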
0
false
1
6,689
2020-04-20 05:00:44.977
How do you pass session object in TensorFlow v2?
I have a function change_weight() that modifies weights in any given model. This function resides in a different python file. So if I have a simple neural network that classifies MNIST images, I test the accuracy before and after calling this function and I see that it works. This was easy to do in TensorFlow v1, as I just had to pass the Session sess object in the function call, and I could get the weights of this session in the other file. With eager execution in TensorFlow v2, how do I do this? I don't have a Session object anymore. What do I pass?
I was able to do this by passing the Model object instead and getting the weights by model.trainable_variables in the other function.
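A minimal sketch of that TF2 pattern (the layer sizes and the scaling factor are arbitrary):

    import tensorflow as tf

    def change_weight(model):
        # the Model object carries its own weights; no Session is needed in TF2
        for var in model.trainable_variables:
            var.assign(var * 0.9)          # e.g. shrink every weight by 10%

    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])
    change_weight(model)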
1.2
true
1
6,690
2020-04-20 14:29:58.670
PyQt5 Designer is not working: This application failed to start because no Qt platform plugin could be initialized
i have a problem with PyQt5 Designer. I install PyQt with -pip install PyQt5 and then -pip install PyQt5-tools everything OK. But when i try to run Designer it open messagebox with error: This application failed to start because no Qt platform plugin could be initialized! how to deal with it?
Go to Python38>lib>site-packages>PyQt5>Qt>plugins and copy the platforms folder. Then go to Python38>lib>site-packages>PyQt5_tools>Qt>bin and paste the folder there, choosing copy and replace. This should work; after that you can use the Designer tool and have some fun with Python.
1
false
3
6,691
2020-04-20 14:29:58.670
PyQt5 Designer is not working: This application failed to start because no Qt platform plugin could be initialized
i have a problem with PyQt5 Designer. I install PyQt with -pip install PyQt5 and then -pip install PyQt5-tools everything OK. But when i try to run Designer it open messagebox with error: This application failed to start because no Qt platform plugin could be initialized! how to deal with it?
I found a way of solving this: go to your Python installation folder Python38\Lib\site-packages\PyQt5\Qt\bin, copy all of the files there to your clipboard, and paste them into Python38\Lib\site-packages\pyqt5_tools\Qt\bin. Then open designer.exe and it should work.
0.496174
false
3
6,691
2020-04-20 14:29:58.670
PyQt5 Designer is not working: This application failed to start because no Qt platform plugin could be initialized
i have a problem with PyQt5 Designer. I install PyQt with -pip install PyQt5 and then -pip install PyQt5-tools everything OK. But when i try to run Designer it open messagebox with error: This application failed to start because no Qt platform plugin could be initialized! how to deal with it?
Try running it with the command pyqt5designer. It should set all the paths for the libraries. Works on Python 3.8, pyqt5-tools 5.15.
0.997458
false
3
6,691
2020-04-21 06:23:06.150
What happens in background when we pass the command python manage.py createsuperuser in Django?
I'm working on Django and I know that to create an account to log in to the admin page I have to create a superuser, and for that we have to pass the command python manage.py createsuperuser. But my question is: when we pass this command, what happens first in the background, and what happens after that? Which methods and classes are called to create a superuser? I know it's a weird question but I wanted to know how this mechanism works. Thanks in advance!!
Other people will answer this in detail, but let me tell you in short what happens. When you run python manage.py createsuperuser you are prompted to fill in the fields mentioned in USERNAME_FIELD and REQUIRED_FIELDS; once you have filled those fields, Django calls your user model's manager and its create_superuser function, whose code executes and returns a superuser. I hope this helps you.
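Roughly what a custom manager's create_superuser looks like (a sketch only; the email field is illustrative and your USERNAME_FIELD may differ):

    from django.contrib.auth.base_user import BaseUserManager

    class UserManager(BaseUserManager):
        def create_user(self, email, password=None, **extra):
            user = self.model(email=self.normalize_email(email), **extra)
            user.set_password(password)
            user.save(using=self._db)
            return user

        def create_superuser(self, email, password=None, **extra):
            # the management command ends up here after prompting for the fields
            extra.setdefault("is_staff", True)
            extra.setdefault("is_superuser", True)
            return self.create_user(email, password, **extra)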
1.2
true
1
6,692
2020-04-21 12:05:53.207
Stripe too many high risk payments
I'm using the stripe subscription API to provide multi tier accounts for my users. but about 50% of the transactions that i get in stripe are declined and flagged as fraudulent. how can i diagnose this issue knowing that i'm using the default base code provided in the stripe documentation (front end) and using the stripe python module (backend). I know that i haven't provided much information, but that is only because there isn't much to provide. the code is known to anyone who has used stripe before, and there isn't any issue with it as there are transaction that work normally. Thank you !
After contacting Stripe support, I found that many payments were made by people whose IP address belongs to one location while the card is registered to a different location; for example, someone using a French debit card from England. I did ask Stripe to look into this issue.
1.2
true
1
6,693
2020-04-21 13:14:00.810
how to make my python project as a software in ubuntu
I've made a python program using Tkinter(GUI) and I would like to enter it by creating a dedicated icon on my desktop (I want to send the file to my friend, without him having to install python or any interpreter). The file is a some-what game that I want to share with friends and family, which are not familiar with coding. I am using Ubuntu OS.
You can use pip3 install pyinstaller, then use PyInstaller to convert your file to a .exe that can run on Windows with this command: pyinstaller --onefile -w yourfile. It can then run on Windows without installing anything, and you can use Wine to run it on Ubuntu.
1.2
true
1
6,694
2020-04-21 19:42:41.703
How to set attribute in nifi processor using pure Python not jython?
How to set properties (attribute) in nifi processor using pure Python in ExecuteStreamCommand processor I don't want to use Jython, I know it can be done using pinyapi. But don't know how to do it. I just want to create an attribute using Python script.
How to set properties(attribute) in nifi processor using pure python in ExecuteStreamCommand processor I don't want to use Jython You can't do it from ExecuteStreamCommand. The Python script doesn't have the ability to interact with the ProcessSession, which is what it would need to set an attribute. You'd need to set up some operations after it to add the attributes like an UpdateAttribute instance.
0
false
1
6,695
2020-04-22 10:04:50.737
Navigating through Github repos
I am currently trying to find end-to-end speech recognition solutions to implement in python (I am a data science student btw). I have searched for projects on github and find it very hard to comprehend how these repositories work and how I can use them for my own project. I am mainly confused with the following: how do repositories usually get used by other developers and how can I use them best for my specific issue? How do I know if the proposed solution is working in python? What is the usual process in installing the project from the repo? Sorry for the newbie question but I am fairly new to this. Thank you
You can read the documentation(README.md) there you can have all the information you need. You can install the project from a repo by cloning or by downloading zip.
0
false
1
6,696
2020-04-22 14:02:22.120
Agent repeats the same action circle non stop, Q learning
How can you prevent the agent from non-stop repeating the same action circle? Of course, somehow with changes in the reward system. But are there general rules you could follow or try to include in your code to prevent such a problem? To be more precise, my actual problem is this one: I'm trying to teach an ANN to learn Doodle Jump using Q-Learning. After only a few generations the agent keeps jumping on one and the same platform/stone over and over again, non-stop. It doesn't help to increase the length of the random-exploration-time. My reward system is the following: +1 when the agent is living +2 when the agent jumps on a platform -1000 when it dies An idea would be to reward it negative or at least with 0 when the agent hits the same platform as it did before. But to do so, I'd have to pass a lot of new input-parameters to the ANN: x,y coordinates of the agent and x,y coordinates of the last visited platform. Furthermore, the ANN then would also have to learn that a platform is 4 blocks thick, and so on. Therefore, I'm sure that this idea I just mentioned wouldn't solve the problem, contrarily I believe that the ANN would in general simply not learn well anymore, because there are too many unuseful and complex-to-understand inputs.
This is not a direct answer to the very generally asked question. I found a workaround for my particular DoodleJump example, probably someone does something similar and needs help: While training: Let every platform the agent jumped on disappear after that, and spawn a new one somewhere else. While testing/presenting: You can disable the new "disappear-feature" (so that it's like it was before again) and the player will play well and won't hop on one and the same platform all the time.
1.2
true
1
6,697
2020-04-22 19:47:18.237
Is vectorization a hardware/framework specific feature or is it a good coding practice?
I am trying to wrap my head around vectorization (for numerical computing), and I'm coming across seemingly contradictory explanations: My understanding is that it is a feature built into low-level libraries that takes advantage of parallel processing capabilities of a given processor to perform operations against multiple data points simultaneously. But several tutorials seem to be describing it as a coding practice that one incorporates into their code for more efficiency. How is it a coding practice, if it is also a feature you have or you don't have in the framework you are using. A more concrete explanation of my dilemma: Let's say I have a loop to calculate an operation on a list of numbers in Python. To vectorize it, I just import Numpy and then use an array function to do the calculation in one step instead of having to write a time consuming loop. The low level C routines used by Numpy will do all the heavy lifting on my behalf. Knowing about Numpy and how to import it and use it is not a coding practice, as far as I can tell. It's just good knowledge of tools and frameworks, that's all. So why do people keep referring to vectorization as a coding practice that good coders leverage in their code?
Vectorization can mean different things in different contexts. In numpy we usually mean using the compiled numpy methods to work on whole arrays. In effect it means moving any loops out of interpreted Python and into compiled code. It's very specific to numpy. I came to numpy from MATLAB years ago, and APL before that (and physics/math as a student), so I've been used to thinking in terms of whole arrays/vectors/matrices for a long time. MATLAB now does a lot of just-in-time compiling, so programmers can write iterative code without a performance penalty. numba (and cython) lets numpy users do some of the same, though there are still a lot of rough edges, as can be seen in numba-tagged questions. Parallelization and other means of taking advantage of modern multi-core computers is a different topic; that usually requires additional packages. I took issue with a comment that loops are not Pythonic, and I should qualify that a bit. Python does have tools for avoiding large, hard-to-read loops, things like list comprehensions, generators and other comprehensions. Performing a complex task by stringing together comprehensions and generators is good Python practice, but that's not 'vectorization' (in the numpy sense).
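A tiny illustration of the difference (timings left out; the data is arbitrary):

    import numpy as np

    values = list(range(1_000_000))

    # interpreted Python loop
    squares_loop = [v * v for v in values]

    # numpy "vectorized" version: one call, the loop runs in compiled code
    arr = np.array(values)
    squares_vec = arr * arr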
0.591696
false
2
6,698
2020-04-22 19:47:18.237
Is vectorization a hardware/framework specific feature or is it a good coding practice?
I am trying to wrap my head around vectorization (for numerical computing), and I'm coming across seemingly contradictory explanations: My understanding is that it is a feature built into low-level libraries that takes advantage of parallel processing capabilities of a given processor to perform operations against multiple data points simultaneously. But several tutorials seem to be describing it as a coding practice that one incorporates into their code for more efficiency. How is it a coding practice, if it is also a feature you have or you don't have in the framework you are using. A more concrete explanation of my dilemma: Let's say I have a loop to calculate an operation on a list of numbers in Python. To vectorize it, I just import Numpy and then use an array function to do the calculation in one step instead of having to write a time consuming loop. The low level C routines used by Numpy will do all the heavy lifting on my behalf. Knowing about Numpy and how to import it and use it is not a coding practice, as far as I can tell. It's just good knowledge of tools and frameworks, that's all. So why do people keep referring to vectorization as a coding practice that good coders leverage in their code?
Vectorization leverages the SIMD (Single Instruction Multiple Data) instruction set of modern processors. For example, assume your data is 32 bits; back in the old days one addition would cost one instruction (say 4 clock cycles, depending on the architecture). Intel's latest SIMD instructions now process 512 bits of data all at once with one instruction, enabling you to do 16 such additions in parallel. Unless you are writing assembly code, you had better make sure that your code is efficiently compiled to leverage the SIMD instruction set; this is taken care of by the standard packages. Your next speed-up opportunities are in writing code that leverages multicore processors and moves your loops out of interpreted Python; again, this is taken care of by libraries and frameworks. If you are a data scientist, you should only care about calling the right packages/frameworks, avoiding reimplementing logic already offered by the libraries (with loops being a major example), and focusing on your application. If you are a framework/low-level code developer, you had better learn the good coding practices or your package will never fly.
0.265586
false
2
6,698
2020-04-23 09:33:37.790
How to fix not updating problem with static files in Django port 8000
So when you make changes to your CSS or JS static file and run the server, sometimes what happens is that the browser skips the static file you updated and loads the page using its cache memory, how to avoid this problem?
You have DEBUG = False in your settings.py. Switch it to DEBUG = True and have fun.
0
false
2
6,699
2020-04-23 09:33:37.790
How to fix not updating problem with static files in Django port 8000
So when you make changes to your CSS or JS static file and run the server, sometimes what happens is that the browser skips the static file you updated and loads the page using its cache memory, how to avoid this problem?
Well, there are multiple ways to avoid this problem. The simplest is a hard refresh: on Mac, Command+Option+R; on Windows, Ctrl+F5. This re-downloads the cached files, so the browser picks up the updated static files. Another way is to create a new static file, paste in the code of the previously used static file, and run the server again; in this case the browser doesn't use its cache for rendering the page, since it assumes it is a different file.
1.2
true
2
6,699
2020-04-23 13:21:50.967
How to monitor memory usage of individual celery tasks?
I would like to know the max memory usage of a celery task, but from the documentations none of the celery monitoring tools provide the memory usage feature. How can one know how much memory a task is taking up? I've tried to get the pid with billiard.current_process and use that with memory_profiler.memory_usage but it looks like the current_process is the worker, not the task. Thanks in advance.
Celery does not give this information, unfortunately. With a little bit of work it should not be difficult to implement your own inspect command that actually samples each worker process. Then you would have all the necessary data for what you need. If you do this, please share the code around, as other people may need it...
0
false
1
6,700
2020-04-23 16:53:28.387
How can I use the PyCharm debugger with Google Cloud permissions?
I have a simple flask app that talks to Google Cloud Storage. When I run it normally with python -m api.py it inherits Google Cloud access from my cli tools. However, when I run it with the PyCharm debugger it can no longer access any Google Services. I've been trying to find a way to have the PyCharm debugger inherit the permissions of my usual shell but I'm not seeing any way to do that. Any tips on how I can use the PyCharm debugger with apps that require access to Google Cloud?
I usually download the credentials file and set the GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json" environment variable in PyCharm. I usually create a directory called auth, place the credential file there and add that directory to .gitignore. I don't know if it is a best practice or not, but it gives me an opportunity to limit what my program can do, so if I write something that may have a disrupting effect, I don't have to worry about it. Works great for me. I later use the same service account and attach it to the Cloud Function and it works out just fine for me.
1.2
true
1
6,701
2020-04-24 02:29:15.327
How often should i run requirements.txt file in my python project?
Working on a python project and using pycharm . Have installed all the packages using requirements.txt. Is it a good practice to run it in the beginning of every sprint or how often should i run the requirements.txt file ?
The answer is NO. Let's say you're working on your project and have already installed all the packages in requirements.txt into your virtual environment; at this point your environment is already set up. Keep working on the project and install a new package with pip or whatever; now your environment is fine but your requirements.txt is not up to date. You need to update it by adding the new package, but you don't need to reinstall all the packages in it every time this happens. You only run pip install -r requirements.txt when you want to run your project in a different virtual environment.
0
false
1
6,702
2020-04-24 18:18:18.077
Converting days into years, months and days
I know I can use relativedelta to calculate difference between two dates in the calendar. However, it doesn't feet my needs. I must consider 1 year = 365 days and/or 12 months; and 1 month = 30 days. To transform 3 years, 2 months and 20 days into days, all I need is this formula: (365x3)+(30x2)+20, which is equal to 1175. However, how can I transform days into years, months and days, considering that the amount of days may or may not be higher than 1 month or 1 year? Is there a method on Python that I can use? Mathemacally, I could divide 1175 by 365, multiply the decimals of the result by 365, divide the result by 30 and multiply the decimals of the result by 30. But how could I do that in Python?
You can use days // 365 (integer division) to get the number of whole years, and days % 365 gives the remaining days to split further into months and days.
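For example, using divmod to reverse the formula from the question:

    days = 1175
    years, rest = divmod(days, 365)        # whole years and what is left over
    months, days_left = divmod(rest, 30)   # whole 30-day months and remaining days
    print(years, months, days_left)        # 3 2 20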
-0.386912
false
1
6,703
2020-04-25 11:10:01.107
Microsoft Visual C++ 14.0 is required error while installing a python module
I was trying to pip install netfilterqueue module with my Windows 7 system, in python 3.8 It returned an error "Microsoft Visual C++ 14.0 is required" My system already has got a Microsoft Visual C++ 14.25. Do I still need to install the 14.0, or is there a way that I can get out of this error? If no, how do I install a lower version without uninstalling or replacing the higher version?
Alright, try uninstalling the higher version and installing the lower version, making sure you download it on the same computer, not on another one. Keep in mind that Windows 7 no longer supports some operations, so I would advise you to upgrade to Windows 10.
0.386912
false
1
6,704
2020-04-25 16:20:35.730
how do i go back to my system python using pyenv in Ubuntu
i installed pyenv and switched to python 3.6.9 (using pyenv global 3.6.9). How do i go back to my system python? Running pyenv global system didnt work
pyenv sets the python used according to ~/.pyenv/version. For a temporary fix, you can write system in it. Afterwards, you'll need to fiddle through your ~/.*rc files and make sure eval "$(pyenv init -)" is called after any changes to PATH made by other programs (such as zsh).
0.135221
false
1
6,705
2020-04-25 17:06:38.983
Advanced game made in pygame is too slow
I've been working on a game for a month and it's quite awesome. I'm not very new to game developing. There are no sprites and no images, only primitive drawn circles and rectangles. Everything works well except that the FPS gets slow the more I work on it, and every now and then the computer starts accelerating and heating up. My steps every frame (besides input handling): updating every object state (physics, collision, etc), around 50 objects some more complex than the other drawing the world, every pixel on (1024,512) map. drawing every object, only pygame.draw.circle or similar functions There is some text drawing but font.render is used once and all the text surfaces are cached. Is there any information on how to increase the speed of the game? Is it mainly complexity or is there something wrong with the way I'm doing it? There are far more complex games (not in pygame) that I play with ease and high FPS on my computer. Should I move to different module like pyglet or openGL? EDIT: thank you all for the quick response. and sorry for the low information. I have tried so many things but in my clumsiness I heavent tried to solve the "draw every pixel every single frame proccess" I changed that to be drawn for changes only and now it runs so fast I have to change parameters in order to make it reasonably slow again. thank you :)
Without looking at the code it's hard to say something helpful. It's possible that you have unnecessary loops/checks when updating objects. Have you tried increasing/decreasing the number of objects? How does the performance change when you do that? Have you tried playing other games made with pygame? Is your PC just slow? I don't think that pygame should have a problem with 50 simple shapes. I have some badly optimized games with 300+ objects and 60+ fps (with physics: collision, gravity, etc.), so I think pygame can easily handle 50 simple shapes. You should probably post a code example of how you iterate over your objects and what your objects look like.
1.2
true
1
6,706
2020-04-26 18:20:13.677
Are there any other ways to share / run code?
So I just created a simple script with selenium that automates the login for my University's portal. The first reaction I got from a friend was: ah nice, you can put that on my pc as well. That would be rather hard as he'd have to install python and run it through an IDE or through his terminal or something like that and the user friendliness wouldn't be optimal. Is there a way that I could like wrap it in a nicer user interface, maybe create an app or something so that I could just share that program? All they'd have to do is then fill in their login details once and the program then logs them in every time they want. I have no clue what the possibilities for that are, therefore I'm asking this question. And more in general, how do I get to use my python code outside of my IDE? Thusfar, I've created some small projects and ran them in PyCharm and that's it. Once again, I have no clue what the possibilities are so I also don't really know what I'm asking. If anyone gets what I mean by using my code further than only in my IDE, I'd love to hear your suggestions!
The IDE running your program is the same as you running your program in the console. If you don't want them to have to install Python (and they are on Windows) you can convert the script to an exe with py2exe. If they are on Linux, they probably have Python installed already and can run your program with "python script.py". But tell your friends to install Python; whether they program or not, it will always come in handy.
0.386912
false
1
6,707
2020-04-27 17:40:11.477
Modular python admin pages
I'm building a personal website that I need to apply modularity to it for purpose of learning. What it means is that there is a model that contains x number of classes with variations, as an example a button is a module that you can modify as much depending on provided attributes. I also have a pages model that need to select any of created modules and render it. I can't find any documentation of how to access multiple classes from one field to reference to. Model structure is as below: Modules, contains module A and module B Pages should be able to select any of module A and order its structure. Please let me know if not clear, this is the simplest form I could describe. Am I confusing this with meta classes? How one to achieve what I'm trying to achieve?
I ended up using proxy models but will also try the polymorphic approach. This is exactly what they are designed to do: inherit models from a parent model in both one-to-many and many-to-many relationships.
1.2
true
1
6,708
2020-04-27 18:30:21.580
How to deploy changes made to my django project, which is hosted on pythonanywhere?
I am new to git and Pythonanywhere. So, I have a live Django website which is hosted with the help of Pythonanywhere. I have made some improvements to it. I have committed and pushed that changes to my Github repository. But, now I don't know that how to further push that changes to my Pythonanywhere website. I am so confused. Please help!!! Forgive me, I am new to it.
You need to go to the repo on PythonAnywhere in a Bash console and run git pull (you may need to run ./manage.py migrate if you made changes to your models), and then reload the app on the "Web" configuration page on PythonAnywhere.
1.2
true
1
6,709
2020-04-30 04:45:05.280
How is variable assignment implemented in CPython?
I know that variables in Python are really just references/pointers to some underlying object(s). And since they're pointers, I guess they somehow "store" or are otherwise associated with the address of the objects they refer to. Such an "address storage" probably happens at a low level in the CPython implementation. But my knowledge of C isn't good enough to infer this from the source code, nor do I know where in the source to begin looking. So, my question is: In the implementation of CPython, how are object addresses stored in, or otherwise associated with, the variables which point to them?
In module scope or class scope, variables are implemented as entries in a Python dict. The pointer to the object is stored in the dict. In older CPython versions, the pointer was stored directly in the dict's underlying hash table, but since CPython 3.6, the hash table now stores an index into a dense array of dict entries, and the pointer is in that array. (There are also split-key dicts that work a bit differently. They're used for optimizing object attributes, which you might or might not consider to be variables.) In function scope, Python creates a stack frame object to store data for a given execution of a function, and the stack frame object includes an array of pointers to variable values. Variables are implemented as entries in this array, and the pointer to the value is stored in the array, at a fixed index for each variable. (The bytecode compiler is responsible for determining these indices.)
1.2
true
1
6,710