Dataset columns:
Q_CreationDate: string (length 23)
Title: string (length 11 to 149)
Question: string (length 25 to 6.53k)
Answer: string (length 15 to 5.1k)
Score: float64 (-1 to 1.2)
Is_accepted: bool (2 classes)
N_answers: int64 (1 to 17)
Q_Id: int64 (0 to 6.76k)
2020-02-05 14:18:27.763
How to use logger with one basic config across app in Python
I want to improve my understanding of how to use logging correctly in Python. I want to use an .ini file to configure it, and what I want to do is: define the basic logger config through .fileConfig(...) in some .py file, then import logging and call logger = logging.getLogger(__name__) across the app, and be sure that it uses the config file I loaded earlier in a different .py file. I have read a few resources on the Internet, but they describe tricks for configuring it, etc. What I want to understand is whether .fileConfig works across the whole app or only for the file/module where it was declared. It looks like I am missing some small tip or something like that.
It works across the whole app. Be sure to configure the correct loggers in the config. logger = logging.getLogger(__name__) works well if you know how to handle having a different logger in every module, otherwise you might be happier just calling logger = logging.getLogger("mylogger") which always gives you the same logger. If you only configure the root logger you might even skip that and simply use logging.info("message") directly.
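A minimal sketch of that setup, assuming a config file named logging.ini that defines at least a root logger (the filename is an assumption):

```python
# main.py: load the config once, early in startup
import logging.config

logging.config.fileConfig("logging.ini", disable_existing_loggers=False)

# any_other_module.py: no further configuration needed
import logging

logger = logging.getLogger(__name__)
logger.info("uses the handlers/levels defined in logging.ini")
```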
1.2
true
1
6,533
2020-02-05 17:03:07.580
Is there any way to remove BatchToSpaceND from tf.layers.conv1d?
As I understand it, tf.layers.conv1d uses a pipeline like this: BatchToSpaceND -> conv1d -> SpaceToBatchND. So the question is: how can I remove (or disable) BatchToSpaceND and SpaceToBatchND from the pipeline?
As far as I have investigated, it is impossible to remove BatchToSpaceND and SpaceToBatchND from tf.layers.conv1d without changing and rebuilding the TensorFlow source code. One solution is to replace the layer with tf.nn.conv1d, which is the low-level representation of convolutional layers (in fact, tf.layers.conv1d is a wrapper around tf.nn.conv1d). These low-level implementations don't include BatchToSpaceND and SpaceToBatchND.
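A rough sketch of the suggested replacement, assuming TensorFlow 1.x graph mode and made-up tensor shapes (sequences of length 100 with 8 channels, kernel width 5, 16 output filters):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.placeholder(tf.float32, [None, 100, 8])            # [batch, length, in_channels], example shape
filters = tf.get_variable("conv1d_filters", [5, 8, 16])   # [kernel_width, in_channels, out_channels]
y = tf.nn.conv1d(x, filters, stride=1, padding="SAME")    # low-level op, without the BatchToSpaceND wrapper
```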
1.2
true
1
6,534
2020-02-07 09:05:30.220
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
What exactly does this error mean and how can I fix it? I am running a server on port 8000 of localhost. ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
Is a firewall running on the server? If so, that may be blocking connections. You could disable the firewall or add an exception on the server side to allow connections on port 8000.
0
false
1
6,535
2020-02-07 17:33:20.807
Is there a way to get one, unified, mapping from all indices under one alias?
For example, if I have 10 indices with similar names and they all alias to test-index, how would I get test-index-1, test-index-2, test-index-3, test-index-4, test-index-5, test-index-6, test-index-7, test-index-8, test-index-9, and test-index-10 to all point to the mapping currently in use when you do a GET /test-index/_mapping?
Not sure what you mean by a 'unified' mapping, but you can always use wildcards in a mapping request. For example, /test-inde*/_mapping would give the mapping of all indices matching that pattern.
0
false
1
6,536
2020-02-08 13:56:49.837
How do I add a mobile geolocation marker into a Folium webmap that automatically updates position?
I have a webmap which is made in Python using Folium. I am adding various geojson layers from an underlying database. I would like to do spatial analysis based on the user's location and their position relative to the various map overlays. As part of this I want to display a marker on the map which indicates the user's current position, and which updates regularly as they move around. I know how to add markers to the map from within Python, using Folium. I know how to get a constantly updating latitude/longitude of the user using the JS navigator.geolocation.watchPosition(showPosition), which then passes a position variable to the function showPosition. I am currently just displaying this as text on the website. What I have not been able to do is to add a marker to the Folium map from inside the webpage, using JS/Leaflet (as Folium is just a wrapper for Leaflet, I think I should be able to do this). The Folium map object seems to be assigned a new variable name every time the webpage is loaded, and I don't know how to "get" the map element and add a marker using the Leaflet syntax L.marker([lat, lon]).addTo(name_of_map_variable_which_keeps_changing). Alternatively there might be a way to "send" the constantly changing lat/lon variables from the webpage back to the Python script so that I can just use Folium to add the marker. But I have been unable to figure this out or find the right assistance online and would appreciate any help.
OK, I have figured out the main part of the question: how to add a user location marker to the Folium map. It is actually very simple: https://python-visualization.github.io/folium/plugins.html#folium.plugins.LocateControl I am still unable to pass the user's lat/lon through to my Python script so that I can perform spatial queries using that location, so I am looking forward to anyone being able to answer that part. I may have to post that as a separate question.
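A small sketch of the LocateControl approach mentioned above; the coordinates are placeholders and the auto_start option is assumed to be forwarded to the underlying Leaflet plugin:

```python
import folium
from folium import plugins

m = folium.Map(location=[51.5, -0.1], zoom_start=12)   # placeholder start position
plugins.LocateControl(auto_start=True).add_to(m)        # adds a locate-me control that follows the user
m.save("map.html")
```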
0
false
1
6,537
2020-02-09 11:13:26.383
Convert String into the format that readUTF() expects
I created a client in Java and a server in Python. The Java client receives data using readUTF() of the class DataInputStream. My problem is that the function readUTF() expects a modified version of 'utf-8' that I don't know how to generate on the (Python) server side.
I got it! Using the function read() of the class DataInputStream does work. The problem was that I initialized the destination buffer like this: byte[] ans = {}, instead of allocating some bytes. Thanks everyone!
0
false
1
6,538
2020-02-09 20:57:38.340
Change default version of python in ubuntu 18.04
I just installed Ubuntu 18.04 and I really don't know how everything works yet. I use the latest version of Python on my Windows system (3.8.1) and would like to use that version in Ubuntu as well, but the pre-installed version of Python is 2.7. Is there a way to uninstall that old version of Python instead of changing the alias of the python command to match the version I want to use? Can you do that, or does Ubuntu need that version? If you could help me or explain this to me I would appreciate it.
Some services and applications in Ubuntu use Python 2.x to run, so it is not advisable to remove it. Rather, virtual environments may be a good practice. There, you can work on Python 3.x as per your needs without messing up the system's dependencies.
0
false
1
6,539
2020-02-10 10:36:23.260
How to convert a 3d model into an array of points
I'm building my own 3D engine and I need to import 3D models into it, but I don't know how to do it. I wonder if it is possible to convert a 3D model into an array of points; if it is possible, how do you do it?
This isn't something I've done before, but the premise is interesting, so I thought I'd share my idea, as I have worked with grids (pretty much an array) in 3D space during my time at university. If you consider 3D space, you could represent that space as a three-dimensional array quite simply, with each dimension representing an axis. You could then treat each element in that array as a point in space and populate it with a value (say a Boolean of true/false, 1/0) to identify the points of your model within that three-dimensional space. All you'd need is the height, width and depth of your model, with each of these being a dimension of your array. Populate the values with 0/false if the model does not have a point in that space, or 1/true if it does. This would then give you a representation of your model as a 3D array.
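A minimal numpy sketch of that idea, assuming the model is available as a list of (x, y, z) vertex coordinates (the vertices and cell size below are made up):

```python
import numpy as np

# Hypothetical model: a list of (x, y, z) vertex coordinates in model units.
vertices = np.array([[0.0, 0.0, 0.0], [1.2, 3.4, 0.5], [2.8, 1.1, 4.9]])

resolution = 0.5  # size of one grid cell; make coarser or finer as needed
mins = vertices.min(axis=0)
idx = np.floor((vertices - mins) / resolution).astype(int)

grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)   # width x height x depth boolean grid
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True        # mark cells occupied by a model point
```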
0
false
1
6,540
2020-02-10 11:09:57.583
Discord.py: Adding someone to a discord server with just the discord ID
I'm trying to add someone to a specific server and then DM said person with just their Discord ID. The way it works is that someone logs in using Discord OAuth2 on a website, and after they are logged in they should be added to a specific server, and then the bot should DM them saying something like Welcome to the server! Does anyone have an idea how to do that? Thanks for any help.
It is not possible to leave or join servers with OAuth2. Nor is it possible to DM a user on Discord with a bot unless they share a mutual server.
0
false
1
6,541
2020-02-10 21:41:21.090
Run Python Script on AWS and transfer 5GB of files to EC2
I am an absolute beginner in AWS: I have created a key and an instance. The Python script I want to run in the EC2 environment needs to loop through around 80,000 filings, tokenize the sentences in them, and use these sentences for some unsupervised learning. This might be a duplicate, but I can't find a way to copy these filings to the EC2 environment and run the Python script in EC2; I am also not very sure how I can use boto3. I am using Mac OS. I am just looking for any way to speed things up. Thank you so much! I am forever grateful!!!
Here's what I tried recently: create the bucket and keep it publicly accessible; create the role and add the HTTP option; upload all the files and make sure they are publicly accessible; get the HTTP link of the S3 file; connect to the instance through PuTTY; wget copies the file into the EC2 instance. If your files are in zip format, a one-time copy is enough to move all the files into the instance.
1.2
true
2
6,542
2020-02-10 21:41:21.090
Run Python Script on AWS and transfer 5GB of files to EC2
I am an absolute beginner in AWS: I have created a key and an instance. The Python script I want to run in the EC2 environment needs to loop through around 80,000 filings, tokenize the sentences in them, and use these sentences for some unsupervised learning. This might be a duplicate, but I can't find a way to copy these filings to the EC2 environment and run the Python script in EC2; I am also not very sure how I can use boto3. I am using Mac OS. I am just looking for any way to speed things up. Thank you so much! I am forever grateful!!!
Here's one way that might help: create a simple IAM role that allows S3 access to the bucket holding your files; apply that IAM role to the running EC2 instance (or launch a new instance with the IAM role); install the awscli on the EC2 instance; SSH to the instance and sync the S3 files to the EC2 instance using aws s3 sync; run your app. I'm assuming you've launched EC2 with enough disk space to hold the files.
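Since the question mentions boto3, here is a hedged sketch of pulling the filings from S3 onto the instance with it; the bucket name, key prefix, and target directory are placeholders:

```python
import os
import boto3

bucket = boto3.resource("s3").Bucket("my-filings-bucket")   # placeholder bucket name
os.makedirs("/home/ec2-user/filings", exist_ok=True)

for obj in bucket.objects.filter(Prefix="filings/"):         # placeholder key prefix
    if obj.key.endswith("/"):
        continue  # skip folder placeholder keys
    target = os.path.join("/home/ec2-user/filings", os.path.basename(obj.key))
    bucket.download_file(obj.key, target)
```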
0
false
2
6,542
2020-02-11 04:04:16.040
pyzk how to get the result of live capture
: 1495 : 2020-02-11 11:55:00 (1, 0) is my sample result, but when I try to split it, it gives me the error Process terminate : 'Attendance' object has no attribute 'split'. In the documentation it says print (attendance) # Attendance object. How do I access it?
I found the solution: I checked the GitHub repository of pyzk, looked for the Attendance class, and found all the attributes returned by live_capture. Thank you :)
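A hedged sketch of reading those attributes; the attribute names (user_id, timestamp, status, punch) are what the pyzk Attendance class appears to expose, so verify them against the class in the repository:

```python
# assumes `conn` is an already connected pyzk ZK connection object
for attendance in conn.live_capture():
    if attendance is None:
        continue  # live_capture yields None on idle periods/timeouts
    print(attendance.user_id, attendance.timestamp, attendance.status, attendance.punch)
```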
1.2
true
1
6,543
2020-02-11 08:29:03.657
which python vs PYTHONPATH
If I type which python I get: /home/USER/anaconda3/bin/python. If I type echo $PYTHONPATH I get: /home/USER/terrain_planning/devel/lib/python2.7/dist-packages:/opt/ros/melodic/lib/python2.7/dist-packages. Should those not be the same? And is it not better to set it to usr/lib/python/? How would I do that: add it to the PYTHONPATH, or set the PYTHONPATH to that? And how do I set which python?
You're mixing up 2 environment variables. PATH is where which looks up executables when they're accessed by name only; this variable is a list (colon/semicolon separated depending on the platform) of directories containing executables and is not Python specific. which python just looks in this variable and prints the full path. PYTHONPATH is a Python-specific list of directories (colon/semicolon separated like PATH) where Python looks for packages that aren't installed directly in the Python distribution. The name and format are intentionally very close to the system/shell PATH variable, but it's not used by the operating system at all, just by Python.
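A quick way to see both from inside Python itself:

```python
import sys

print(sys.executable)  # the interpreter found via PATH (what `which python` resolves to)
print(sys.path)        # module search path; PYTHONPATH entries are folded in here
```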
1.2
true
1
6,544
2020-02-11 08:37:01.607
Add a new column to multiple .csv and populate with filename
I am new to Python and I have a folder with 15 Excel files. I am trying to rename a specific column in each file to a standard name. For instance, I have columns named "name" and "server" in different files, but they contain the same information, so I need to rename them to a standard name like "server name", and I don't know how to start.
If the position of the column is the same across all Excel files, you can iterate over all 15 files, locate the position of the column and replace the text directly. Alternatively, you can iterate over all the files via read_excel (or read_csv depending on your context), read them as dataframes, rename the necessary column, and overwrite the file. Below is a reference syntax: df.rename(columns={ df.columns[1]: "your value" }, inplace = True)
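A short sketch of the second approach; the folder, file pattern, and column names are assumptions to adapt to your files:

```python
import glob
import pandas as pd

for path in glob.glob("data/*.xlsx"):     # hypothetical folder and extension
    df = pd.read_excel(path)
    # map every variant of the column name to the standard one
    df.rename(columns={"name": "server name", "server": "server name"}, inplace=True)
    df.to_excel(path, index=False)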
1.2
true
1
6,545
2020-02-11 13:48:39.953
Setting global JsonEncoder in Python
Basically, I'm fighting with the age-old problem that Python's default JSON encoder does not support datetime. However, all the solutions I can find call json.dumps and manually pass the "proper" encoder on each invocation, and honestly, that can't be the best way to do it, especially if you want to use a wrapper like jsonify to set up your response object properly, where you can't even specify these parameters. So, long story short: how do I override the global default encoder in Python's JSON implementation with a custom one that actually supports the features I want? EDIT: ok, so I figured out how to do this for my specific use case (inside Flask): you can do app.json_encoder = MyCustomJSONEncoder there. However, how to do this outside of Flask is still an interesting question.
Unfortunately, I could not find a way to set default encoders or decoders for the json module. So the best approach is to do what Flask does: wrap the calls to dump or dumps and provide a default in that wrapper.
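A minimal sketch of such a wrapper, to be imported project-wide instead of calling json.dumps directly:

```python
import json
from datetime import datetime

def _default(obj):
    # fallback for types the stdlib encoder doesn't handle
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"{type(obj).__name__} is not JSON serializable")

def dumps(obj, **kwargs):
    # project-wide wrapper: import this instead of json.dumps
    kwargs.setdefault("default", _default)
    return json.dumps(obj, **kwargs)

print(dumps({"now": datetime(2020, 2, 11, 13, 48)}))
```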
0
false
1
6,546
2020-02-11 14:46:10.263
How can I use radish with Pycharm to have behave step autocomplete
Note: radish is a "Gherkin-plus" framework; it adds Scenario Loops and Preconditions to the standard Gherkin language, which makes it more friendly to programmers. So how can I use it, or use another method, to get Gherkin step autocomplete in PyCharm? Thanks.
I solved this problem by buying the Professional version of PyCharm; autocomplete is not available in the Community version :(
1.2
true
1
6,547
2020-02-11 18:40:45.797
Estimating Dataframe memory usage from file sizes
If I have a list of files in a directory is it possible to estimate a memory use number that would be taken up by reading or concatenating the files using pd.read_csv(file) or pd.concat([df1, df2])? I would like to break these files up into concatenation 'batches' where each batch will not exceed a certain memory usage so I do not run into local memory errors. Using os.path.getsize() will allow me to obtain the file sizes and df.memory_usage() will tell me how much memory the dataframe will use once it's already read in but is there a way to estimate this with just the files themselves?
You could open each CSV, read only the first 1000 lines into a DataFrame, and then check memory usage. Then scale the estimated memory usage by the number of lines in the file. Note that memory_usage() isn't accurate with default arguments, because it won't count strings' memory usage. You need memory_usage(deep=True), although that might overestimate memory usage in some cases. But it is better to overestimate than underestimate.
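A rough sketch of that estimate; it samples the first 1000 rows and scales by the total line count (the CSV parsing options and header handling are simplified assumptions):

```python
import pandas as pd

def estimate_memory_bytes(path, sample_rows=1000):
    # Measure a sample (deep=True counts string storage), then scale by the total row count.
    sample = pd.read_csv(path, nrows=sample_rows)
    per_row = sample.memory_usage(deep=True).sum() / len(sample)
    with open(path) as f:
        total_rows = sum(1 for _ in f) - 1  # minus the header line
    return per_row * total_rows
```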
0
false
1
6,548
2020-02-12 07:09:03.673
How do I find correlation between time events and time series data in python?
I have two different Excel files. One of them contains time series data (268,943 accident time rows) as below. The other file contains values for 14 workers measured daily from 8 to 17, over 4 months (all data merged in one file). I am trying to understand the correlation between accident times and values (hourly from 8 to 17, daily from Monday to Friday, and monthly). Which statistical method fits (normalized auto- or cross-correlation), and how can I do that? Generally, in other questions, correlation analysis is performed between two time-series-based values, but I think this is a little bit different; also, here the times are different. Thanks in advance.
I think the accident times and the blood sugar levels are not coming from the same source, so I think it is not possible to draw a correlation between these two separate datasets. If you would like to assume that the blood sugar levels of all 14 workers reflect those of the workers in the accident dataset, that is a different story. But what if those who had accidents had a significantly different blood sugar level profile than the rest, and what if your tiny dataset of 14 workers does not contain such examples? I think the best you can do is to graph the blood sugar levels of your 14-worker dataset, analyze the accident dataset separately in a similar way, and try to see visually whether there is any correlation.
0.673066
false
1
6,549
2020-02-12 13:25:20.013
How to get full path for any (including local) function in python?
f"{f.__module__}.{f.__name__}" doesn't work because function f can be local, eg inside another function. We need to add some kind of marked (.<local>.) in the path to specify that this function is local. But how to determine when we need to add this marker?
Use f.__qualname__ instead of __name__.
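A quick demonstration of the difference:

```python
def outer():
    def inner():
        pass
    return inner

f = outer()
print(f.__name__)      # inner
print(f.__qualname__)  # outer.<locals>.inner  (the local marker is included automatically)
```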
1.2
true
1
6,550
2020-02-12 14:33:31.370
How to use R models in Python
I have been working on an algorithmic trading project where I used R to fit a random forest using historical data, while the real-time trading system is in Python. I have fitted a model I'd like to use in R, and I am now wondering how I can use this model for prediction purposes in the Python system. Thanks.
There are several options: (1) Random forest is a well-researched algorithm and is available in Python through scikit-learn. Consider implementing it natively in Python if that is the end goal. (2) If that is not an option, you can call R from within Python using the rpy2 library. There is plenty of online help available for this library, so just do a Google search for it. Hope this helps.
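A hedged sketch of option (1), refitting the forest natively in Python with scikit-learn; the random arrays below are placeholders standing in for the exported historical features and targets:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder data; in practice, export the same training data used in R.
X = np.random.rand(500, 4)
y = np.random.rand(500)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
prediction = model.predict(X[:1])  # call this from the real-time trading code
```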
0.386912
false
1
6,551
2020-02-12 21:27:39.690
How many users can SQLite handle, Django
I have a Django application which I host on PythonAnywhere. For the database I have used SQLite (the default). I want to know how many users my application can handle. And what if two users submit the registration form or make a post at the same time, will my application crash?
SQLite supports multiple users; however, it locks the database while a write operation is being executed. In other words, concurrent writes cannot be handled by this database, so it is not recommended. You can use PostgreSQL or MySQL as an alternative.
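For reference, switching Django to PostgreSQL is a settings change along these lines; all names and credentials below are placeholders:

```python
# settings.py sketch for switching to PostgreSQL
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "mypassword",
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```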
0
false
1
6,552
2020-02-13 03:37:40.343
How can I cycle through items in a DynamoDB table?
How can I cycle through items in a DynamoDB table? That is, if I have a table containing [A,B,C], how can I efficiently get item A with my first call, item B with my second call, item C with my third call and item A again with my fourth call, repeat? This table could in the future expand to include D, E, F etc and I would like to incorporate the new elements into the cycle. The current way I am doing it is giving each item an attribute "seen". We scan the whole table, find an element that's not "seen" and put it back as "seen". When everything has been "seen", make all elements not "seen" again. This is very expensive.
The efficient way to return items that haven't been seen would be to have an attribute of seen=no included when inserted. Then you could have a global secondary index over that attribute, which you could then Query(). There isn't an efficient way to reset all the seen=yes attributes back to no. Scan() and Query() would both end up returning the entire table, and you'd end up updating records one by one. That will not be fast nor cheap with a large table. EDIT: Once all the records have seen="yes" and you want to reset them back to seen="no", a query on the GSI suggested above will work exactly like a scan: every record will have to be read and then updated. If you have 1M records, each about 1K, and you want to reset them, you're going to need 250K reads (since you can read 4 records with a single 4KB RCU) and 1M writes.
0
false
2
6,553
2020-02-13 03:37:40.343
How can I cycle through items in a DynamoDB table?
How can I cycle through items in a DynamoDB table? That is, if I have a table containing [A,B,C], how can I efficiently get item A with my first call, item B with my second call, item C with my third call and item A again with my fourth call, repeat? This table could in the future expand to include D, E, F etc and I would like to incorporate the new elements into the cycle. The current way I am doing it is giving each item an attribute "seen". We scan the whole table, find an element that's not "seen" and put it back as "seen". When everything has been "seen", make all elements not "seen" again. This is very expensive.
I think the simplest option is probably: use scan with Limit=1 and do not supply ExclusiveStartKey; this will get the first item. If an item was returned and LastEvaluatedKey is present in the response, re-run scan with ExclusiveStartKey set to the LastEvaluatedKey of the prior response and again Limit=1; repeat that step until no item is returned or LastEvaluatedKey is absent. When you get zero items returned, you've hit the end of the table, so go back to the first step. This is an unusual pattern and probably not super-efficient, so if you can share any more about what you're actually trying to do here, then we might be able to propose better options.
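A rough boto3 sketch of that loop; the table name and the process() handler are placeholders:

```python
import boto3

table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name

last_key = None
while True:
    kwargs = {"Limit": 1}
    if last_key:
        kwargs["ExclusiveStartKey"] = last_key
    resp = table.scan(**kwargs)
    items = resp.get("Items", [])
    if items:
        process(items[0])                  # hypothetical handler for the current item
    last_key = resp.get("LastEvaluatedKey")
    if not last_key:
        last_key = None                    # end of table: wrap around and start the cycle again
```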
1.2
true
2
6,553
2020-02-13 12:00:17.947
Python: os.getcwd() randomly fails in mounted network drive
I'm on Debian using python3.7. I have a network drive that I typically mount to /media/N_drive with dir_mode=0777 and file_mode=0777. I generally have no issues with reading/writing files in this network drive. Occasionally, especially soon after mounting the drive, if I try to run any Python script with os.getcwd() (including any imported libraries like pandas) I get the error FileNotFoundError: [Errno 2] No such file or directory. If I cd up to the local drive (cd /media/) the script runs fine. Doing some reading, it sounds like this error indicates that the working directory has been deleted. Yet I can still navigate to the directory, create files, etc. when I'm in the shell. It only seems to be Python's os.getcwd() that has problems. What is more strange is that this behavior is not predictable. Typically if I wait ~1 hour after mounting the drive the same script will run just fine. I suspect this has something to do with the way the drive is mounted maybe? Any ideas how to troubleshoot it?
To me, it seems like a problem with the mount: e.g. the network disk gets disconnected and reconnected, so your cwd is no longer valid. Note: the cwd points to a disk+inode, it is not a name (which is what you see), so /media/a is different from /media/a after a reconnection. If you are looking to solve the mounting, you are in the wrong place; try the Unix & Linux sister site, or Serverfault (also a sister site). If you are looking to solve it programmatically: save the cwd at the beginning of the script and use os.path.join() at every path access, so that you force absolute paths rather than relative paths, and you should then be at the correct location. This is not safe if you happen to read a file during a disconnection.
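A small sketch of the programmatic workaround (the subdirectory and file name are placeholders):

```python
import os

BASE_DIR = os.getcwd()  # capture the working directory once, at startup

def data_path(*parts):
    # Always build absolute paths from the saved base instead of relying on the current cwd.
    return os.path.join(BASE_DIR, *parts)

with open(data_path("results", "output.txt"), "w") as f:  # placeholder file name
    f.write("ok\n")
```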
0.386912
false
1
6,554
2020-02-13 14:20:37.003
Best practice for getting data from Django view into JS to execute on page?
I have been told it is 'bad practice' to return data from a Django view and use those returned items in Javascript that is loaded on the page. For example: if I was writing an app that needed some extra data to load/display a javascript based graph, I was told it's wrong to pass that data directly into the javascript on the page from a template variable passed from the Django view. My first thought: Just get the data the graph needs in the django view and return it in a context variable to be used in the template. Then just reference that context variable directly in the javascript in the template. It should load the data fine - but I was told that is the wrong way. So how is it best achieved? My second thought: Spin up Django Rest Framework and create an endpoint where you pass any required data to and make an AJAX request when the page loads - then load the data and do the JS stuff needed. This works, except for one thing, how do I get the variables required for the AJAX request into the AJAX request itself? I'd have to get them either from the context (which is the 'wrong way') or get the parameters from the URL. Is there any easy way to parse the data out of the URL in JS? It seems like a pain in the neck just to get around not utilizing the view for the data needed and accessing those variables directly in the JS. So, is it really 'bad practice' to pass data from the Django view and use it directly in the Javascript? Are both methods acceptable? What is the Django appropriate way to get data like that into the Javascript on a given page/template?
Passing data directly is not always the wrong way to go. JS is there so you can execute code when everything else is ready. So when they tell you it's the wrong way to pass data directly, it's because there is no point in making the page and data heavier than they should be before JS kicks in. BUT it's okay to pass the essential data so your JS code knows what it has to do. To make it more clear, let's look at your case: you want to render a graph. Graphs are sometimes heavy to render, which can make the first render slow, and most of the time graphs are not so useful without the extra context that your page provides. So in order to make your web page load faster, you let JS load your graph after your webpage has been rendered. And if you're going to wait anyway, then there is no point in passing the extra data, because it makes the page heavier, slows down the initial render, and takes time to parse and convert that data to JSON objects. By removing the data and letting JS load it in the background, you make your page smaller and faster to render. So while a user is reading the context needed for your graph, JS will fetch the data needed and render the graph, which gives your web page a faster initial render. So in general, pass data directly when: the initial data is necessary for JS to do what it has to (configs, defaults, etc.); the time difference matters a lot and you can't wait for an extra request to complete the render; the data is very small. Don't pass data directly when: rendering the extra data takes time anyway (so why not get the data later too?); the data size is big; you need to render something as fast as possible; there are some heavy processes needed for that data; JS can make your data size smaller (deciding exactly what kind of data should be passed using options that are only accessible from JS).
1.2
true
1
6,555
2020-02-13 23:49:41.500
Interpreter won't show in Python 3.8.1
I recently downloaded Python for the first time, and when I load into PyCharm to create a new project and it asks to select an interpreter, Python doesn't show up. Even when I click the plus sign and search through all my files it doesn't show, even though I have the latest Python version installed, and I have Windows 10. I tried deleting both programs and redownloading them, but that doesn't seem to work either. The answer may be obvious, but sorry, I'm a beginner, and looking at videos didn't help either.
You have to navigate to the folder where Python is downloaded and just select it there. Try the following path: C:\Users\YourName\AppData\Local\Programs\Python\Python38-32\python.exe
0.673066
false
1
6,556
2020-02-14 01:43:31.803
How did scipy ver 0.18 scipy.interpolate.UnivariateSpline deal with values not strictly increasing?
I have a program written in python 2.7.5 scipy 0.18.1 that is able to run scipy.interpolate.UnivariateSpline with arrays that are non-sequential. When I try to run the same program in python 2.7.14 / scipy 1.0.0 I get the following error: File "/usr/local/lib/python2.7/site-packages/scipy/interpolate/fitpack2.py", line 176, in init raise ValueError('x must be strictly increasing') Usually I would just fix the arrays to remove the non-sequential values. But in this case I need to reproduce the exact same solution produced by the earlier version of python/scipy. Can anyone tell me how the earlier code dealt with the situation where the values were not sequential?
IIRC this was whatever FITPACK (the Fortran library the UnivariateSpline class wraps) was doing. So the first stop would be to remove the check from your local scipy install and see if that does the trick.
1.2
true
1
6,557
2020-02-14 09:01:36.233
Send mail via Python logging in with Windows Authentication
I have a conceptual doubt; I don't know if it's even possible. Assume I log on to a Windows machine with an account (let's call it AccountA of UserA). This account has access to the mail account (Outlook) of UserA and of another fictional user (UserX, without any password; you log in thanks to Windows authentication), shared by UserA, UserB and UserC. Can I send a mail from UserA using the account of UserX via Python? If so, how should I do the login? Thanks in advance.
An interesting feature of Windows Authentication is that it uses the well-known Kerberos protocol under the hood. In a private environment, that means if a server trusts the Active Directory domain, you can pass the authentication of a client machine to that server, provided the service is Kerberized, even if the server is a Linux or Unix box and is not a domain member. It is mainly used for web servers in corporate environments, but could be used for any Kerberized service. Postfix, for example, is known to accept this kind of authentication. If you want to access an external mail server, you will have to store the credentials in plain text on the client machine, which is bad. An acceptable way would be to use a file only readable by the current user (live protection) in an encrypted folder (at-rest protection).
1.2
true
1
6,558
2020-02-14 17:19:50.673
How to switch two words around in file document in python
I was wondering how to switch two words around in a file document in Python. Example: I want to switch the word motorcycle to car, and car to motorcycle. The way I'm doing it, all the words motorcycle change to car, and because car is then switched to motorcycle, it gets switched back to car. Hopefully that makes sense.
First, replace all occurrences of motorcycle with a placeholder such as carholder. Second, replace all occurrences of car with motorcycle. Third, replace all occurrences of carholder with car. That's it.
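A minimal sketch of those three steps in Python; the file name is a placeholder, and the placeholder string just has to be something that never occurs in the text:

```python
with open("document.txt") as f:          # placeholder file name
    text = f.read()

text = text.replace("motorcycle", "\x00PLACEHOLDER\x00")
text = text.replace("car", "motorcycle")
text = text.replace("\x00PLACEHOLDER\x00", "car")

with open("document.txt", "w") as f:
    f.write(text)
```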
0.673066
false
1
6,559
2020-02-15 11:54:26.760
umqtt.robust on Wemos
I am trying to install micropython-umqtt.robust on my Wemos D1 mini. The way I tried this is as follows. I use the Thonny editor and I have connected the Wemos to the internet. In the REPL I type: import upip and upip.install('micropython-umqtt.simple'), and I get the following error: Installing to: /lib/ Error installing 'micropython-umqtt.simple': Package not found, packages may be partially installed. With upip.install('micropython-umqtt.robust') I get the following error: Error installing 'micropython-umqtt.robust': Package not found, packages may be partially installed. Can umqtt be installed on a Wemos D1 mini? If yes, how do I do this?
Thanks for your help, Reilly. The way I solved it is as follows. With a bit more understanding of MQTT and MicroPython, I found that the only thing that happens when you install umqtt.simple and umqtt.robust is that a new directory umqtt is created in the lib directory of your Wemos, and two files, robust.py and simple.py, are installed inside it. While trying to install them I kept getting error messages, but I found a GitHub page for these two files, so I copied them, made the umqtt directory within the lib directory, and pasted the two copied files into it. Now I can use MQTT on my Wemos.
0.386912
false
2
6,560
2020-02-15 11:54:26.760
umqtt.robust on Wemos
I am trying to install micropython-umqtt.robust on my Wemos D1 mini. The way I tried this is as follows. I use the Thonny editor and I have connected the Wemos to the internet. In the REPL I type: import upip and upip.install('micropython-umqtt.simple'), and I get the following error: Installing to: /lib/ Error installing 'micropython-umqtt.simple': Package not found, packages may be partially installed. With upip.install('micropython-umqtt.robust') I get the following error: Error installing 'micropython-umqtt.robust': Package not found, packages may be partially installed. Can umqtt be installed on a Wemos D1 mini? If yes, how do I do this?
I think the MicroPython build available from micropython.org already bundles MQTT, so there is no need to install it with upip. Try this directly from the REPL: from umqtt.robust import MQTTClient or from umqtt.simple import MQTTClient, and start using it from there: mqtt = MQTTClient(id, server, user, password)
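A hedged usage sketch, assuming the firmware bundles umqtt as described; the broker address, credentials, and topic are placeholders:

```python
from umqtt.robust import MQTTClient

client = MQTTClient("wemos-d1", "192.168.1.10", user="mqtt_user", password="mqtt_pass")
client.connect()
client.publish(b"sensors/temperature", b"21.5")  # topic and payload as bytes
client.disconnect()
```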
1.2
true
2
6,560
2020-02-16 04:54:39.390
how to add python in xilinx vitis
I have implemented a Zynq ZCU102 board design in Vivado and I want to use the final ".XSA" file in Vitis, but after creating a new platform, its languages are C and C++, while the documentation says that Vitis supports Python. My question is how I can add Python to my Vitis platform? Thank you
Running Python on an FPGA needs an operating system. I had to run a Linux OS on my FPGA using PetaLinux and then run the Python code on it.
1.2
true
1
6,561
2020-02-16 14:44:02.093
How to create "add to favorites" functional using Django Rest Framework
I just can't find any information about implementing a system for adding to favorites for registered users. The app has a Post model. It has a couple of fields of string format, and an author field which indicates which user made the POST request, etc. But how do I make it so that a user can add a Post to his "favorites", so that later you can get a JSON response with all the posts that he added, and correspondingly remove posts from favorites? Any ideas?
You can add a favorite_posts field (many-to-many) in your Author model.
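A sketch of that field; here it is declared on Post with a related_name so each user gets a favorite_posts reverse accessor, and the model and field names are assumptions matching the question:

```python
from django.contrib.auth.models import User
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(User, on_delete=models.CASCADE, related_name="posts")
    favorited_by = models.ManyToManyField(User, related_name="favorite_posts", blank=True)

# usage: post.favorited_by.add(user), post.favorited_by.remove(user), user.favorite_posts.all()
```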
-0.386912
false
1
6,562
2020-02-17 04:26:06.590
how to customize hr, month, year in python date time module?
How can I customize the hours, days and months of the datetime module in Python? A day of 5 hours only, a month of 20 days only, and a year of 10 months only, using the datetime module.
I agree with @TimPeters. This just doesn't fit with what datetime does. For your needs, I would be inclined to start my own class from scratch, as that is pretty far from datetime. That said, you could look into monkeypatching datetime, but I would recommend against it. It's a pretty complex beast, and changing something as fundamental as the number of hours in a day will blow away unknown assumptions within the code, and would certainly turn its unit tests upside down. Building your own from scratch is my advice.
0
false
1
6,563
2020-02-17 06:51:16.120
Bad interpreter file not found error when running flask commands
Whenever I run a flask command in my project, I get an error of the form zsh: (correct file path)/venv/bin/flask: bad interpreter: (incorrect, old file path)/venv/bin/python3. I believe the error is due to the file paths not matching, and the second file path no longer existing. I changed the name of the directory for my project when I changed the name of the project, but I don't know how to change the path that flask searches for the interpreter in. Thanks in advance. Edit: I just tried going into the flask file at (correct file path)/venv/bin. I saw that it still had #!(incorrect, old file path)/venv/bin/python3 at the top. I tried changing this to #!(correct file path)/venv/bin/python3, but the same error as before persisted, as well as the flask app not being able to find the flask_login module, which it was not having issues with before.
Ok, I figured out how to fix it. I had to go into my (correct file path)/venv/bin/flask file and change the file path after the #! to the correct file path. I had to do the same for pip, pip3, and pip3.7 which were all in the same location as the flask file. Then I had to reinstall the flask_login package. This fixed everything.
0
false
1
6,564
2020-02-17 13:37:36.463
Implementing saved python regression model to react expo application
I have a Python regression model that predicts one's level of happiness based on user-input data; I have trained and tested it using Python. But I'm using React Native to create my mobile application. The mobile application will take in the user-input data needed and will output a prediction of their level of happiness. Does anyone have an idea how to implement this? Any advice would be appreciated! I lack the experience but have an interest in this area. I'm still learning, so please help me out :)
You need to create a Python API and call it from the mobile application by passing the input features. The Python API will return the forecasted value. This API will load the regression model and make a forecast on the given input features. I hope it will help.
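A minimal sketch of such an API with Flask; the pickle file name, route, and feature list are placeholders:

```python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
with open("happiness_model.pkl", "rb") as f:   # placeholder: the trained, pickled model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]          # e.g. [age, income, sleep_hours]
    prediction = model.predict([features])[0]
    return jsonify({"happiness": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```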
0.386912
false
1
6,565
2020-02-18 14:37:41.817
I have set up a small flask webpage but it only runs on localhost while I would like to make it run on my local network python3.7
I have set up a small Flask webpage, but it only runs on localhost, while I would like to make it run on my local network. How do I do that?
Just my 2 cents on this; I just did some research and there are many suggestions online. Add a parameter to your app.run(): by default it runs on localhost, so change it to app.run(host='0.0.0.0') to run on your machine's IP address. Another thing you could do is use the flask executable to start up your local server, and then use flask run --host=0.0.0.0 to change the default IP, which is 127.0.0.1, and open it up to non-local connections. The thing is, you should use the app.run() method, which is much better than the other methods. Hope it helps a little; if not, good luck :)
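A minimal sketch of the app.run() change:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "reachable from the local network"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # then browse to http://<your-machine-ip>:5000
```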
1.2
true
1
6,566
2020-02-18 23:36:54.463
Do .py Python files contain metadata?
.doc files, .pdf files, and some image formats all contain metadata about the file, such as the author. Is a .py file just a plain text file whose contents are all visible once opened with a code editor like Sublime, or does it also contain metadata? If so, how does one access this metadata?
On Linux and most Unixes, .py files are just text (sometimes Unicode text). On Windows and Mac there are cubbyholes where you can stash data, but I doubt Python uses them. .pyc files, on the other hand, have at least a little metadata in them, or so I've heard. Specifically, there's supposed to be a timestamp in them, so that if you copy a filesystem hierarchy, Python won't automatically recreate all the .pyc files on import. There may or may not be more.
1.2
true
1
6,567
2020-02-19 12:51:11.683
Errors such as: 'Solving environment: failed with initial frozen solve. Retrying with flexible solve' & 'unittest' tab
I am working with Spyder (Python). I want to test my code. I have followed pip install spyder-unittest and pip install pytest. I have restarted the kernel and restarted my Mac as well. Yet the Unit Testing tab does not appear; even in the Run drop-down I cannot find Run unit tests. Does someone know how to do this?
So, I solved the issue by running the command conda config --set channel_priority false, and then proceeded with the unittest installation by running conda install -c spyder-ide spyder-unittest. The first command, conda config --set channel_priority false, may solve other issues such as: Solving environment: failed with initial frozen solve. Retrying with flexible solve
1.2
true
1
6,568
2020-02-19 17:27:39.387
JupyterLab - python open() function results in FileNotFoundError
I am trying to open an existing file in a subfolder of the current working directory. This is my command: fyle = open('/SPAdes/default/{}'.format(file), 'r') The file variable contains the correct filename, the folder structure is correct (working on macOS), and the file exists. This command, however, results in this error message: FileNotFoundError: [Errno 2] No such file or directory: [filename] Does it have anything to do with the way JupyterLab works? How am I supposed to specify the folder structure in Jupyter? I am able to create a new file in the current folder, but I am not able to create one in a subfolder of the current one (it results in the same error message). The folder structure is recognized in the same Jupyter notebook by bash commands, but I am somehow not able to access subfolders using Python code. Any idea as to what is wrong with the way I specified the folder structure? Thanks a lot in advance.
There shouldn’t be a forward slash in front of SPAdes. Paths starting with a slash exist high up in file hierarchy. You said this is a sub-directory of your current working directory.
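In code, that means dropping the leading slash or building the path relative to the working directory, for example:

```python
import os

# `file` holds the filename, as in the question; the path is now relative to the notebook's cwd
fyle = open(os.path.join("SPAdes", "default", file), "r")
```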
0.673066
false
1
6,569
2020-02-19 18:16:29.400
What's a good way to save all data related to a training run in Keras?
I know how to do a few things already: summarise a model with model.summary(), but this actually doesn't print everything about the model, just the coarse details; save a model with model.save() and load it with keras.models.load_model(); get weights with model.get_weights(); get the training history from model.fit(). But none of these seem to give me a catch-all solution for saving everything end to end so that I can 100% reproduce a model architecture, training setup, and results. Any help filling in the gaps would be appreciated.
model.to_json() can be used to convert the model config into JSON format and save it as a JSON file. You can recreate the model from the JSON using model_from_json, found in keras.models. Weights can be saved separately using model.save_weights, which is useful for checkpointing your model. Note that model.save saves both of these together. Saving only the weights and loading them back is useful when you need to work with the variables used in defining the model; in that case, create the model using the code and call model.load_weights.
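A short sketch of that save/restore round trip, assuming model is an already built and trained Keras model; the file names are placeholders:

```python
from keras.models import model_from_json

# save architecture and weights separately
with open("model.json", "w") as f:
    f.write(model.to_json())
model.save_weights("weights.h5")

# restore
with open("model.json") as f:
    restored = model_from_json(f.read())
restored.load_weights("weights.h5")
```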
0
false
1
6,570
2020-02-20 07:22:20.357
continuous log file processing and extract required data using python
I have to analyze a log file which is generated continuously, 24*7, so the data will be huge. I will have credentials for the machine where the log file is generated. But how can I get that streaming data (I mean with any free tools or processes) so that I can use it in my Python code to extract some required information from that log stream? I will also have to prepare a real-time dashboard with that data. Please suggest some possibilities to achieve the above task.
Just a suggestion: you could try ELK. ELK, short for Elasticsearch (ES), Logstash, and Kibana, is the most popular open source log aggregation stack. ES is a NoSQL store; Logstash is a log pipeline system that can ingest data, transform it, and load it into a store like Elasticsearch; Kibana is a visualization layer on top of Elasticsearch. Or you could use MongoDB to handle such a huge amount of data: MongoDB is an open-source document database and a leading NoSQL store that stores data in a JSON format. Process the logs, store them in a JSON format and retrieve them for any further use. Basically it's not a simple question to answer; it depends on the scenario.
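Whatever the storage choice, the Python side often starts with a tail-style reader over the growing file; a hedged sketch (the log path and filter are placeholders):

```python
import time

def follow(path):
    # Yield new lines as they are appended to a growing log file (a tail -f style reader).
    with open(path) as f:
        f.seek(0, 2)              # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)   # nothing new yet; wait and retry
                continue
            yield line

for line in follow("/var/log/app.log"):   # placeholder log path
    if "ERROR" in line:                    # placeholder filter feeding the dashboard
        print(line.rstrip())
```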
0
false
1
6,571
2020-02-20 12:40:01.843
Tika in Python Azure Function
I'm trying to create a function on Azure Function Apps that is given back a PDF and uses the python tika library to parse it. This setup works fine locally, and I have the python function set up in Azure as well, however I cannot figure out how to include Java in the environment? At the moment, when I try to run the code on the server I get the error message Unable to run java; is it installed? Failed to receive startup confirmation from startServer.
So this isn't possible at this time. To solve it, I abstracted the Tika code out into a Java Function App and used that instead.
1.2
true
1
6,572
2020-02-20 14:06:16.640
When python is referred to as single threaded why does in not have the same pitfalls in processing as something like node.js?
I've been doing Node programming for a while and one thing I'm just very tired of is having to worry about blocking the event loop with anything that requires lots of cpu time. I'd also like to expand my language skills to something more focused on machine learning, so python seemed like a good choice based on what I've read. However, I keep seeing that python is also single threaded, but I get the feeling this wording is being used in a different way than how it's usually used in node. Python is the go to language for a lot of heavy data manipulation so I can't imagine it blocks the same way node does. Can someone with more familiarity with python (and some with node) explain how their processing of concurrent requests differs when 1 request is cpu intensive?
First of all, Python is not single-threaded; its standard library contains everything required to manage threads. Threads work fine for IO-bound tasks, but not for CPU-bound tasks, because of the Global Interpreter Lock, which prevents more than one thread from executing Python code at the same time. For data processing tasks, several modules exist that add low-level (C code level) processing and internally manage the GIL to make use of multi-core processing. The most used modules here are scipy and numpy (scientific and numeric processing) and pandas, which is an efficient data frame processing tool using numpy arrays for its underlying containers. Long story short: for IO-bound tasks, Python is great. If your problem is vectorizable through numpy or pandas, Python is great. If your problem is CPU intensive and neither numpy nor pandas will be used, Python is not at its best.
0.386912
false
1
6,573
2020-02-23 01:42:23.757
subprocess.check_call command called not using threads
I'm running the following command using subprocess.check_call: ['/home/user/anaconda3/envs/hum2/bin/bowtie2-build', '-f', '/media/user/extra/tmp/subhm/sub_humann2_temp/sub_custom_chocophlan_database.ffn', '/media/user/extra/tmp/subhm/sub_bowtie2_index', ' --threads 8'] But for some reason it ignores the --threads argument and runs on one thread only. I've checked outside of Python with the same command that the threads are launched. This only happens when calling from subprocess. Any idea how to fix this? Thanks
You are passing ' --threads 8' as a single argument instead of '--threads', '8'. It could also be '--threads=8', but I don't know the command.
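The corrected call from the question would look like this, with the same paths and with --threads and 8 as separate list items:

```python
import subprocess

subprocess.check_call([
    '/home/user/anaconda3/envs/hum2/bin/bowtie2-build',
    '-f',
    '/media/user/extra/tmp/subhm/sub_humann2_temp/sub_custom_chocophlan_database.ffn',
    '/media/user/extra/tmp/subhm/sub_bowtie2_index',
    '--threads', '8',
])
```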
1.2
true
1
6,574
2020-02-23 14:42:25.190
How to change the name of a mp4 video using python
I just want to know how I can change the name of an mp4 video using Python. I tried looking on the internet but could not find it. I am a beginner in Python.
You can use the os module to rename it as follows: import os; os.rename('full_file_path_old', 'new_file_name_path')
1.2
true
1
6,575
2020-02-23 16:13:40.300
Converting depth map to pointcloud on Raspberry PI for realtime application
I am developing a robot based on StereoPi. I have successfully calibrated the cameras and obtained a fairly accurate depth map. However, I am unable to convert my depth map to a point cloud so that I can obtain the actual distance of an object. I have been trying to use cv2.reprojectImageTo3D, but with no success. May I ask if there is a tutorial or guide which teaches how to convert a disparity map to a point cloud? I am trying very hard to learn and to find reliable sources, but to no avail. Thank you very much in advance.
By calibrating your cameras you compute their interior orientation parameters (IOP, or intrinsic parameters). To compute the XYZ coordinates from the disparity you also need the exterior orientation parameters (EOP). If you want your point cloud relative to the robot position, the EOP can be simplified; otherwise, you need to take into account the robot's position and rotation, which can be retrieved with a GNSS receiver and an inertial measurement unit (IMU). Note that it is very likely that such data needs to be processed with a Kalman filter. Then, assuming you have both (i) the IOP and EOP of your cameras and (ii) the disparity map, you can generate the point cloud by intersection. There are several ways to accomplish this; I suggest using the collinearity equations.
0
false
1
6,576
2020-02-25 03:32:31.883
What is the best way to implement Django 3 Modal forms?
I would appreciate it if somebody could give the main idea of how to handle submission/retrieval form implementation in Bootstrap modals. I saw many examples on Google, but it is still ambiguous to me. Why is it required to have a separate HTML file for the modal-form template? Where will the SQL commands be written? What is the flow for submission/retrieval forms (I mean the steps)? What is the best practice for implementing these kinds of forms? I'm fairly new to Django, please be nice and helpful.
There is no need for a separate file for the modal form. Django follows the MVT structure whenever forms are used, which makes interaction with the template straightforward. Moreover, if you go through the Django documentation, you will pick it up easily. For submission, set the form's action URL; it will call that view, which validates the data against the Django form.
0
false
1
6,577
2020-02-25 03:53:05.757
How do we calculate the accuracy of a multi-class classifier using neural network
When the outputs (predictions) are the probabilities coming from a softmax function, and the training target is one-hot encoded, how do we compare those two different kinds of data to calculate the accuracy? (the number of training examples classified correctly) / (the total number of training examples) * 100%
Usually, we assign the class label with the highest probability in the output of the softmax function as the predicted label.
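A tiny numpy sketch of that comparison for softmax outputs against one-hot targets:

```python
import numpy as np

# probs: softmax outputs, shape (n_samples, n_classes); targets: one-hot labels of the same shape
probs = np.array([[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]])
targets = np.array([[0, 1, 0], [0, 0, 1]])

predicted = probs.argmax(axis=1)    # class with the highest probability
true = targets.argmax(axis=1)       # index of the 1 in each one-hot row
accuracy = (predicted == true).mean() * 100
print(accuracy)                      # 50.0 for this toy example
```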
1.2
true
1
6,578
2020-02-25 16:58:00.407
When switching to zsh shell on mac terminal from bash, how do you update the base python version?
Mac has recently updated its default terminal shell from bash to zsh. As a Python programmer, I'd like to have consistency in Python versions across all the systems, including terminals and the IDE. On a bash shell, to update the Python version in the terminal to 3.8.1, I had followed the process below: nano ~/.bash_profile, add alias python=python3, ctrl + x, y, enter. This let me update the Python version from 2.7.6 to 3.8.1. However, repeating the same steps for the zsh shell didn't work out. I tried a tweak of the above process and somehow got stuck with 3.7.3. Steps followed: which python3 # location of the python 3.8.1 command file is found. Installed it. python --version # returned Python 3.7.3. PS: I am an absolute beginner in Python, so please consider that in your response. I hope I am not wasting your time.
It is actually not recommended to update the default Python executable system-wide, because some applications depend on it. You can, however, use venv (a virtual environment); or, to use another version of Python within your zsh, you can also put an alias like python='python3' in your ~/.zsh_profile and source it. Hope that helps. Greetings
0.386912
false
1
6,579
2020-02-25 19:43:50.643
How can I darken/lighten a RGB color
So I'm trying to make a color gradient from a color to completely black, as well as from a color to completely white. Say I have (175, 250, 255) and I want to darken that color exactly 10 times to end at (0, 0, 0); how could I do this? I'd also like to brighten the color: I'd like to brighten it exactly 10 times and end at (255, 255, 255).
Many ways to solve this one. One idea would be to find the difference between your current value and the target value and divide that by 10. So from (175, 250, 255) to (0, 0, 0) the difference is (175, 250, 255); divide that by ten to get what you would subtract at each of the ten steps. So subtract (17.5, 25, 25.5) every step, rounding when needed.
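A small sketch of that interpolation for both directions:

```python
def gradient(color, target, steps=10):
    # Interpolate from `color` to `target` in `steps` equal increments, as described above.
    step = [(t - c) / steps for c, t in zip(color, target)]
    return [tuple(round(c + s * i) for c, s in zip(color, step)) for i in range(steps + 1)]

print(gradient((175, 250, 255), (0, 0, 0)))        # darken towards black
print(gradient((175, 250, 255), (255, 255, 255)))  # lighten towards white
```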
0
false
1
6,580
2020-02-26 11:00:22.870
Django Queryset - Can I query at specific positions in a string for a field?
I have a table field with entries such as 02-65-04-12-88-55. Each position (separated by -) represents something. (There is no '-' in the database; that's just how it's displayed to the user.) Users would like to search by a specific position of the entry. I am trying to create a queryset to do this but cannot figure it out. I could handle startswith and endswith, but the rest I have no idea about. Another thought was to split the string at '-' and then query each specific part of the field (if this is possible). How can a user search the field's entry at, say, positions 0-1, 6-7, 10-11 and have the rest wildcarded and returned? Is this possible? Am I approaching this wrong? Thoughts?
You could use a something__like='__-__-__-__-88-__'-style query, but it's likely to not be very efficient (since the database will have to scan through all rows to find a match). If you need to do lots of these queries, it'd be better to split these out into actual fields (something_1, something_2, etc.)
0
false
1
6,581
2020-02-28 11:26:45.567
Python Script to compare du and df console outputs
As part of a larger project, I'm currently writing a Python script that runs Linux commands in a vApp. I'm currently facing an issue where, after working with a mounted ISO, it may or may not unmount as expected. To check the mount status, I want to run the df -hk /directory and du -sch /directory commands respectively and compare the outputs. If the ISO is not unmounted, the df command should return a larger value than the du command, as df includes the mount size in the result while du does not. I'm just wondering how I can compare these values, or if there is a better way for me to run this check in the first place.
Why don't you use /proc/mounts? The first column is your block device, the second is the mountpoint. If your mountpoint is not in /proc/mounts, you have nothing mounted there.
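A small sketch of that check from Python; the mountpoint path is a placeholder:

```python
def is_mounted(directory):
    # Check the second column of /proc/mounts for the directory, as suggested above.
    with open("/proc/mounts") as f:
        return any(line.split()[1] == directory for line in f)

print(is_mounted("/mnt/iso"))  # placeholder mountpoint used by the larger script
```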
0.386912
false
1
6,582
2020-02-28 16:42:15.147
Why no need to load Python formatter (black) and linter (pylint) and vs code?
I am learning how to use VS Code, and in the process I learnt about linting and formatting with pylint and black respectively. Importantly, I have Anaconda installed, as I often use conda environments for my different projects, and I have therefore installed pylint and black into my conda environment. My questions are as follows: If pylint and black are Python packages, why do they not need to be imported into your script when you use them (i.e. "import pylint" and "import black" at the top of a Python script you want to run)? I am very new to VS Code, linting and formatting, so maybe I'm missing something obvious, but how does VS Code know what to do when I select "Run Linting" or "Format Document" in the command palette? Or does this have nothing to do with VS Code? I guess I am just surprised that we don't need to import these packages to use them; in contrast, you would always use import for other packages (sys, os, or any other). I'm assuming that if I used a different conda environment, I would then need to install pylint and black again in it, right?
Yes, black and pylint are only available in the conda environment you installed them in. You can find them in the "Scripts" folder of your environment. VS Code knows where to look for those scripts; I guess you can set which package is used for "Run Linting" or "Format Document". You only need to import Python modules or functions that you want to use inside your own Python module, and that's not what you're doing here: pylint and black are run as separate command-line tools against your code, not imported by it.
0
false
1
6,583
2020-02-29 17:31:05.917
Dask progress during task
With dask dataframe using df = dask.dataframe.from_pandas(df, npartitions=5) series = df.apply(func) future = client.compute(series) progress(future) In a jupyter notebook I can see progress bar for how many apply() calls completed per partition (e.g 2/5). Is there a way for dask to report progress inside each partition? Something like tqdm progress_apply() for pandas.
If you mean how complete each call of func() is, then no, there is no way for Dask to know that. Dask calls Python functions which run in their own Python thread (Python threads cannot be interrupted by another thread), and Dask only knows whether the call is done or not. You could perhaps conceive of calling a function which has some internal callbacks or other reporting system, but I don't think I've seen anything like that.
0
false
1
6,584
2020-03-02 02:33:41.653
Using a Decision Tree to build a Recommendations Application
First of all, my apologies if I am not following some of the best practices of this site, as you will see, my home is mostly MSE (math stack exchange). I am currently working on a project where I build a vacation recommendation system. The initial idea was somewhat akin to 20 questions: We ask the user certain questions, such as "Do you like museums?", "Do you like architecture", "Do you like nightlife" etc., and then based on these answers decide for the user their best vacation destination. We answer these questions based on keywords scraped from websites, and the decision tree we would implement would allow us to effectively determine the next question to ask a user. However, we are having some difficulties with the implementation. Some examples of our difficulties are as follows: There are issues with granularity of questions. For example, to say that a city is good for "nature-lovers" is great, but this does not mean much. Nature could involve say, hot, sunny and wet vacations for some, whereas for others, nature could involve a brisk hike in cool woods. Fortunately, the API we are currently using provides us with a list of attractions in a city, down to a fairly granular level (for example, it distinguishes between different watersport activities such as jet skiing, or white water rafting). My question is: do we need to create some sort of hiearchy like: nature-> (Ocean,Mountain,Plains) (Mountain->Hiking,Skiing,...) or would it be best to simply include the bottom level results (the activities themselves) and just ask questions regarding those? I only ask because I am unfamiliar with exactly how the classification is done and the final output produced. Is there a better sort of structure that should be used? Thank you very much for your help.
Bins and sub-bins are a good idea, as is the nature / ocean_nature thing. I was thinking more about your problem last night; TripAdvisor would be a good source. What I would do is take the top 10 items on TripAdvisor and categorize them by type. Or maybe your tree narrows it down to 10 cities, and you would rank those cities according to popularity or distance from the user. I'm not sure how to decide which city would be best for watersports, etc. You could even have cities pay to be at the top of the list.
0
false
2
6,585
2020-03-02 02:33:41.653
Using a Decision Tree to build a Recommendations Application
First of all, my apologies if I am not following some of the best practices of this site, as you will see, my home is mostly MSE (math stack exchange). I am currently working on a project where I build a vacation recommendation system. The initial idea was somewhat akin to 20 questions: We ask the user certain questions, such as "Do you like museums?", "Do you like architecture", "Do you like nightlife" etc., and then based on these answers decide for the user their best vacation destination. We answer these questions based on keywords scraped from websites, and the decision tree we would implement would allow us to effectively determine the next question to ask a user. However, we are having some difficulties with the implementation. Some examples of our difficulties are as follows: There are issues with granularity of questions. For example, to say that a city is good for "nature-lovers" is great, but this does not mean much. Nature could involve say, hot, sunny and wet vacations for some, whereas for others, nature could involve a brisk hike in cool woods. Fortunately, the API we are currently using provides us with a list of attractions in a city, down to a fairly granular level (for example, it distinguishes between different watersport activities such as jet skiing, or white water rafting). My question is: do we need to create some sort of hiearchy like: nature-> (Ocean,Mountain,Plains) (Mountain->Hiking,Skiing,...) or would it be best to simply include the bottom level results (the activities themselves) and just ask questions regarding those? I only ask because I am unfamiliar with exactly how the classification is done and the final output produced. Is there a better sort of structure that should be used? Thank you very much for your help.
I think using a decision tree is a great idea for this problem. It might be an idea to group your granular activities, and for the "nature lovers" category list a number of different climate types: Dry and sunny, coastal, forests, etc and have subcategories within them. For the activities, you could make a category called watersports, sightseeing, etc. It sounds like your dataset is more granular than you want your decision tree to be, but you can just keep dividing that granularity down into more categories on the tree until you reach a level you're happy with. It might be an idea to include images too, of each place and activity. Maybe even without descriptive text.
0
false
2
6,585
2020-03-02 05:24:30.433
How to use an exported model from google colab in Pycharm
I have an LSTM Keras/TensorFlow model trained and exported in .h5 (HDF5) format. My local machine did not support Keras/TensorFlow; I tried installing it, but it did not work, so I used Google Colab and exported the model there. I would like to know how I can use the exported model in PyCharm. Edit: I have now installed TensorFlow on my machine. Thanks in advance.
You still need keras and tensorflow to use the model.
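A rough sketch of loading it locally, assuming the exported file is named model.h5 and was saved with Keras's standard save():

from tensorflow.keras.models import load_model

model = load_model("model.h5")   # hypothetical path to the exported file
model.summary()                  # confirm the architecture loaded correctly
# x must match the model's expected input shape, e.g. (1, timesteps, features)
# predictions = model.predict(x)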
0
false
1
6,586
2020-03-02 08:41:18.990
How to make PyQt5 program starts like pycharm
As the title says, I want to know how to make a PyQt5 program start like PyCharm/Spyder/Photoshop etc., so that when I open the program an image is shown with a progress bar (or without one), like Spyder does.
Sounds like you want a splash screen. QSplashScreen will probably be your friend.
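A minimal PyQt5 sketch, assuming a splash image named splash.png sits next to the script:

import sys
from PyQt5.QtWidgets import QApplication, QSplashScreen, QMainWindow
from PyQt5.QtGui import QPixmap

app = QApplication(sys.argv)
splash = QSplashScreen(QPixmap("splash.png"))  # hypothetical image file
splash.show()
splash.showMessage("Loading...")
app.processEvents()            # keep the splash responsive during start-up work

window = QMainWindow()         # do your slow initialisation before showing this
window.show()
splash.finish(window)          # close the splash once the main window is up
sys.exit(app.exec_())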
1.2
true
1
6,587
2020-03-02 12:31:11.530
What is the point of using sys.exit (or raising SystemExit)?
This question is not about how to use sys.exit (or raising SystemExit directly), but rather about why you would want to use it. If a program terminates successfully, I see no point in explicitly exiting at the end. If a program terminates with an error, just raise that error. Why would you need to explicitly exit the program or why would you need an exit code?
Letting the program exit with an exception is not user friendly. More exactly, it is perfectly fine when the user is a Python programmer, but if you provide a program to end users, they will expect nice error messages instead of a Python stack trace which they will not understand. In addition, if you build a GUI application (through tkinter or PyQt for example), the traceback is likely to be lost, especially on Windows systems. In that case, you set up error processing which provides the user with the relevant information and then terminates the application from inside the error processing routine. sys.exit is appropriate in that use case.
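A small sketch of that pattern (the error type, message and exit code here are only illustrative):

import sys

def run_application():
    # placeholder for the real program logic
    raise RuntimeError("something went wrong")

def main():
    try:
        run_application()
    except RuntimeError as exc:
        # show a friendly message instead of a traceback, then exit with a
        # non-zero code so callers (shell scripts, CI) can detect the failure
        print(f"Error: {exc}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()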
1.2
true
1
6,588
2020-03-02 23:01:44.213
VS Code Azure Functions: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available
I am trying to deploy Azure Functions written in Python, and it looks like the only option to do that is through VS Code. I have the Python and Azure Functions extensions, and normally use PyCharm with an Anaconda interpreter. I also have azure-functions-core-tools installed, and calling "func" in PowerShell works. In VS Code I create a virtual environment as it suggests, but when trying to debug any Azure Function (using one of their templates for now) I get the error above. As far as I understand, it tries to install the "azure-functions" module as specified in the "requirements.txt" file and tries to do that with pip. pip works normally if I use it through the Anaconda prompt or with my global env Python, but I have to use the virtual environment created by VS Code for this one. Any suggestions on how to get through this? Thanks in advance.
Just solved the problem after wasting a whole afternoon on it. The problem lies on the Anaconda side. As you described in your question, pip works normally only in the Anaconda prompt, which means it doesn't work anywhere outside it, whether in CMD or PowerShell (pip and conda appear to run outside the prompt, but SSL requests always get refused somehow). However, when you simply press F5 in VS Code instead of using the func start command, VS Code uses an external PowerShell to call pip, so no wonder it fails. The problem can be solved, when you install Anaconda on Windows 10, by choosing to add Anaconda's root folder to PATH. That said, Anaconda's installer strongly recommends against this option (conflicts with other apps, etc.), and if you install Anaconda through a package manager such as scoop, it installs without asking about this detail at all, which is logical. The "fun" part is that Anaconda itself discourages using the conda or pip commands outside the Anaconda Prompt, while other apps may have to do exactly that. Very confusing and annoying.
0.386912
false
1
6,589
2020-03-03 11:36:54.743
wxPython wx.CallAfter()
I work with wxPython and threads in my project. I think I don't understand well how to use wx.CallAfter and when to use it. I have read a few things but I still haven't got the point. Can someone explain it to me?
In a nutshell, wx.CallAfter simply takes a callable and the parameters that should be passed to it, bundles that up into a custom event, and then posts that event to the application's pending event queue. When that event is dispatched the handler calls the given callable, passing the given parameters to it. Originally wx.CallAfter was added in order to have an easy way to invoke code after the current and any other pending events have been processed. Since the event is always processed in the main UI thread, then it turns out that wx.CallAfter is also a convenient and safe way for a worker thread to cause some code to be run in the UI thread.
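A small sketch of that worker-thread pattern (the frame and label are made up for illustration):

import threading
import time
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="CallAfter demo")
        self.label = wx.StaticText(self, label="waiting...")
        threading.Thread(target=self.worker, daemon=True).start()

    def worker(self):
        # runs in a background thread; never touch widgets directly from here
        time.sleep(2)
        wx.CallAfter(self.label.SetLabel, "done")  # executed later in the UI thread

app = wx.App()
MainFrame().Show()
app.MainLoop()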
1.2
true
1
6,590
2020-03-03 18:37:00.133
How to use huggingface T5 model to test translation task?
I see there exist two configs of the T5 model - T5Model and TFT5WithLMHeadModel. I want to test this for translation tasks (e.g. en-de) as shown in Google's original repo. Is there a way I can use this model from Hugging Face to test out translation tasks? I did not see any examples related to this in the documentation and was wondering how to provide the input and get the results. Any help appreciated.
T5 is a pre-trained model, which can be fine-tuned on downstream tasks such as Machine Translation. So it is expected that we get gibberish when asking it to translate -- it hasn't learned how to do that yet.
0.201295
false
1
6,591
2020-03-03 19:05:01.693
Python: Create List Containing 10 Successive Integers Starting with a Number
I want to know how to create a list called "my_list" in Python starting with a value in a variable "begin" and containing 10 successive integers starting with "begin". For example, if begin = 2, I want my_list = [2,3,4,5,6,7,8,9,10,11]
You can simply use the list's extend method together with the range function; note that range(begin, begin + 10) gives exactly 10 successive integers: begin = 2 my_list = [] my_list.extend(range(begin, begin + 10)) print(my_list) # [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
0
false
1
6,592
2020-03-04 04:29:00.517
How to change resetting password of django
I am learning to use Django and my question is whether it is possible to change the system for resetting users' passwords. I do not want to use the default system of sending a link by mail; my idea is to send a code to reset the password, but I don't know how it should be done and, if it is possible, whether it is safe. What I want is for the user who wants to recover their password to go to the recovery section, fill in their email, choose to send, and then get a field to enter the code that was sent to their mail. I don't know how I should do it, or whether there is a package for this? Thank you very much, greetings.
You can do this: when the user clicks on reset password, ask for the user's email id and verify that the email id provided is the same as what you have in the DB. If the email id matches, you can generate an OTP, save it in the DB (for a specific time duration, like 3 minutes) and send it to the user's email id. The user then enters the OTP. If the OTP provided by the user matches the one you have in the DB, open the page where the user can enter a new password.
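A rough sketch of the OTP step with Django's built-in mailing (the PasswordResetOTP model and sender address are assumptions, not existing pieces of your project):

import random
from django.core.mail import send_mail

def send_reset_otp(user):
    code = f"{random.randint(0, 999999):06d}"
    # PasswordResetOTP is a hypothetical model with user, code and created_at fields
    PasswordResetOTP.objects.create(user=user, code=code)
    send_mail(
        "Your password reset code",
        f"Your reset code is {code}. It expires in 3 minutes.",
        "noreply@example.com",   # placeholder sender address
        [user.email],
    )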
0
false
1
6,593
2020-03-04 15:31:36.817
How are threads different from process in terms of how they are executed on hardware level?
I was wondering how the threads are executed on hardware level, like a process would run on a single processing core and make a context switch on the processor and the MMU in order to switch between processes. How do threads switch? Secondly when we create/spawn a new thread will it be seen as a new process would for the processor and be scheduled as a process would? Also when should one use threads and when a new process? I know I probably am sounding dumb right now, that's because I have massive gaps in my knowledge that I would like fill. Thanks in advance for taking the time and explaining things to me. :)
Think of it this way: "a thread is part of a process." A "process" owns resources such as memory, open file-handles and network ports, and so on. All of these resources are then available to every "thread" which the process owns. (By definition, every "process" always contains at least one ("main") "thread.") CPUs and cores, then, execute these "threads," in the context of the "process" which they belong to. On a multi-CPU/multi-core system, it is therefore possible that more than one thread belonging to a particular process really is executing in parallel. Although you can never be sure. Also: in the context of an interpreter-based programming language system like Python, the actual situation is a little bit more complicated "behind the scenes," because the Python interpreter context does exist and will be seen by all of the Python threads. This does add a slight amount of additional overhead so that it all "just works."
0.135221
false
2
6,594
2020-03-04 15:31:36.817
How are threads different from process in terms of how they are executed on hardware level?
I was wondering how the threads are executed on hardware level, like a process would run on a single processing core and make a context switch on the processor and the MMU in order to switch between processes. How do threads switch? Secondly when we create/spawn a new thread will it be seen as a new process would for the processor and be scheduled as a process would? Also when should one use threads and when a new process? I know I probably am sounding dumb right now, that's because I have massive gaps in my knowledge that I would like fill. Thanks in advance for taking the time and explaining things to me. :)
There are a few different methods for concurrency. The threading module creates threads within the same Python process and switches between them; this means they're not really running at the same time. The same happens with the asyncio module, although it has the additional feature of letting you set when a task can be switched. Then there is the multiprocessing module, which creates a separate Python process per worker. This means the workers will not have access to shared memory, but it also means the processes can run on different CPU cores and therefore provide a performance improvement for CPU-bound tasks. Regarding when to use new threads, a good rule of thumb would be: For I/O-bound problems, use threading or async I/O. This is because you're waiting on responses from something external, like a database or browser, and this waiting time can instead be filled by another thread running its task. For CPU-bound problems, use multiprocessing, which can run multiple Python processes on separate cores at the same time. Disclaimer: threading is not always a solution; you should first determine whether it is necessary and only then look at how to implement it.
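A small sketch of both modules, where work() is just a stand-in for the real task:

from threading import Thread
from multiprocessing import Process

def work(n):
    # stand-in for real I/O-bound or CPU-bound work
    print(f"working on {n}")

if __name__ == "__main__":
    # threads: same process, shared memory, good for I/O-bound work
    threads = [Thread(target=work, args=(i,)) for i in range(3)]
    # processes: separate interpreters, can use multiple cores, good for CPU-bound work
    procs = [Process(target=work, args=(i,)) for i in range(3)]
    for t in threads + procs:
        t.start()
    for t in threads + procs:
        t.join()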
1.2
true
2
6,594
2020-03-05 07:24:31.123
How can I replace an EXE's icon to the "default" icon?
I converted a python script to an exe using pyinstaller. I want to know how I can change the icon it gave me to the default icon. In case you don't know what I mean, look at C:\Windows\System32\alg.exe. There are many more files with that icon, but that is one of them. Sorry if this is the wrong place to ask this, and let me know if you have any questions
I would suggest using the auto-py-to-exe module for converting a Python script to an exe. First install it with the command pip install auto-py-to-exe, then run it from the command line just by typing auto-py-to-exe; you'll get a window where you'll find the icon option.
-0.135221
false
2
6,595
2020-03-05 07:24:31.123
How can I replace an EXE's icon to the "default" icon?
I converted a python script to an exe using pyinstaller. I want to know how I can change the icon it gave me to the default icon. In case you don't know what I mean, look at C:\Windows\System32\alg.exe. There are many more files with that icon, but that is one of them. Sorry if this is the wrong place to ask this, and let me know if you have any questions
You'll need to extract the icon from the exe, and set that as the icon file with pyinstaller -i extracted.ico myscript.py. You can extract the icon with tools available online or you can use pywin32 to extract the icons.
0
false
2
6,595
2020-03-05 22:26:04.187
How to update python script/application remotely
I'm trying to develop a Windows GUI app with Python that I will distribute later. I don't know how to set the app up for future updates or bug fixes delivered from a server/remotely. How can I handle this problem? Can I add some auto-update feature to the app? What should I write for that in my code, and what framework or library should I use? Do PyInstaller / Inno Setup have features for this? Thanks for your help.
How about this approach: You can use a version control service like github to version control your code. Then checkout the repository on your windows machine. Write a batch/bash script to checkout the latest version of your code and restart the app. Then use the Windows task scheduler to periodically run this script.
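A rough Python version of that update step (the repository path and entry-point file name are placeholders, and this assumes git is available on the user's machine):

import os
import subprocess
import sys

REPO_DIR = r"C:\myapp"  # hypothetical checkout location

def update_and_restart():
    # pull the latest version of the code from the remote repository
    subprocess.run(["git", "-C", REPO_DIR, "pull"], check=True)
    # relaunch the application with the freshly pulled code
    subprocess.Popen([sys.executable, os.path.join(REPO_DIR, "main.py")])

if __name__ == "__main__":
    update_and_restart()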
1.2
true
1
6,596
2020-03-06 01:25:48.877
While using Word2vec, how can I get a result from unseen words corpus?
I am using Word2vec model to extract similar words, but I want to know if it is possible to get words while using unseen words for input. For example, I have a model trained with a corpus [melon, vehicle, giraffe, apple, frog, banana]. "orange" is unseen word in this corpus, but when I put it as input, I want [melon, apple, banana] for result. Is this a possible situation?
The original word2vec algorithm can offer nothing for words that weren't in its training data. Facebook's 'FastText' descendent of the word2vec algorithm can offer better-than-random vectors for unseen words – but it builds such vectors from word fragments (character n-gram vectors), so it does best where shared word roots exist, or where the out-of-vocabulary word is just a typo of a trained word. That is, it won't help in your example, if no other words morphologically similar to 'orange' (like 'orangey', 'orangade', 'orangish', etc) were present. The only way to learn or guess a vector for 'orange' is to have some training examples with it or related words. (If all else failed, you could scrape some examples from other large corpora or the web to mix with your other training data.)
0.673066
false
1
6,597
2020-03-06 05:12:48.407
import xmltodict module into visual studio code
I am having a tough time importing the xmltodict module in Visual Studio Code. I set up the module on my Windows machine using pip, and it should be working in VS Code as per the guidelines and relevant posts I found here, but for some reason it isn't. Please advise on how I can get the xmltodict module installed or imported in Visual Studio Code. Thanks in advance.
I had the same issue and it turned out that it wasn't installed in that virtual environment even though that was what I had done. Try: venv/Scripts/python.exe -m pip install xmltodict
0
false
1
6,598
2020-03-06 10:30:35.220
How to open a python file in Cmder Terminal Quicker?
I want to open a python file in cmder terminal quickly. Currently, the fastest way i know how is to navigate to the directory of the python file in cmder terminal and then run it by calling "python file.py". This is slow and cumbersome. Is there a way for me to have a file or exe, that, when i run it (or drag the program onto it), automatically makes the program run in cmder straight away. Windows 10 Clarification: I'm using cmder terminal specifically because it supports text coloring. Windows terminal and powershell do not support this.
Answer: The escape codes just weren't properly configured for the windows terminals. You can get around this by using colorama's colorama.init(). It should work after that.
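For reference, a minimal colorama sketch (assuming colorama is installed; init() makes the ANSI colour codes work in cmd/PowerShell too):

import colorama
from colorama import Fore, Style

colorama.init()  # patches stdout so ANSI escape codes render on Windows consoles
print(Fore.RED + "error text" + Style.RESET_ALL)
print(Fore.GREEN + "ok text" + Style.RESET_ALL)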
1.2
true
2
6,599
2020-03-06 10:30:35.220
How to open a python file in Cmder Terminal Quicker?
I want to open a python file in cmder terminal quickly. Currently, the fastest way i know how is to navigate to the directory of the python file in cmder terminal and then run it by calling "python file.py". This is slow and cumbersome. Is there a way for me to have a file or exe, that, when i run it (or drag the program onto it), automatically makes the program run in cmder straight away. Windows 10 Clarification: I'm using cmder terminal specifically because it supports text coloring. Windows terminal and powershell do not support this.
On windows you can go to the directory with the file in the explorer and then simply hold shift as you right click at the same time. This will open the menu and there you will have the option to use the command shell/powershell and then you don't have to navigate to the directory inside the shell anymore and can just execute the python file. I hope that helps.
0
false
2
6,599
2020-03-06 15:03:36.553
discord py: canceling a loop using a command
My question is this: If I were to make a command with a loop (for example "start") where it would say something like:"It has been 3 hours since..." and it loops for 10800 seconds (3 hours) and then says:"It has been 6 hours since..." , so the part where I'm stuck is: If I were to make a command called "stop" how would I implement it in the command "start" where it would check if the command "stop" has been used. If yes the loop is cancelled, if it hasn't been used the loop continues.
but if you run the command several times or on different servers, one stop command stops them all. Is there not a way to stop just one loop with one command
0
false
1
6,600
2020-03-07 19:56:55.313
How to design a HTML parser that would follow the Single Responsibility Principle?
I am writing an application which extracts some data from HTML using BeautifulSoup4. These are search results of some kind, to be more specific. I thought it would be a good idea to have a Parser class storing default values like URL prefixes, request headers etc. After configuring those parameters, the public method would return a list of objects, each of them containing a single result, or maybe even an object with a list composed into it alongside some other parameters. I'm struggling to decouple small pieces of logic that build that parser implementation from the parser class itself. I want to write dozens of private parser utility methods like _is_next_page_available, _are_there_any_results, _is_did_you_mean_available etc. However, these are the perfect candidates for writing unit tests! And since I want to make them private, I have a feeling that I'm missing something... My other idea was to write the parser as a function calling a bunch of other utility functions, but that would just be equal to making all of those methods public, which doesn't make sense, since they're implementation details. Could you please advise me how to design this properly?
I think you're interpreting the Single-Responsibility Principle (SRP) a little differently. Its actual meaning is a little different from 'a class should do only one thing': it actually states that a class should have one and only one reason to change. To employ the SRP you have to ask yourself to whom or what your parser module methods are responsible, i.e. who or what might make them change. If the answer for each method is the same, then your Parser class follows the SRP correctly. If there are methods that are responsible to different things (business-rule givers, groups of users, etc.), then those methods should be taken out and placed elsewhere. Your overall objective with the SRP is to protect your class from changes coming from different directions.
0.673066
false
1
6,601
2020-03-08 18:20:41.720
How can I use the Twitter API to look up accounts from email addresses?
I'm helping out a newly formed startup build a social media following, and I have a csv file of thousands of email addresses of people I need to follow. From looking at the twitter API, I see its possible to follow the accounts if I knew their usernames, but its unclear how to look them up by email. Any ideas?
This does not appear to be an option with their API; you can only look users up by user_id or screen_name with the GET users/show or GET users/lookup endpoints.
0.201295
false
2
6,602
2020-03-08 18:20:41.720
How can I use the Twitter API to look up accounts from email addresses?
I'm helping out a newly formed startup build a social media following, and I have a csv file of thousands of email addresses of people I need to follow. From looking at the twitter API, I see its possible to follow the accounts if I knew their usernames, but its unclear how to look them up by email. Any ideas?
There is no way to do a lookup based on email address in the Twitter API.
0
false
2
6,602
2020-03-09 06:36:25.567
Overfitting problem in convolutional neural Network and deciding the parameters of convolution and dence layer
I applied the batch normalization technique to increase the accuracy of my CNN model. The accuracy of the model without batch normalization was only 46%, but after applying batch normalization it crossed 83%. However, a big overfitting problem arose: the model was giving a validation accuracy of only 15%. Also, please tell me how to decide the number of filters and strides in the convolution layers and the number of units in the dense layer.
Batch normalization has been shown to help in many cases but is not always optimal. I found that it depends where it resides in your model architecture and what you are trying to achieve. I have done a lot with different GAN CNNs and found that often BN is not needed and can even degrade performance. Its purpose is to help the model generalize faster, but sometimes it increases training times. If I am trying to replicate images, I skip BN entirely. I don't understand what you mean with regard to the accuracy. Do you mean it achieved 83% accuracy with the training data but dropped to 15% accuracy on the validation data? What was the validation accuracy without the BN? In general, the validation accuracy is the more important metric. If you have a high training accuracy and a low validation accuracy, you are indeed overfitting. If you have several convolution layers, you may want to apply BN after each. If you still overfit, try increasing your strides and kernel size. If that doesn't work you might need to look at the data again and make sure you have enough and that it is somewhat diverse. Assuming you are working with image data, are you creating samples where you rotate your images, crop them, etc.? Consider synthetic data to augment your real data to help combat overfitting.
0
false
1
6,603
2020-03-09 09:01:08.040
How to create Dashboard using Python or R
In my company, I have been given the task of creating a dashboard using Python whose complete look and feel should be like Qlik Sense. I am a fresher in the data science field and I don't know how to do this. I did lots of R&D; according to what I found on the internet, Plotly and Dash seem to be the best option, and Dash DataTable is also a good option, but I am not able to make it look the way it should. If anyone knows how to start, please help me.
You can use Django or another web framework to develop the solution; keep in mind that you will probably need to handle lots of front-end work such as building the UI of the system. Flask is also a very lightweight option, but it needs lots of customization. Django comes with pretty much everything you might need out of the box.
0
false
1
6,604
2020-03-09 17:18:29.683
is there any function or module in nlp that would find a specific paragraph headings
I have a text file . I need to identify specific paragraph headings and if true i need to extract relevant tables and paragraph wrt that heading using python. can we do this by nlp or machine learning?. if so please help me out in gathering basics as i am new to this field.I was thinking of using a rule like: if (capitalized) and heading_length <50: return heading_text how do i parse through the entire document and pick only the header names ? this is like automating human intervention of clicking document,scrolling to relevant subject and picking it up. please help me out in this
I agree with lorg. You could use NLP, but that might just complicate the problem. This could become an optimization problem if performance is a concern.
0
false
2
6,605
2020-03-09 17:18:29.683
is there any function or module in nlp that would find a specific paragraph headings
I have a text file . I need to identify specific paragraph headings and if true i need to extract relevant tables and paragraph wrt that heading using python. can we do this by nlp or machine learning?. if so please help me out in gathering basics as i am new to this field.I was thinking of using a rule like: if (capitalized) and heading_length <50: return heading_text how do i parse through the entire document and pick only the header names ? this is like automating human intervention of clicking document,scrolling to relevant subject and picking it up. please help me out in this
You probably don't need NLP or machine learning to detect these headings. Figure out the rule you actually want and if indeed it is such a simple rule as the one you wrote, a regexp will be sufficient. If your text is formatted (e.g. using HTML) it might be even simpler. If however, you can't find a rule, and your text isn't really formatted consistently, your problem will be hard to solve.
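A minimal sketch of such a rule in Python (the exact pattern is an assumption and will need tuning for your documents):

import re

# heuristic: a heading is a short line (< 50 chars) written in capitals
HEADING_RE = re.compile(r"^[A-Z][A-Z0-9 ,:&-]{0,48}$")

def find_headings(text):
    return [line.strip() for line in text.splitlines()
            if HEADING_RE.match(line.strip())]

sample = "INTRODUCTION\nSome body text here.\nRESULTS AND TABLES\nMore text."
print(find_headings(sample))  # ['INTRODUCTION', 'RESULTS AND TABLES']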
0.201295
false
2
6,605
2020-03-10 07:31:06.813
How to find time take by whole test suite to complete in Pytest
I want to know how much time the whole test suite takes to complete its execution. How can I get this in the pytest framework? I can get each test case's execution time using the pytest <filename> --durations=0 command, but how do I get the whole suite's execution time?
Use pytest-sugar: pip install pytest-sugar. Run your tests after installing it; you should see something like Results (10.00s) once the tests finish.
1.2
true
1
6,606
2020-03-10 16:13:15.393
python + how to remove the message "cryptography is not installed, use of crypto disabled"
This is my first time programming in Python, and I guess you will notice it after reading my question: how can I remove the message "cryptography is not installed, use of crypto disabled" when running the application? I have created a basic console application using the pyinstaller tool and the code is written in Python. When I run the executable, I get the message "cryptography is not installed, use of crypto disabled". The program still runs, but I would prefer to get rid of the message. Can someone help me? Thanks in advance.
cryptography and crypto are 2 different modules. try: pip install cryptography pip install crypto
1.2
true
1
6,607
2020-03-11 11:00:09.743
Maya python (or MEL) select objects
I need to select all objects in Maya named "shd" and after that assign a specific material to them. I don't know how to do that, because when I wrote select -r "shd"; it sent me the message: More than one object matches name: shd // So maybe I should select them one by one in some for loop or something. I am a 3D artist, so sorry for the lame question.
You can use select -r "shd*" to select all objects with a name stating with "shd".
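The Python (maya.cmds) equivalent would be something along these lines; the shader name lambert_shd is just a placeholder for a material that already exists in your scene:

import maya.cmds as cmds

# select every object whose name starts with "shd"
targets = cmds.ls("shd*")
cmds.select(targets, replace=True)

# assign an existing material to the current selection
cmds.hyperShade(assign="lambert_shd")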
0
false
1
6,608
2020-03-11 14:39:29.703
How to redirect to a different page in Django when I receive an input from a barcode scanner?
The whole project is as follows: I'm trying to build a Django-based web app for my college library. When idle, this app will show a slideshow of pictures on the screen. However, when input is received from the barcode scanner, it is supposed to redirect to a different page containing information related to that barcode. I'm not able to figure out how to get input from the scanner and only then redirect to the page containing the relevant information for 3 seconds; after that interval, it should redirect back to the page containing the slideshow.
You should communicate with the barcode scanner to receive a scanning-done event, which has nothing to do with Django but only with JavaScript, or even interface software which the user must install, like a driver. That way you can detect the barcode scanner from JavaScript (in the web browser), get your event in JavaScript, and redirect the page on that event or do whatever you want.
0
false
1
6,609
2020-03-11 23:42:50.313
Airflow Operator to pull data from external Rest API
I am trying to pull data from an external API and dump it to S3. I was thinking of writing an Airflow operator, rest-to-s3.py, which would pull in data from the external REST API. My concerns are: this would be a long-running task, so how do I keep track of failures? Is there a better alternative to writing an operator? Is it advisable to have a task that would probably run for a couple of hours and wait on it? I am fairly new to Airflow, so any help would be appreciated.
Errors - one of the benefits of using a tool like airflow is error tracking. Any failed task is subject to rerun (based on configuration) will persist its state in task history etc.. Also, you can branch based on the task status to decide if you want to report error e.g. to email An operator sounds like a valid option, another option is the built-in PythonOperator and writing a python function. Long-running tasks are problematic with any design and tool. You better break it down to small tasks (and maybe parallelize their execution to reduce the run time?) Does the API take long time to respond? Or do you send many calls? maybe split based on the resulting s3 files? i.e. each file is a different DAG/branch?
0.999909
false
1
6,610
2020-03-13 05:38:38.123
How do I select a sub-folder as a directory containing tests in Python extension for Visual studio code
I am using VScode with python code and I have a folder with sub-directories (2-levels deep) containing python tests. When I try "Python: Discover Tests" it asks for a test framework (selected pytest) and the directory in which tests exist. At this option, it shows only the top-level directories and does not allow to select a sub-directory. I tried to type the directory path but it does not accept it. Can someone please help on how to achieve this?
Try opening the "Output" log (Ctrl+Shift+U) and run "Python: Discover Tests". Alternatively, you may type pytest --collect-only into the console. Maybe you are experiencing some errors with the tests themselves (such as importing errors). Also, make sure to keep __init__.py file in your "tests" folder. I am keeping the pytest "tests" folder within a subdirectory, and there are no issues with VS Code discovering the tests.
0
false
2
6,611
2020-03-13 05:38:38.123
How do I select a sub-folder as a directory containing tests in Python extension for Visual studio code
I am using VScode with python code and I have a folder with sub-directories (2-levels deep) containing python tests. When I try "Python: Discover Tests" it asks for a test framework (selected pytest) and the directory in which tests exist. At this option, it shows only the top-level directories and does not allow to select a sub-directory. I tried to type the directory path but it does not accept it. Can someone please help on how to achieve this?
There are two options. One is to leave the selection as-is and make sure your directories are packages by adding __init__.py files as appropriate. The other is you can go into your workspace settings and adjust the "python.testing.pytestArgs" setting as appropriate to point to your tests.
0
false
2
6,611
2020-03-13 10:02:01.843
how to fix CVE-2019-19646 Sqlite Vulnerability in python3
I am facing an issue with a SQLite vulnerability which is fixed in SQLite version 3.31.1. I am using the python3.7.4-alpine3.10 image, but this image uses a previous version of SQLite that isn't patched. The patch is available in python3.8.2-r1 on the Alpine edge branch, but that image is not available on Docker Hub. Please help: how can I fix this issue?
Your choices are limited to two options: Wait for the official patched release Patch it yourself Option 1 is easy, just wait and the patch will eventually propagate through to docker hub. Option 2 is also easy, just get the code for the image from github, update the versions, and run the build yourself to produce the image.
0
false
1
6,612
2020-03-13 18:51:28.017
How to see current cache size when using functools.lru_cache?
I am doing performance/memory analysis on a certain method that is wrapped with the functools.lru_cache decorator. I want to see how to inspect the current size of my cache without doing some crazy inspect magic to get to the underlying cache. Does anyone know how to see the current cache size of method decorated with functools.lru_cache?
Digging around in the docs showed the answer is calling .cache_info() on the method. To help measure the effectiveness of the cache and tune the maxsize parameter, the wrapped function is instrumented with a cache_info() function that returns a named tuple showing hits, misses, maxsize and currsize. In a multi-threaded environment, the hits and misses are approximate.
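A quick illustration; currsize is the field that answers the question:

from functools import lru_cache

@lru_cache(maxsize=128)
def square(n):
    return n * n

for i in range(5):
    square(i)

info = square.cache_info()
print(info)           # CacheInfo(hits=0, misses=5, maxsize=128, currsize=5)
print(info.currsize)  # 5 -> current number of cached entries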
1.2
true
1
6,613
2020-03-14 04:25:07.913
Why use signals in Django?
I have read lots of documentation and articles about using signals in Django, but I cannot understand the concept. What is the purpose of using signals in Django? How does it work? Please explain the concept of signals and how to use it in Django code.
The Django Signals is a strategy to allow decoupled applications to get notified when certain events occur. Let’s say you want to invalidate a cached page everytime a given model instance is updated, but there are several places in your code base that this model can be updated. You can do that using signals, hooking some pieces of code to be executed everytime this specific model’s save method is trigged. Another common use case is when you have extended the Custom Django User by using the Profile strategy through a one-to-one relationship. What we usually do is use a “signal dispatcher” to listen for the User’s post_save event to also update the Profile instance as well.
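A typical sketch of that profile case (the Profile model here is an assumption for illustration):

from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=User)
def create_profile(sender, instance, created, **kwargs):
    # runs every time a User instance is saved, wherever the save happens
    if created:
        Profile.objects.create(user=instance)  # Profile is a hypothetical one-to-one model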
1.2
true
1
6,614
2020-03-14 15:27:13.413
Run generated .py files without python installation
I am coding a PyQt5-based GUI application that needs to be able to create and run arbitrary Python scripts at runtime. If I convert this application to a .exe, the main GUI window will run properly. However, I do not know how I can run the short .py scripts that my application creates. Is it possible to run these without a system-wide Python installation? I am not asking for ways to compile my Python application to an exe; this problem relates to the generated .py scripts.
No, to run a Python file you need an interpreter. It is possible that your main application can contain a Python interpreter so that you don't need to depend on a system-wide Python installation.
0.386912
false
1
6,615
2020-03-15 10:28:17.703
Can manim be used in pycharm?
I have been programming with python for about half a year, and I would like to try manim ( the animation programme of 3blue1brown from youtube), but I am not sure where to start. I have not installed it, but I have tried to read up on it. And to be honest I do not understand much of the requirements of the program, and how to run it. Google has left me without much help, so I decided to check here to see if anyone here is able to help. From what I understand, you run manim directly in python and the animations are based on a textfile with code i assume is LaTex. I have almost no experience with python itself, but I have learned to use it through Thonny, and later Pycharm. My main questions are: (Good sources to how to do this without being a wizard would be really helpful if they exist☺️) Is it possible to install manim in pycharm, and how? Do i need some extra stuff installed to pycharm in order to run it? (I run a windows 64-bit computer) If i manage to do this in pycharm, Will I then be able to code the animations directly in pycharm (in .py or .txt files), or is it harder to use in pycharm? All help or insights is very appreciated As I said I am not extremely knowledgeable in computers, but I am enjoying learning how to code and applications of coding
Yes, you can. 1. Write your code in PyCharm. 2. Save it. 3. Copy that .py file to where you installed manim; in my case that is This PC >> C drive >> manim-master >> manim-master. 4. Click the folder's path bar and type "cmd" to open a terminal there, then run: python -m manim -pql projectname.py. That will do it. To play back the animation or image, open the media folder.
0
false
1
6,616
2020-03-15 13:08:38.213
FFmpeg is in Path, but running in the CMD results in "FFmpeg not recognized as internal or external command"
FFmpeg is installed in C:\FFmpeg, and I put C:\FFmpeg\bin in the path. Does anyone know how to fix? Thanks!
You added C:\FFmpeg\bin\ffmpeg.exe to your PATH; instead, you need to add only the directory: C:\FFmpeg\bin\
-0.386912
false
1
6,617
2020-03-15 16:27:58.057
How to put an icon for my android app using kivy-buildozer?
I made an android app using python-kivy (Buildozer make it to apk file) Now I want to put an image for the icon of the application. I mean the picture for the app-icon on your phone. how can I do this? I cannot find any code in kv
Just uncomment icon.filename: in the buildozer spec file and write a path to your icon image.
0.386912
false
1
6,618
2020-03-15 19:50:38.793
How to activate google colab gpu using just plain python
I'm new to google colab. I'm trying to do deep learning there. I have written a class to create and train a LSTM net using just python - not any specific deep learning library as tensorflow, pytorch, etc. I thought I was using a gpu because I had chosen the runtime type properly in colab. During the code execution, however, I was sometimes getting the message to quit gpu mode because I was not making use of it. So, my question: how can one use google colab gpu, using just plain python, without special ai libraries? Is there something like "decorator code" to put in my original code so that the gpu get activated?
It's just easier to use frameworks like PyTorch or Tensorflow. If not, you can try pycuda or numba, which are closer to "pure" GPU programming. That's even harder than just using PyTorch.
0.201295
false
1
6,619
2020-03-16 16:42:55.520
Overriding button functionality in kivy using an another button
Currently I am making a very simple interface which asks user to input parameters for a test and then run the test. The test is running brushless dc motor for several minutes. So when the run button is pressed the button is engaged for the time period till the function is finished executing. I have another stop button which should kill the test but currently cant use it since the run button is kept pressed till the function is finished executing and stop button cant be used during the test. I want to stop the test with pressing the stop button even if the run button function is currently being executed. The run button should release and the function should continuously check the stop function for stopping the test. Let me know how this can be executed.
Your problem is that all your code is taking place sequentially in a single thread. Once your first button is pressed, all of the results of that press are followed through before anything else can happen. You can avoid this by running the motor stuff in a separate thread. Your stop button will then need to interrupt that thread.
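A rough sketch of that idea using a threading.Event as the stop flag (the class and method names are assumptions about your app, not Kivy requirements):

import threading
import time

class TestRunner:
    def __init__(self):
        self._stop = threading.Event()

    def start(self):            # bind this to the Run button
        self._stop.clear()
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):             # bind this to the Stop button
        self._stop.set()

    def _run(self):
        while not self._stop.is_set():
            # run one small slice of the motor test, then check the flag again
            time.sleep(0.1)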
1.2
true
1
6,620
2020-03-17 13:45:02.310
how to compute the sum for each pair of rows x in matrice X and y in matrice Y?
I am trying to write a function in Python that takes as input two matrices X and Y and computes, for every pair of rows x in X and y in Y, the norm ||x - y||. I would like to do it without using for loops. Do you have an idea about how to do it?
I just solved it :D Instead of len(np.transpose(y)) I had to use len(y), and it worked perfectly with a for loop.
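For reference, a loop-free version of the pairwise norms with NumPy broadcasting (assuming X and Y are 2-D arrays with the same number of columns):

import numpy as np

X = np.random.rand(4, 3)
Y = np.random.rand(5, 3)

# diffs[i, j, :] = X[i] - Y[j]; norms[i, j] = ||X[i] - Y[j]||
diffs = X[:, np.newaxis, :] - Y[np.newaxis, :, :]
norms = np.linalg.norm(diffs, axis=2)
print(norms.shape)  # (4, 5)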
0
false
1
6,621
2020-03-18 08:43:05.267
How to set text color to gradient texture in kivy?
I have been able to create a Texture with a gradient color and set it as the background of a Label, Button, etc. But I am wondering how to set this as the text color of a Label?
You can't set the color property to a gradient, that just isn't what it does. Gradients should be achieved using images or textures directly applied to canvas vertex instructions.
0
false
1
6,622