Q_CreationDate | Title | Question | Answer | Score | Is_accepted | N_answers | Q_Id
---|---|---|---|---|---|---|---
2019-08-24 07:50:39.733 | How to get a callback when the specified epoch number is over? | I want to fine tune my model when using Keras, and I want to change my training data and learning rate when training reaches epoch 10, so how do I get a callback when the specified epoch number is over? | Actually, the way Keras works, this is probably not the best way to go; it would be much better to treat this as fine tuning, meaning that you finish the 10 epochs, save the model and then load the model (from another script) and continue training with the lr and data you fancy.
There are several reasons for this.
It is much clearer and easier to debug. You check your model properly after the 10 epochs, verify that it works properly and carry on.
It is much better to do several experiments this way, starting from epoch 10.
Good luck! | 0 | false | 1 | 6,267 |
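A minimal sketch of the save-then-resume approach this answer describes (the model shape, file name and the new learning rate are illustrative assumptions, not from the original):

```python
import numpy as np
from tensorflow import keras

# toy stand-ins for the real model and data
x_a, y_a = np.random.rand(100, 8), np.random.rand(100, 1)
x_b, y_b = np.random.rand(100, 8), np.random.rand(100, 1)
model = keras.Sequential([keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")

# phase 1: the first 10 epochs, then save to disk
model.fit(x_a, y_a, epochs=10, verbose=0)
model.save("phase1.h5")  # hypothetical filename

# phase 2 (could live in a separate script): reload, switch lr and data, keep training
model = keras.models.load_model("phase1.h5")
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")  # assumed new, lower lr
model.fit(x_b, y_b, epochs=10, verbose=0)
```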
2019-08-25 13:10:47.760 | Is Python's pipenv slow? | I tried switching from venv & conda to pipenv to manage my virtual environments, but one thing I noticed about pipenv is that it's oddly slow when it's doing "Locking", and it gets to the point where it stops executing with "Running out of time". Is it usually this slow or is it just me? Also, could you give me some advice regarding how to make it faster? | Pipenv is literally a joke. I spent 30 minutes staring at "Locking", which eventually fails after exactly 15 minutes, and I tried twice.
The most meaningless thirty minutes in my life.
Was my Pipfile complex? No. I included "flask" with "flake8" + "pylint" + "mypy" + "black".
Every time someone tries to fix the "dependency management" of Python, it just gets worse.
I'm expecting Poetry to solve this, but who knows.
Maybe it's time to move on to typed languages for web development. | 0.995055 | false | 2 | 6,268 |
2019-08-25 13:10:47.760 | Is Python's pipenv slow? | I tried switching from venv & conda to pipenv to manage my virtual environments, but one thing I noticed about pipenv is that it's oddly slow when it's doing "Locking", and it gets to the point where it stops executing with "Running out of time". Is it usually this slow or is it just me? Also, could you give me some advice regarding how to make it faster? | Try using --skip-lock, like this:
pipenv install --skip-lock
Note: do not skip the lock when going to production. | 0.16183 | false | 2 | 6,268
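A small shell sketch of how that might fit into a workflow (an assumption about usage, not an official recipe): skip locking while iterating, then produce the lock file explicitly before release.

```bash
# during development: install quickly without resolving the full dependency graph
pipenv install --skip-lock
# before going to production: resolve and write Pipfile.lock explicitly
pipenv lock
```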
2019-08-26 01:21:16.273 | Python script denied in terminal | I have a folder on my desktop that contains my script and when I run it in the pycharm ide it works perfectly but when I try to run from the terminal I get /Users/neelmukherjee/Desktop/budgeter/product_price.py: Permission denied
I'm not quite sure as to why this is happening
I tried using ls -al to check the permissions and for some reason, the file is labelled as
drwx------@ 33 neelmukherjee staff 1056 26 Aug 09:03 Desktop
I'm assuming this means that I should run this file as an admin. But how exactly can I do that?
My goal is to run my script from the terminal successfully, and that may be possible by running it as an admin, so how should I do that? | Ok, so I was able to figure it out. I had to use
chmod +x to help make it executable first.
chmod +x /Users/neelmukherjee/Desktop/budgeter/product_price.py
and then run /Users/neelmukherjee/Desktop/budgeter/product_price.py | 1.2 | true | 1 | 6,269
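One extra detail worth noting (an addition here, not part of the original answer): running the file directly like this also requires a shebang as its first line so the shell knows to hand it to Python.

```bash
# product_price.py should start with a shebang line such as:
#   #!/usr/bin/env python3
chmod +x /Users/neelmukherjee/Desktop/budgeter/product_price.py
/Users/neelmukherjee/Desktop/budgeter/product_price.py
```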
2019-08-26 06:10:10.417 | How to watch an hdfs directory and copy the latest file that arrives in hdfs to local? | I want to write a script in bash/python such that the script copies the latest file which arrives at hdfs directory.I know I can use inotify in local, but how to implement it in hdfs?
Can you please share sample code for it? When I searched for it on Google it gives me long code samples. Is there a simpler way other than inotify (if it's too complex)? | Inelegant hack:
Mount HDFS using FUSE, then periodically use find <mountpoint> -cmin -n to get a list of files changed in the last n minutes.
Then use find <mountpoint> -anewer <reference_file> to pick out the ones newer than a reference file. | 0 | false | 1 | 6,270
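A rough sketch of that hack, assuming HDFS is already mounted via FUSE; the mount point and destination paths are made up:

```bash
#!/usr/bin/env bash
MOUNT=/mnt/hdfs/incoming      # assumed FUSE mount of the watched HDFS directory
DEST=/data/local_copy         # assumed local destination

# copy files that changed within the last 5 minutes
find "$MOUNT" -type f -cmin -5 -print0 | while IFS= read -r -d '' f; do
    cp "$f" "$DEST/"
done
```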
2019-08-26 22:11:31.383 | PySpark Group and apply UDF row by row operation | I have a dataset that contains 'tag' and 'date'. I need to group the data by 'tag' (this is pretty easy), then within each group count the number of rows whose date is smaller than the date in that specific row. I basically need to loop over the rows after grouping the data. I don't know how to write a UDF which takes care of that in PySpark. I appreciate your help. | Do you need an aggregation?
df.groupBy("tag").agg({"date":"min"})
what about that ? | 0 | false | 1 | 6,271 |
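The aggregation above gives one value per tag; if the goal really is a per-row count of earlier dates, a window function is one way to sketch it (assuming dates within a tag are distinct; column names come from the question):

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("a", "2019-01-01"), ("a", "2019-01-03"), ("a", "2019-01-02")],
    ["tag", "date"],
)

# for each row, count rows of the same tag that come before it when ordered by date
w = Window.partitionBy("tag").orderBy("date").rowsBetween(Window.unboundedPreceding, -1)
df.withColumn("n_earlier", F.count("date").over(w)).show()
```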
2019-08-26 23:20:06.887 | How to install stuff like Requests and BeautifulSoup to use in Python? | I am an extreme beginner with Python and its libraries and installation in general. I want to make an extremely simple google search web scraping tool. I was told to use Requests and BeautifulSoup. I have installed python3 on my Mac by using brew install python3 and I am wondering how to get those two libraries
I googled around and many results said that by doing brew install python3 it will automatically install pip so I can use something like pip install requests but it says pip: command not found.
by running python3 --version it says Python 3.7.4 | Since you're running with Python3, not Python (which usually refers to 2.7), you should try using pip3.
pip, on the other hand, is the package installer for Python (2), not Python 3. | 1.2 | true | 1 | 6,272
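A short sketch of the commands this implies; the python3 -m form is a safe way to make sure pip matches the interpreter:

```bash
# install for the python3 interpreter specifically
python3 -m pip install requests beautifulsoup4
# or, equivalently, if pip3 is on PATH
pip3 install requests beautifulsoup4
```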
2019-08-27 14:17:56.143 | Stop subprocess.check_output from printing to the screen | I'm writing a Python program which uses subprocess to send files via cURL. It works, but for each file/zip it outputs the loading progress, time and other stuff which I don't want to be shown. Does anyone know how to stop it? | You should add stderr=subprocess.DEVNULL or stderr=subprocess.PIPE to your check_output call. | 1.2 | true | 1 | 6,273
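A minimal sketch of that suggestion (the curl arguments are placeholders); curl writes its progress meter to stderr, which is why redirecting stderr silences it:

```python
import subprocess

# capture curl's stdout, silence the progress meter on stderr
output = subprocess.check_output(
    ["curl", "-T", "archive.zip", "https://example.com/upload"],  # placeholder command
    stderr=subprocess.DEVNULL,
)
print(output.decode())
```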
2019-08-27 15:22:51.420 | In a Jupyter Notebook how do I split a bulleted list in multiple text cells? | Suppose I have a bulleted list in Jupyter in a markdown cell like this:
Item1
Item2
Item3
Is there a way to convert this one-cell list into three markdown text cells? | Ctrl + Shift + - will split a cell at the cursor. Otherwise, you cannot process the text of a cell with code unless you're importing a notebook within another notebook. | 0 | false | 1 | 6,274
2019-08-27 17:48:12.200 | Saving large numpy 2d arrays | I have an array with ~1,000,000 rows, each of which is a numpy array of 4,800 float32 numbers.
I need to save this as a csv file, however using numpy.savetxt has been running for 30 minutes and I don't know how much longer it will run for.
Is there a faster method of saving the large array as a csv?
Many thanks,
Josh | As pointed out in the comments, 1e6 rows * 4800 columns * 4 bytes per float32 is 18GiB. Writing a float to text takes ~9 bytes of text (estimating 1 for integer, 1 for decimal, 5 for mantissa and 2 for separator), which comes out to 40GiB. This takes a long time to do, since just the conversion to text itself is non-trivial, and disk I/O will be a huge bottle-neck.
One way to optimize this process may be to convert the entire array to text on your own terms, and write it in blocks using Python's binary I/O. I doubt that will give you too much benefit though.
A much better solution would be to write the binary data to a file instead of text. Aside from the obvious advantages of space and speed, binary has the advantage of being searchable and not requiring transformation after loading. You know where every individual element is in the file, if you are clever, you can access portions of the file without loading the entire thing. Finally, a binary file is more likely to be highly compressible than a relatively low-entropy text file.
Disadvantages of binary are that it is not human-readable, and not as portable as text. The latter is not a problem, since transforming into an acceptable format will be trivial. The former is likely a non-issue given the amount of data you are attempting to process anyway.
Keep in mind that human readability is a relative term. A human can not read 40 GiB of numerical data with understanding. A human can process A) a graphical representation of the data, or B) scan through relatively small portions of the data. Both cases are suitable for binary representations. Case A) is straightforward: load, transform and plot the data. This will be much faster if the data is already in a binary format that you can pass directly to the analysis and plotting routines. Case B) can be handled with something like a memory mapped file. You only ever need to load a small portion of the file, since you can't really show more than say a thousand elements on screen at one time anyway. Any reasonable modern platform should be able to keep up with the I/O and binary-to-text conversion associated with a user scrolling around a table widget or similar. In fact, binary makes it easier since you know exactly where each element belongs in the file. | 1.2 | true | 1 | 6,275
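A hedged sketch of the binary route with NumPy: np.save writes a compact .npy file, and mmap_mode lets you read slices later without loading the whole thing (sizes here are scaled down):

```python
import numpy as np

arr = np.random.rand(1000, 4800).astype(np.float32)   # stand-in for the real 1e6-row array

np.save("big_array.npy", arr)                   # binary, fast, keeps dtype and shape
view = np.load("big_array.npy", mmap_mode="r")  # memory-mapped: nothing loaded yet
print(view[123, :10])                           # touches only the bytes it needs

# if CSV is truly required for a small slice, export just that part
np.savetxt("sample.csv", view[:100], delimiter=",")
```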
2019-08-28 08:59:57.570 | Overriding celery result table (celery_taskmeta) for Postgres | I am using celery to do some distributed tasks and want to override celery_taskmeta and add some more columns. I use Postgres as DB and SQLAlchemy as ORM. I looked up celery docs but could not find out how to do it.
Help would be appreciated. | I would suggest a different approach - add an extra table with your extended data. This table would have a foreign-key constraint that would ensure each record is related to the particular entry in the celery_taskmeta. Why this approach? - It separates your domain (domain of your application), from the Celery domain. Also it does not involve modifying the table structure that may (in theory it should not) cause trouble. | 0.386912 | false | 1 | 6,276 |
2019-08-28 14:01:28.443 | how to remove airflow install | I tried pip uninstall airflow and pip3 uninstall airflow and both return
Cannot uninstall requirement airflow, not installed
I'd like to remove airflow completely and run a clean install. | Airflow is now packaged as apache-airflow. | 1.2 | true | 1 | 6,277
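So the clean reinstall would look roughly like this (assuming a recent Airflow published under the apache-airflow name):

```bash
pip uninstall apache-airflow
pip install apache-airflow
```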
2019-08-28 16:38:47.373 | ImportError: cannot import name 'deque' from 'collections' how to clear this? | I get
ImportError: cannot import name 'deque' from 'collections'
How do I resolve this issue? I have already changed the module name (the module name was collections.py) but this did not work. | In my case I had to rename my Python file from keyword.py to keyword2.py. | 0 | false | 2 | 6,278
2019-08-28 16:38:47.373 | ImportError: cannot import name 'deque' from 'collections' how to clear this? | I get
ImportError: cannot import name 'deque' from 'collections'
How do I resolve this issue? I have already changed the module name (the module name was collections.py) but this did not work. | I had the same problem when I ran the command python -m venv <env folder>. I renamed my file from collections.py to my_collections.py.
It worked! | 0 | false | 2 | 6,278 |
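Both answers come down to a local file shadowing a standard-library module; a quick way to confirm which file is actually imported, and to clear stale bytecode after renaming:

```bash
# prints the path of whatever module "collections" resolves to;
# if it shows your own collections.py, that file is shadowing the stdlib
python -c "import collections; print(collections.__file__)"
# after renaming the file, also remove leftover bytecode
rm -rf __pycache__ collections.pyc
```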
2019-08-30 04:47:58.880 | Authenticating Google Cloud Storage SDK in Cloud Functions | This is probably a really simple question, but I can't seem to find an answer online.
I'm using a Google Cloud Function to generate a CSV file and store the file in a Google Storage bucket. I've got the code working on my local machine using a json service account.
I'm wanting to push this code to a cloud function, however, I can't use the json service account file in the cloud environment - so how do I authenticate to my storage account in the cloud function? | You don't need the json service account file in the cloud environment.
If the GCS bucket and the Cloud Function are in the same project, you can just access it directly.
Otherwise, add your Cloud Function's default service account (note: it's the App Engine default service account) to your GCS project's IAM and grant the relevant GCS permission. | 0.999329 | false | 1 | 6,279
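A minimal sketch of what the function body might look like once IAM is set up: the client picks up the runtime's default credentials automatically, so no key file is needed (the function name, bucket and object names are placeholders):

```python
from google.cloud import storage

def upload_csv(request):  # hypothetical HTTP Cloud Function entry point
    client = storage.Client()                   # uses the runtime's default credentials
    bucket = client.bucket("my-report-bucket")  # placeholder bucket name
    blob = bucket.blob("reports/latest.csv")    # placeholder object name
    blob.upload_from_string("col_a,col_b\n1,2\n", content_type="text/csv")
    return "uploaded"
```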
2019-08-30 15:12:00.713 | Can selenium post real traffic on a website? | I have written a script in selenium python which is basically opening up a website and clicking on links in it and doing this thing multiple times..
The purpose of the script was to increase traffic on the website, but after the script was made it was observed that it is not registering as real traffic; the website just treats it as a test and ignores it.
Now I am wondering whether it is basically possible with selenium or not?
I have searched around and I suppose it is possible but don't know how. Does anyone know about this? Or is there any specific piece of code for this? | It does create traffic; the problem is that websites sometimes defend against bots and can guess whether the incoming connection is a bot or not. Maybe you should put some time.sleep(seconds) between actions to get past the website's bot detection and make it think you are a person. | 0 | false | 1 | 6,280
2019-08-30 22:08:19.797 | what are the options to implement random search? | So i want to implement random search but there is no clear cut example as to how to do this. I am confused between the following methods:
tune.randint()
ray.tune.suggest.BasicVariantGenerator()
tune.sample_from(lambda spec: blah blah np.random.choice())
Can someone please explain how and why these methods are the same/different for implementing random search? | Generally, you don't need to use ray.tune.suggest.BasicVariantGenerator().
For the other two choices, it's up to what suits your need. tune.randint() is just a thin wrapper around tune.sample_from(lambda spec: np.random.randint(...)). You can do more expressive/conditional searches with the latter, but the former is easier to use. | 0 | false | 1 | 6,281 |
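A small sketch of how the two interchangeable forms might sit in a search space (assuming a Ray Tune version where both names exist, as the answer implies; the config keys are made up):

```python
import numpy as np
from ray import tune

# two ways (per the answer) to express a random dimension
config = {
    "num_layers": tune.randint(1, 5),  # convenience wrapper
    "hidden_units": tune.sample_from(lambda spec: np.random.choice([64, 128, 256])),
}

# random search is then just drawing several samples from this space, e.g.:
# analysis = tune.run(my_trainable, config=config, num_samples=20)
```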
2019-09-01 08:27:48.270 | Python Linter installation issue with VScode | [warning VSCode newbie here]
When installing pylint from within VS Code I got this message:
The script isort.exe is installed in 'C:\Users\fjanssen\AppData\Roaming\Python\Python37\Scripts' which is not on PATH.
Which is correct. However, my Python is installed in C:\Program Files\Python37\
So I am thinking Python is installed for all users, while pylint seems to be installed for the user (me).
Checking the command line that VS Code used to install pylint, it indeed seems to install for the user:
& "C:/Program Files/Python37/python.exe" -m pip install -U pylint --user
So, I have some questions on resolving this issue;
1 - how can I get the immediate issue resolved?
- remove pylinter as user
- re-install for all users
2 - Will this (having python installed for all users) keep bugging me in the future?
- should I re-install python for the current user only when using it with VScode? | If the goal is to simply use pylint with VS Code, then you don't need to install it globally. Create a virtual environment and select that in VS Code as your Python interpreter and then pylint will be installed there instead of globally. That way you don't have to worry about PATH. | 0.386912 | false | 1 | 6,282 |
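A sketch of that suggestion on Windows (paths and names assumed); once the venv is selected as the interpreter in VS Code, pylint installs into it and PATH stops mattering:

```bat
:: from the project folder (Windows cmd)
py -3 -m venv .venv
.venv\Scripts\activate
python -m pip install pylint
:: then in VS Code run "Python: Select Interpreter" and choose .venv
```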
2019-09-01 14:17:27.727 | Taking specified number of user inputs and storing each in a variable | I am a beginner in python and want to know how to take just the user specified number of inputs in one single line and store each input in a variable.
For example:
Suppose I have 3 test cases and have to pass 4 integers separated by a white space for each such test case.
The input should look like this:
3
1 0 4 3
2 5 -1 4
3 7 1 9
I know about the split() method that helps you to separate integers with a space in between. But since I need to input only 4 integers, I need to know how to write the code so that the computer would take only 4 integers for each test case, and then the input line should automatically move, asking the user for input for the next test case.
Other than that, the other thing I am looking for is how to store each integer for each test case in some variable so I can access each one later. | For the first part, if you would like to store input in a variable, you would do the following...
(var_name) = input()
Or if you want to treat your input as an integer, and you are sure it is an integer, you would want to do this
(var_name) = int(input())
Then you could access the input by calling up the var_name.
Hope that helped :D | 0 | false | 1 | 6,283 |
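The answer covers storing single inputs; a small sketch of reading the exact format from the question (the number of test cases first, then 4 space-separated integers per line), keeping each case for later use:

```python
t = int(input())            # e.g. 3
cases = []
for _ in range(t):
    a, b, c, d = map(int, input().split())   # exactly 4 integers per line
    cases.append((a, b, c, d))

# each test case is now addressable later, e.g. cases[0][2] is the third
# integer of the first test case
print(cases)
```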
2019-09-02 11:48:46.377 | How to automatically update view once the database is updated in django? | I have a problem in which I have to show data entered into a database without having to press any button or doing anything.
I am creating an app for a hospital, it has two views, one for a doctor and one for a patient.
I want as soon as the patient enters his symptoms, it shows up on doctor immediately without having to press any button.
I have no idea how to do this.
Any help would be appreciated.
Thanks in advance | You can't do that with Django solely. You have to use some JS framework (React, Vue, Angular) and WebSockets, for example. | 0 | false | 1 | 6,284 |
2019-09-04 11:00:26.350 | how do I give permission to bash to run to multiple gcloud commands from local jupyter notebook | I am practicing model deployment to GCP cloud ML Engine. However, I receive errors stated below when I execute the following code section in my local jupyter notebook. Please note I do have bash installed in my local PC and environment variables are properly set.
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Error messages:
-bash: line 1: /mnt/c/Users/User/AppData/Local/Google/Cloud SDK/google-cloud-sdk/bin/gcloud: Permission denied
-bash: line 2: /mnt/c/Users/User/AppData/Local/Google/Cloud SDK/google-cloud-sdk/bin/gcloud: Permission denied
CalledProcessError: Command 'b'gcloud config set project $PROJECT\ngcloud config set compute/region $REGION\n\n'' returned non-zero exit status 126. | Perhaps you installed Google Cloud SDK with root?
try
sudo gcloud config set project $PROJECT
and
sudo gcloud config set compute/region $REGION | 0 | false | 1 | 6,285 |
2019-09-04 13:31:44.333 | how to use breakpoint in mydll.dll using python3 and pythonnet | I have a function imported from a DLL file using pythonnet:
I need to trace my function (in a C# DLL) from Python. | You can attach the Visual Studio debugger to the python.exe process which runs your DLL. | 0 | false | 1 | 6,286
2019-09-04 13:40:07.500 | Python Oracle DB Connect without Oracle Client | I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine.
Is it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed?
Like in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python.
Any help is appreciated
With the Oracle client installed, connecting is possible through the cx_Oracle module.
But on systems where the client is not installed, how can we connect to the DB? | It is not correct that Java can connect to Oracle without any Oracle-provided software.
It needs a compatible version of ojdbc*.jar to connect. Similarly, Python's cx_Oracle library needs the Oracle Instant Client software to be installed.
Instant Client is free software and has a small footprint. | 0.265586 | false | 2 | 6,287
2019-09-04 13:40:07.500 | Python Oracle DB Connect without Oracle Client | I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine.
Is it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed?
Like in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python.
Any help is appreciated
With the Oracle client installed, connecting is possible through the cx_Oracle module.
But on systems where the client is not installed, how can we connect to the DB? | Installing the Oracle client is a huge pain. Could you instead create a web service on a system that does have OCI and then connect to it that way? This might end up being a better solution than direct access. | 0 | false | 2 | 6,287
2019-09-05 03:55:31.020 | How to take multi-GPU support to the OpenNMT-py (pytorch)? | I used Python 2.7 to run PyTorch with GPU support. I used this command to train the dataset using multiple GPUs.
Can someone please tell me how I can fix this error with PyTorch in OpenNMT-py, or is there a way to get PyTorch multi-GPU support using Python 2.7?
Here is the command that I tried.
CUDA_VISIBLE_DEVICES=1,2
python train.py -data data/demo -save_model demo-model -world_size 2 -gpu_ranks 0 1
This is the error:
Traceback (most recent call last):
File "train.py", line 200, in
main(opt)
File "train.py", line 60, in main
mp = torch.multiprocessing.get_context('spawn')
AttributeError: 'module' object has no attribute 'get_context' | Maybe you can check whether your torch and Python versions fit the OpenNMT requirements.
I remember their torch requirement is 1.0 or 1.2 (1.0 is better). You may have to lower your version of torch. Hope that works. | 0 | false | 1 | 6,288
2019-09-05 18:28:58.863 | What does wave_read.readframes() return if there are multiple channels? | I understand how the readframes() method works for mono audio input, however I don't know how it will work for stereo input. Would it give a tuple of two byte objects? | A wave file has:
a sample rate of Wave_read.getframerate() frames per second (e.g. 44100 for audio from a CD),
a sample width of Wave_read.getsampwidth() bytes (i.e. 1 for 8-bit samples, 2 for 16-bit samples),
Wave_read.getnchannels() channels (typically 1 for mono, 2 for stereo).
Every time you do a Wave_read.readframes(N), you get N * sample_width * n_channels bytes; for stereo, the samples of the two channels are interleaved (left, right, left, right, ...) within those bytes. | 0 | false | 1 | 6,289
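A short sketch of unpacking an interleaved stereo file, assuming 16-bit PCM (the filename is a placeholder):

```python
import wave
import numpy as np

with wave.open("stereo.wav", "rb") as w:              # placeholder filename
    frames = w.readframes(w.getnframes())             # raw interleaved bytes
    samples = np.frombuffer(frames, dtype=np.int16)   # assumes sampwidth == 2
    samples = samples.reshape(-1, w.getnchannels())   # one column per channel
    left, right = samples[:, 0], samples[:, 1]
```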
2019-09-07 03:28:26.677 | Does SciPy have utilities for parsing and keeping track of the units associated with its constants? | scipy.constants.physical_constants returns (value, unit, uncertainty) tuples for many specific physical constants. The units are given in the form of a string. (For example, one of the options for the universal gas constant has a unit field of 'J kg^-1 K^-1'.)
At first blush, this seems pretty useful. Keeping track of your units is very important in scientific calculations, but, for the life of me, I haven't been able to find any facilities for parsing these strings into something that can be tracked. Without that, there's no way to simplify the combined units after different values have been added, subtracted, etc. with each other.
I know I can manually declare the units of constants with separate libraries such as what's available in SymPy, but that would make SciPy's own units completely useless (maybe just a convenience for printouts). That sounds pretty absurd. I can't imagine that SciPy doesn't know how to deal with units.
What am I missing?
Edit:
I know that SciPy is a stack, and I am well aware of what libraries are part of it. My question is about whether SciPy knows how to work with the very units it spits out with its constants (or if I have to throw out those units and manually redefine everything). As far as I can see, it can't actually parse its own unit strings (and nothing else in the ecosystem seems to know how to make heads or tails of them either). This doesn't make sense to me because if SciPy proper can't deal with these units, why would they be there in the first place? Not to mention, keeping track of your units across your calculations is the exact kind of thing you need to do in science. Forcing manual redefinitions of all the units someone went through the trouble of associating with all these constants doesn't make sense. | No, scipy the library does not have any notion of quantities with units and makes no guarantees when operating on quantities with units (from e.g. pint, astropy.Quantity or other objects from other unit-handling packages). | 0 | false | 1 | 6,290
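Since the answer points to external unit-handling packages, here is a rough sketch of pairing a scipy constant with pint; the string rewriting is an assumption about how these particular unit strings can be coaxed into pint's parser, not an official bridge:

```python
from scipy import constants
import pint

ureg = pint.UnitRegistry()

value, unit, uncertainty = constants.physical_constants["molar gas constant"]
# 'J mol^-1 K^-1' -> 'J*mol**-1*K**-1', which pint can parse
R = value * ureg(unit.replace("^", "**").replace(" ", "*"))

print(R)                          # ~8.314 joule / (kelvin mole)
print(R * (300 * ureg.kelvin))    # units combine and simplify from here on
```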
2019-09-07 11:50:52.290 | LightGBM unexpected behaviour outside of jupyter | I have this strange but when I'm using a LightGBM model to calculate some predictions.
I trained a LightGBM model inside of jupyter and dumped it into a file using pickle. This model is used in an external class.
My problem is when I call my prediction function from this external class outside of jupyter it always predicts an output of 0.5 (on all rows). When I use the exact same class inside of jupyter I get the expected output. In both cases the exact same model is used with the exact same data.
How can this behavior be explained and how can I achieve to get the same results outside of jupyter? Has it something to do with the fact I trained the model inside of jupyter? (I can't imagine why it would, but atm have no clue where this bug is coming from)
Edit: Used versions:
Both times the same lgb version is used (2.2.3), I also checked the python version which are equal (3.6.8) and all system paths (sys.path output). The paths are equal except of '/home/xxx/.local/lib/python3.6/site-packages/IPython/extensions' and '/home/xxx/.ipython'.
Edit 2: I copied the code I used inside of my Jupyter notebook and ran it as a normal Python file. The model made this way now works both inside and outside of Jupyter. I still wonder why this bug occurred. | It can't be a Jupyter problem, since Jupyter is just an interface to communicate with Python. The problem could be that you are using a different Python environment and a different version of lgbm... Check import lightgbm as lgb and lgb.__version__ in both Jupyter and your Python terminal and make sure they are the same (or check whether there have been some major changes between these versions). | 0.386912 | false | 1 | 6,291
2019-09-08 16:32:01.487 | Create Python setup | I have to create a setup screen with tk that starts only at the first boot of the application, where you will have to enter names etc. ... a sort of setup. Does anyone have any ideas on how to do this so that A) it is performed only the first time and B) the input can be saved and used in the other scripts? Thanks in advance | Why not use a file to store the details? You could use a text file, or you could use pickle to save a Python object and then reload it. On starting your application you could check whether the file exists and contains the necessary information; if it doesn't, you can activate your setup screen, and if it does, skip it. | 0.386912 | false | 1 | 6,292
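A tiny sketch of that file-based check using only the standard library (the config filename and fields are assumptions):

```python
import json
import os

CONFIG_PATH = "settings.json"   # assumed location for the saved setup data

def load_or_run_setup():
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            return json.load(f)          # later runs: reuse the saved answers
    # first run: show the tk setup screen here and collect the values
    config = {"name": "example user"}    # placeholder for real setup input
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f)
    return config

settings = load_or_run_setup()
```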
2019-09-09 13:09:00.117 | What is the best way to combine two data sets that depend on each other? | I am encountering a task and I am not entirely sure what the best solution is.
I currently have one data set in Mongo that I use to display user data on a website; the backend is in Python. A different team in the company recently created an API that has additional data that I would like to show alongside the user data, and the data from the newly created API is paired to my user data (it shows specific data per user), so I will need to sync it up.
I had initially thought of creating a cron job that runs weekly (as the "other" API data does not update often) and then taking the information and putting it directly into my data after pairing it up.
A coworker has suggested caching the "other" API data and then just returning the "mixed" data to display on the website.
What is the best course of action here? Actually adding the data to our data set would allow us to have 1 source of truth and not rely on the other end point, as well as doing less work each time we need the data. Also if we end up needing that information somewhere else in the project, we already have the data in our DB and can just use it directly without needing to re-organize/pair it.
Just looking for general pros and cons for each solution. Thanks! | Synchronization will always cost more than federation. I would either A) embrace CORS and integrate it in the front-end, or B) create a thin proxy in your Python app.
Which you choose depends on how quickly this API changes, whether you can respond to those changes, and whether you need graceful degradation in case of remote API failure. If it is not mission-critical data, and the API is reliable, just integrate it in the browser. If they support things like HTTP cache-control, all the better, the user's browser will handle it.
If the API is not scalable/reliable, then consider putting in a proxy server-side so that you can catch errors and provide graceful degradation. | 1.2 | true | 1 | 6,293 |
2019-09-09 20:26:07.763 | pandas pd.options.display.max_rows not working as expected | I’m using pandas 0.25.1 in Jupyter Lab and the maximum number of rows I can display is 10, regardless of what pd.options.display.max_rows is set to.
However, if pd.options.display.max_rows is set to less than 10 it takes effect and if pd.options.display.max_rows = None then all rows show.
Any idea how I can get a pd.options.display.max_rows of more than 10 to take effect? | min_rows controls the number of rows displayed when the output is truncated; they are split evenly between the top (head) and the bottom (tail), even if you put in an odd number, so raising pd.options.display.min_rows (introduced in pandas 0.25) is what lets more than 10 rows show. If you only want a set number of rows to be read without loading everything into memory,
another way is to use nrows = 'putnumberhere',
e.g. results = pd.read_csv('ex6.csv', nrows = 5) # reads the 5 rows from the top, 0 - 4.
If the dataframe has about 100 rows and you want to display only the first 5 rows from the top (no tail), use nrows. | -0.201295 | false | 1 | 6,294
2019-09-11 00:46:34.683 | Using tensorflow object detection for either or detection | I have used TensorFlow object detection for quite a while now. I am more of a user; I don't really know how it works. I am wondering, is it possible to train it to recognize that an object is something and not something else? For example, I want to detect cracks on tiles. Can I use object detection to do so, where I show an image of a tile and it can tell me if there is a crack (and also show the location), or it will tell me if there is no crack on the tile?
I have tried to train using pictures with and without defects, using 2 classes (1 for defect and 1 for no defect). But the results keep showing both (if the picture has a defect) in one picture. Is there a way to show only the one with the defect?
Basically I would like to do defect checking. This is a simplistic case of 1 defect, but the actual case will have a few defects.
Thank you. | In case you're only expecting input images of tiles, either with defects or not, you don't need a class for no defect.
The API adds a background class for everything which is not one of the other classes.
So you simply need to state one class - defect - and tiles on which it is not detected are not defective.
So in your training set, simply give bounding boxes for defects, and no bounding box in the case of no defect, and then your model should learn to detect the defects as mentioned above. | 1.2 | true | 1 | 6,295
2019-09-11 16:52:17.283 | How can I find memory leaks without external packages? | I am writing a data mining script to pull information off of a program called Agisoft PhotoScan for my lab. PhotoScan uses its own Python library (and I'm not sure how to access pip for this particular build), which has caused me a few problems installing other packages. After dragging, dropping, and praying, I've gotten a few packages to work, but I'm still facing a memory leak. If there is no way around it, I can try to install some more packages to weed out the leak, but I'd like to avoid this if possible.
My understanding of Python garbage collection so far is, when an object loses its reference, it should be deleted. I used sys.getrefcount() to check all my variables, but they all stay constant. I have a hunch that the issue could be in the mysql-connector package I installed, or in PhotoScan itself, but I am not sure how to go about testing. I will be more than happy to provide code if that will help! | It turns out that the memory leak was indeed with the PhotoScan program. I've worked around it by having a separate script open and close it, running my original script once each time. Thank you all for the help! | 0 | false | 1 | 6,296 |
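For the "without external packages" part of the title, the standard library's tracemalloc is one way to see where allocations grow between two points (a sketch, not what the asker ultimately needed, since the leak turned out to be in PhotoScan itself):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# ... run the code suspected of leaking here ...
data = [list(range(1000)) for _ in range(100)]   # stand-in workload

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:5]:
    print(stat)   # top lines by memory growth
```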
2019-09-15 06:56:39.743 | Start cmd and run multiple commands in the created cmd instance | I am trying to start cmd window and then running a chain of cmds in succession one after the other in that cmd window.
something like start cmd /k pipenv shell && py manage.py runserver the start cmd should open a new cmd window, which actually happens, then the pipenv shell should start a virtual environment within that cmd instance, also happens, and the py manage.py runserver should run in the created environment but instead it runs where the script is called.
Any ideas on how I can make this work? | Your py manage.py runserver command calling python executor in your major environment. In your case, you could use pipenv run manage.py runserver that detect your virtual env inside your pipfile and activate it to run your command. An alternative way is to use virtualenv that create virtual env directly inside your project directory and calling envname\Scripts\activate each time you want to run something inside your virtual env. | 0.201295 | false | 1 | 6,297 |
2019-09-15 21:33:55.463 | structured numpy ndarray, how to get values | I have a structured numpy ndarray la = {'val1':0,'val2':1} and I would like to return the vals using the 0 and 1 as keys, so I wish to return val1 when I have 0 and val2 when I have 1. This should have been straightforward; however, my attempts have failed, as I am not familiar with this structure.
How do I return only the corresponding val, or an array of all vals so that I can read them in order? | Just found out that I can use la.tolist() and it returns a dictionary, somehow, when I wanted a list; from there on I was able to solve my problem. | 0 | false | 1 | 6,298
2019-09-16 15:19:19.583 | impossible to use pip | I am starting with Python. I try to use matplotlib in my code but I have an error "ModuleNotFoundError: No module named 'matplotlib'" in my cmd. So I have tried to use pip on the cmd: pip install matplotlib.
But I get another error: "No python at 'C:...\Microsoft Visual Studio..."
Actually I don't use Microsoft Visual Studio anymore, so I uninstalled it, but I think I have to change the path for the pip module and I don't know how... I added the link to the Scripts folder of the Python installation to the environment variables but it doesn't change anything. How can I use pip?
Antoine | Your setup seems messed up. A couple of ideas:
long term solution: Uninstall everything related to Python, make sure your PATH environment variables are clean, and reinstall Python from scratch.
short term solution: Since py seems to work, you could go along with it: py, py -3 -m pip install <something>, and so on.
If you feel comfortable enough you could try to salvage what works by looking at the output of py -0p, this should tell you where are the Python installations that are potentially functional, and you could get rid of the rest. | 0 | false | 1 | 6,299 |
2019-09-16 16:45:45.577 | How to create button based chatbot | I have created a chatbot using RASA to work with free text and it is working fine. As per my new requirement I need to build a button-based chatbot which should follow a flowchart-like structure. I don't know how to do that; what I thought is to convert the flowchart into a graph data structure using networkx, but I am not sure whether it has that capability. I did search, but most of the examples are using Dialogflow or Chatfuel. Can I do it using networkx?
Please help. | Sure, you can.
You just need each button to point to another intent. Each button should have the /intent_value as its payload, and this will cause the NLU to skip evaluation and simply predict the intent. Then you can just bind a trigger to the intent or use the utter_ method.
Hope that helps. | 1.2 | true | 1 | 6,300 |
2019-09-16 19:35:35.813 | Teradataml: Remove all temporary tables created by Teradata MLE functions | In teradataml how should the user remove temporary tables created by Teradata MLE functions? | At the end of a session call remove_context() to trigger the dropping of tables. | 0 | false | 1 | 6,301 |
2019-09-17 06:03:09.647 | How to inherit controller of a third party module for customization Odoo 12? | I have a module with a controller and I need to inherit it in a newly created module for some customization. I searched about the controller inheritance in Odoo and I found that we can inherit Odoo's base modules' controllers this way:
from odoo.addons.portal.controllers.portal import CustomerPortal, pager as portal_pager, get_records_pager
but how can I do this for a third-party module's controller? In my case, the third-party module directory is one step back from my own module's directory. If I should import the class of a third-party module controller, how should I do it? | It is not a problem that you are using a custom module. If the module is installed in the database, you can import it from odoo.addons.
E.g.: from odoo.addons.your_module.controllers.main import MyClass | 1.2 | true | 1 | 6,302
2019-09-17 13:31:40.087 | how to deal with high cardinal categorical feature into numeric for predictive machine learning model? | I have two columns with high-cardinality categorical values; one column (area_id) has 21878 unique values and the other (page_entry) has 800 unique values. I am building a predictive ML model to predict the hits on a webpage.
column information:
area_id: all the locations that were visited during the session. (has location code number of different areas of a webpage)
page_entry: describes the landing page of the session.
how to change these two columns into numerical apart from one_hot encoding?
thank you. | One approach could be to group your categorical levels into smaller buckets using business rules. In your case for the feature area_id you could simply group them based on their geographical location, say all area_ids from a single district (or for that matter any other level of aggregation) will be replaced by a single id. Similarly, for page_entry you could group similar pages based on some attributes like nature of the web page like sports, travel, etc. In this way you could significantly reduce the number dimensions of your variables.
Hope this helps! | 0 | false | 1 | 6,303 |
2019-09-18 17:09:01.753 | How to restrict the maximum size of an element in a list in Python? | Problem Statement:
There are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones?
The phones can be interchanged along the sockets
What I've tried:
I've made a list with 6 elements whose initial value is 0. I've defined two functions: a switch function, which moves each phone one socket to the left, and a charge function, which adds a value of 10 (assumed charging time) to each element except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other lower-value elements still get 10 added until they attain the value of 60? | You cannot simply restrict the maximum element size. What you can do is check the element size with an if condition and terminate the process.
By the way, the answer is 6x60/5 = 72 mins. | 0 | false | 2 | 6,304
2019-09-18 17:09:01.753 | How to restrict the maximum size of an element in a list in Python? | Problem Statement:
There are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones?
The phones can be interchanged along the sockets
What I've tried:
I've made a list with 6 elements whose initial value is 0. I've defined two functions: a switch function, which moves each phone one socket to the left, and a charge function, which adds a value of 10 (assumed charging time) to each element except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other lower-value elements still get 10 added until they attain the value of 60? | In the charge function, add an if condition that checks the value of the element.
I'm not sure what your add function looks like exactly, but I would define the pseudocode to look something like this:
if element < 60:
add 10 to the element
This way, if an element is greater than or equal to 60, it won't get caught by the if condition and won't get anything added to it. | 0 | false | 2 | 6,304 |
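A small sketch of the capped charge step both answers describe; the switching/scheduling logic from the question is not reproduced here, so this naive loop shows only the if-condition cap, not the optimal 72-minute schedule:

```python
def charge(phones, sockets=5, step=10, full=60):
    """Add `step` minutes of charge to up to `sockets` phones that are not yet full."""
    charged = 0
    for i, level in enumerate(phones):
        if charged == sockets:
            break
        if level < full:                       # the if condition from the answers
            phones[i] = min(level + step, full)
            charged += 1

phones = [0] * 6
while any(level < 60 for level in phones):
    charge(phones)
print(phones)   # all elements capped at 60
```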
2019-09-18 18:44:22.307 | how to display plot images outside of jupyter notebook? | So, this might be an utterly dumb question, but I have just started working with Python and its data science libs, and I would like to see seaborn plots displayed, but I prefer to work with editors I have experience with, like VS Code or PyCharm, instead of Jupyter notebook. Of course, when I run the Python code, the console does not display the plots, as those are images. So how do I get to display and see the plots when not using Jupyter? | You can try running a matplotlib code example in a python or ipython console. It will show you a window with your plot.
Also, you can use Spyder instead of those consoles. It is free, and works well with python libraries for data science. Of course, you can check your plots in Spyder. | 0 | false | 1 | 6,305 |
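A minimal sketch of a script that shows a seaborn plot outside Jupyter; plt.show() is what opens the window when you run it from VS Code or PyCharm:

```python
import matplotlib.pyplot as plt
import seaborn as sns

sns.scatterplot(x=[1, 2, 3, 4], y=[2, 5, 3, 6])   # toy data
plt.show()   # opens a plot window when run as a plain script
```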
2019-09-19 18:35:33.863 | Tasks linger in celery amqp when publisher is terminated | I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoke the tasks where the publisher is not alive anymore?
I experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout | Another solution, which works in my case, is to add the next task only if the current processed ones are finished. In this case the queue doesn't fill up. | 1.2 | true | 2 | 6,306 |
2019-09-19 18:35:33.863 | Tasks linger in celery amqp when publisher is terminated | I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoke the tasks where the publisher is not alive anymore?
I experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout | There's nothing built-in to celery to monitor the producer / publisher status -- only the worker / consumer status. There are other alternatives that you can consider, for example by using a redis expiring key that has to be updated periodically by the publisher that can serve as a proxy for whether a publisher is alive. And then in the task checking to see if the flag for a publisher still exists within redis, and if it doesn't the task returns doing nothing. | 0.673066 | false | 2 | 6,306 |
2019-09-19 19:03:13.597 | Python "Magic methods" are realy methods? | I know how to use magical methods in python, but I would like to understand more about them.
For it I would like to consider three examples:
1) __init__:
We use this as constructor in the beginning of most classes. If this is a method, what is the object associated with it? Is it a basic python object that is used to generate all the other objects?
2) __add__
We use this to change the behaviour of the operator +. The same question above.
3) __name__:
The most common use of it is inside this kind of structure:if __name__ == "__main__":
This is return True when you are running the module as the main program.
My question is __name__ a method or a variable? If it is a variable what is the method associated with it. If this is a method, what is the object associated with it?
Since I do not understand very well these methods, maybe the questions are not well formulated. I would like to understand how these methods are constructed in Python. | The object is the class that's being instantiated, a.k.a. the Foo in Foo.__init__(actual_instance)
In a + b the object is a, and the expression is equivalent to a.__add__(b)
__name__ is a variable. It can't be a method because then comparisons with a string would always be False since a function is never equal to a string | 0.201295 | false | 1 | 6,307 |
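A small illustrative sketch tying the three examples together (the class and values are made up):

```python
class Money:
    def __init__(self, cents):      # runs on the new instance: m = Money(100)
        self.cents = cents

    def __add__(self, other):       # makes `a + b` work: a + b calls a.__add__(b)
        return Money(self.cents + other.cents)

a, b = Money(100), Money(250)
print((a + b).cents)                # 350
print(__name__)                     # "__main__" when run directly, the module name when imported
```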
2019-09-19 21:07:37.810 | Python - how to check if user is on the desktop | I am trying to write a program with python that works like android folders bit for Windows. I want the user to be able to single click on a desktop icon and then a window will open with the contents of the folder in it. After giving up trying to find a way to allow single click to open a desktop application (for only one application I am aware that you can allow single click for all files and folders), I decided to check if the user clicked in the location of the file and if they were on the desktop while they were doing that. So what I need to know is how to check if the user is viewing the desktop in python.
Thanks,
Harry
TLDR; how to check if user is viewing the desktop - python | I don't know if "single clicking" would work in any way but you can use Pyautogui to automatically click as many times as you want | 0 | false | 1 | 6,308 |
2019-09-20 11:50:30.050 | How to fine-tune a keras model with existing plus newer classes? | Good day!
I have a celebrity dataset on which I want to fine-tune a keras built-in model. SO far what I have explored and done, we remove the top layers of the original model (or preferably, pass the include_top=False) and add our own layers, and then train our newly added layers while keeping the previous layers frozen. This whole thing is pretty much like intuitive.
Now what I require is, that my model learns to identify the celebrity faces, while also being able to detect all the other objects it has been trained on before. Originally, the models trained on imagenet come with an output layer of 1000 neurons, each representing a separate class. I'm confused about how it should be able to detect the new classes? All the transfer learning and fine-tuning articles and blogs tell us to replace the original 1000-neuron output layer with a different N-neuron layer (N=number of new classes). In my case, I have two celebrities, so if I have a new layer with 2 neurons, I don't know how the model is going to classify the original 1000 imagenet objects.
I need a pointer on this whole thing: how exactly can I have a pre-trained model learn two new celebrity faces while also maintaining its ability to recognize all the 1000 ImageNet objects?
Thanks! | With transfer learning, you can make the trained model classify among the new classes on which you just trained, using the features learned from the new dataset and the features learned by the model from the dataset on which it was trained in the first place. Unfortunately, you cannot make the model classify between all the classes (original dataset classes + the classes of the second dataset), because when you add the new classes, it keeps weights only for classifying those.
But let's say, for experimentation, you change the number of output neurons (equal to the number of old + new classes) in the last layer; it will then assign random weights to these neurons, which on prediction will not give you a meaningful result.
This whole idea of making the model classify among old + new classes is still an open research area.
However, one way you can achieve it is to train your model from scratch on the whole data (old + new). | 0.545705 | false | 1 | 6,309 |
2019-09-20 13:43:24.297 | Nvenc session limit per GPU | I'm using Imageio, the python library that wraps around ffmpeg to do hardware encoding via nvenc. My issue is that I can't get more than 2 sessions to launch (I am using non-quadro GPUs). Even using multiple GPUs. I looked over NVIDIA's support matrix and they state only 2 sessions per gpu, but it seems to be per system.
For example I have 2 GPUs in a system. I can either use the env variable CUDA_VISIBLE_DEVICES or set the ffmpeg flag -gpu to select the GPU. I've verified gpu usage using Nvidia-smi cli. I can get 2 encoding sessions working on a single gpu. Or 1 session working on 2 separate gpus each. But I can't get 2 encoding sessions working on 2 gpus.
Even more strangely if I add more gpus I am still stuck at 2 sessions. I can't launch a third encoding session on a 3rd gpu. I am always stuck at 2 regardless of the # of gpus. Any ideas on how to fix this? | Nvidia limits it 2 per system Not 2 per GPU. The limitation is in the driver, not the hardware. There have been unofficially drivers posted to github which remove the limitation | 1.2 | true | 1 | 6,310 |
2019-09-21 07:16:21.710 | Setup of the Divio CMS Repositories | The Divio Django CMS offers two servers: TEST and LIVE. Are these also two separate repositories? Or how is this done in the background?
I'm wondering because I would have the feeling the LIVE server is its own repository that just pulls from the TEST whenever I press deploy. Is that correct? | All Divio projects (django CMS, Python, PHP, whatever) have a Live and Test environment.
By default, both build the project from its repository's master branch (in older projects, develop).
On request, custom tracking branches can be enabled, so that the Live and Test environments will build from separate branches.
When a build successfully completes, the Docker image can be reused until changes are made to the project's repository. This means that after a successful deployment on Test, the Docker image doesn't need to be rebuilt, and the Live environment can be deployed much faster from the pre-built image. (Obviously this is only possible when they are on the same branch.) | 0.386912 | false | 1 | 6,311 |
2019-09-22 12:12:44.420 | How do i retrain the model without losing the earlier model data with new set of data | for my current requirement, I'm having a dataset of 10k+ faces from 100 different people from which I have trained a model for recognizing the face(s). The model was trained by getting the 128 vectors from the facenet_keras.h5 model and feeding those vector value to the Dense layer for classifying the faces.
But the issue I'm facing currently is
if I want to train one more person's face, I have to retrain the whole model once again.
How should I get on with this challenge? I have read about a concept called transfer learning but I have no clues about how to implement it. Please give your suggestion on this issue. What can be the possible solutions to it? | With transfer learning you would copy an existing pre-trained model and use it for a different, but similar, dataset from the original one. In your case this would be what you need to do if you want to train the model to recognize your specific 100 people.
If you already did this and you want to add another person to the database without having to retrain the complete model, then I would freeze all layers (set layer.trainable = False for all layers) except for the final fully-connected layer (or the final few layers). Then I would replace the last layer (which had 100 nodes) with a layer with 101 nodes. You could even copy the weights to the first 100 nodes and maybe freeze those too (I'm not sure if this is possible in Keras). In this case you would re-use all the trained convolutional layers etc. and teach the model to recognise this new face. | 0.201295 | false | 1 | 6,312
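A rough Keras sketch of the freeze-and-widen idea from this answer (assuming the facenet_keras.h5 file from the question is available; layer sizes and names are assumptions):

```python
from tensorflow import keras

base = keras.models.load_model("facenet_keras.h5")   # embedding model named in the question
for layer in base.layers:
    layer.trainable = False                           # freeze everything already learned

embeddings = base.output                              # the 128-d vectors the question mentions
new_head = keras.layers.Dense(101, activation="softmax")(embeddings)  # 100 old + 1 new person
model = keras.Model(inputs=base.input, outputs=new_head)
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit(new_faces, new_labels, ...)  # train only the new head
```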
2019-09-22 13:48:06.487 | How to debug (500) Internal Server Error on Python Waitress server? | I'm using Python and Flask, served by Waitress, to host a POST API. I'm calling the API from a C# program that posts data and gets a string response. At least 95% of the time, it works fine, but sometimes the C# program reports an error:
(500) Internal Server Error.
There is no further description of the error or why it occurs. The only clue is that it usually happens in clusters -- when the error occurs once, it likely occurs several times in a row. Without any intervention, it then goes back to running normally.
Since the error is so rare, it is hard to troubleshoot. Any ideas as to how to debug or get more information? Is there error handling I can do from either the C# side or the Flask/Waitress side? | Your flask application should be logging the exception when it occurs. Aside from combing through your logs (which should be stored somewhere centrally) you could consider something like Sentry.io, which is pretty easy to setup with Flask apps. | 0 | false | 1 | 6,313 |
2019-09-23 05:52:42.417 | Check inputs in csv file | I`m new to python. I have a csv file. I need to check whether the inputs are correct or not. The ode should scan through each rows.
All columns for a particular row should contain values of same type: Eg:
All columns of second row should contain only string,
All columns of third row should contain only numbers... etc
I tried the following approach, (it may seem blunder):
I have only 15 rows, but no idea on number of columns(Its user choice)
df.iloc[1].str.isalpha()
This checks for string. I don`t know how to check ?? | Simple approach that can be modified:
Open df using df = pandas.from_csv(<path_to_csv>)
For each column, use df['<column_name>'] = df['<column_name>'].astype(str) (str = string, int = integer, float = float64, ..etc).
You can check column types using df.dtypes | 0.386912 | false | 1 | 6,314 |
2019-09-23 11:00:06.643 | how do I upgrade pip on Mac? | I cannot upgrade pip on my Mac from the Terminal.
According to the documentation I have to type the command:
pip install -U pip
I get the error message in the Terminal:
pip: command not found
I have Mac OS 10.14.2, python 3.7.2 and pip 18.1.
I want to upgrade to pip 19.2.3 | I have found an answer that worked for me:
sudo pip3 install -U pip --ignore-installed pip
This installed pip version 19.2.3 correctly.
It was very hard to find the correct command on the internet...glad I can share it now.
Thanks. | 0.135221 | false | 3 | 6,315 |
2019-09-23 11:00:06.643 | how do I upgrade pip on Mac? | I cannot upgrade pip on my Mac from the Terminal.
According to the documentation I have to type the command:
pip install -U pip
I get the error message in the Terminal:
pip: command not found
I have Mac OS 10.14.2, python 3.7.2 and pip 18.1.
I want to upgrade to pip 19.2.3 | pip3 install --upgrade pip
this works for me! | 0.424784 | false | 3 | 6,315 |
2019-09-23 11:00:06.643 | how do I upgrade pip on Mac? | I cannot upgrade pip on my Mac from the Terminal.
According to the documentation I have to type the command:
pip install -U pip
I get the error message in the Terminal:
pip: command not found
I have Mac OS 10.14.2, python 3.7.2 and pip 18.1.
I want to upgrade to pip 19.2.3 | I came on here to figure out the same thing but none of this things seemed to work. so I went back and looked how they were telling me to upgrade it but I still did not get it. So I just started trying things and next thing you know I seen the downloading lines and it told me that my pip was upgraded. what I used was (pip3 install -- upgrade pip). I hope this can help anyone else in need. | 0 | false | 3 | 6,315 |
2019-09-23 22:18:51.993 | how to remove duplicates when using pandas concat to combine two dataframe | I have two data from.
df1 with columns: id,x1,x2,x3,x4,....xn
df2 with columns: id,y.
df3 =pd.concat([df1,df2],axis=1)
when I use pandas concat to combine them, it became
id,y,id,x1,x2,x3...xn.
there are two id here.How can I get rid of one.
I have tried :
df3=pd.concat([df1,df2],axis=1).drop_duplicates().reset_index(drop=True).
but not work. | drop_duplicates() only removes rows that are completely identical.
what you're looking for is pd.merge().
pd.merge(df1, df2, on='id) | 0 | false | 1 | 6,316 |
2019-09-25 00:25:17.317 | Supremum Metric in Python for Knn with Uncertain Data | I'm trying to make a classifier for uncertain data (e.g ranged data) using python. in certain dataset, the list is a 2D array or array of record (contains float numbers for data and a string for labels), where in uncertain dataset the list is a 3D array (contains range of float numbers for data and a string for labels). i managed to manipulate a certain dataset to be uncertain using uniform probability distribution. A research paper says that i have to use supremum distance metric. how do i implement this metric in python? note that in uncertain dataset, both test set and training set is uncertain | I found out using scipy spatial distance and tweaking for-loops in standard knn helps a lot | 1.2 | true | 1 | 6,317 |
2019-09-25 13:06:45.637 | Dataflow Sideinputs - Worker Cache Size in SDK 2.x | I am experiencing performance issues in my pipeline in a DoFn that uses large side input of ~ 1GB. The side input is passed using the pvalue.AsList(), which forces materialization of the side input.
The execution graph of the pipeline shows that the particular step spends most of the time for reading the side input. The total amount of data read exceeds the size of the side input by far. Consequently, I conclude that the side input does not fit into memory / cache of the workers even though their RAM is sufficient (using n1-highmem4 workers with 26 GB RAM).
How do I know how big this cache actually is? Is there a way to control its size using Beam Python SDK 2.15.0 (like there was the pipeline option --workerCacheMb=200 for Java 1.x SDK)?
There is no easy way of shrinking my side input more than 10%. | If you are using AsList, you are correct that the whole side input should be loaded into memory. It may be that your worker has enough memory available, but it just takes very long to read 1GB of data into the list. Also, the size of the data that is read depends on the encoding of it. If you can share more details about your algorithm, we can try to figure out how to write a pipeline that may run more efficiently.
Another option may be to have an external service to keep your side input - for instance, a Redis instance that you write to on one side and read from on the other side. | 0 | false | 1 | 6,318
2019-09-26 08:40:43.480 | Install packages with Conda for a second Python installation | I recently installed Anaconda in my Windows. I did that to use some packages from some specific channels required by an application that is using Python 3.5 as its scripting language.
I adjusted my PATH variable to use Conda, pointing to the Python environment of the particular program, but now I would like to use Conda as well for a different Python installation that I have on my Windows.
When installing Anaconda then it isn't asking for a Python version to be related to. So, how can I use Conda to install into the other Python installation. Both Python installations are 'physical' installations - not virtual in any way. | Uninstall the other python installation and create different conda environments, that is what conda is great at.
Using conda from your Anaconda installation to manage packages of another, independent Python installation is not possible.
Something like this could serve your needs:
Create one env for python 3.5 conda create -n py35 python=3.5
Create one env for some other python version you would like to use, e.g. 3.6: conda create -n py36 python=3.6
Use conda activate py35, conda deactivate, conda activate py36 to switch between your virtual environments. | 1.2 | true | 1 | 6,319 |
2019-09-26 14:54:39.137 | S3 file to Mysql AWS via Airflow | I have been learning how to use Apache-Airflow for the last couple of months and wanted to see if anybody has any experience with transferring CSV files from S3 to a MySQL database in AWS (RDS). Or from my local drive to MySQL.
I managed to send everything to an S3 bucket to store them in the cloud using airflow.hooks.S3_hook and it works great. I used boto3 to do this.
Now I want to push this file to a MySQL database I created in RDS, but I have no idea how to do it. Do I need to use the MySQL hook and add my credentials there and then write a python function?
Also, It doesn't have to be S3 to Mysql, I can also try from my local drive to Mysql if it's easier.
Any help would be amazing! | Were you able to resolve the 'MySQLdb._exceptions.OperationalError: (2068, 'LOAD DATA LOCAL INFILE file request rejected due to restrictions on access' issue? | 0 | false | 1 | 6,320
2019-09-27 16:26:03.963 | Change column from Pandas date object to python datetime | I have a dataset with the first column as date in the format: 2011-01-01 and type(data_raw['pandas_date']) gives me pandas.core.series.Series
I want to convert the whole column into date time object so I can extract and process year/month/day from each row as required.
I used pd.to_datetime(data_raw['pandas_date']) and it printed output with dtype: datetime64[ns] in the last line of the output. I assume that values were converted to datetime.
but when I run type(data_raw['pandas_date']) again, it still says pandas.core.series.Series and anytime I try to run .dt function on it, it gives me an error saying this is not a datetime object.
So, my question is - it looks like the to_datetime function changed my data into a datetime object, but how do I apply/save it to the pandas_date column? I tried
data_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date'])
but this doesn't work either, I get the same result when I check the type. Sorry if this is too basic. | type(data_raw['pandas_date']) will always return pandas.core.series.Series, because the object data_raw['pandas_date'] is of type pandas.core.series.Series. What you want is to get the dtype, so you could just do data_raw['pandas_date'].dtype.
data_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date'])
This is correct, and if you do data_raw['pandas_date'].dtype again afterwards, you will see that it is datetime64[ns]. | 1.2 | true | 1 | 6,321
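A short runnable version of the assignment described in the answer, using the column name from the question and made-up dates:
import pandas as pd
data_raw = pd.DataFrame({"pandas_date": ["2011-01-01", "2011-02-15"]})
data_raw["pandas_date"] = pd.to_datetime(data_raw["pandas_date"])
print(data_raw["pandas_date"].dtype)     # datetime64[ns]
print(data_raw["pandas_date"].dt.year)   # the .dt accessor now works per row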
2019-09-28 00:05:03.313 | Using BFS/DFS To Find Path With Maximum Weight in Directed Acyclic Graph | You have a 2005 Honda Accord with 50 miles (weight max) left in the tank. Which McDonalds locations (graph nodes) can you visit within a 50 mile radius? This is my question.
If you have a weighted directed acyclic graph, how can you find all the nodes that can be visited within a given weight restriction?
I am aware of Dijkstra's algorithm but I can't seem to find any documentation of its uses outside of min-path problems. In my example, there's no node in particular that we want to end at, we just want to go as far as we can without going over the maximum weight. It seems like you should be able to use BFS/DFS in order to solve this, but I can't find documentation for implementing those in graphs with edge weights (again, outside of min-path problems). | Finding the longest path to a vertex V (a McDonald's in this case) can be accomplished using topological sort. We can start by sorting our nodes topologically, since sorting topologically will always return the source node U, before the endpoint, V, of a weighted path. Then, since we would now have access to an array in which each source vertex precedes all of its adjacent vertices, we can search through every path beginning with vertex U and ending with vertex V and set a value in an array with an index corresponding to U to the maximum edge weight we find connecting U to V. If the sum of the maximal distances exceeds 50 without reaching a McDonald's, we can backtrack and explore the second highest weight path going from U to V, and continue backtracking should we exhaust every path exiting from vertex U. Eventually we will arrive at a McDonald's, which will be the McDonald's with the maximal distance from our original source node while maintaining a total spanning distance under 50. | 0 | false | 2 | 6,322
2019-09-28 00:05:03.313 | Using BFS/DFS To Find Path With Maximum Weight in Directed Acyclic Graph | You have a 2005 Honda Accord with 50 miles (weight max) left in the tank. Which McDonalds locations (graph nodes) can you visit within a 50 mile radius? This is my question.
If you have a weighted directed acyclic graph, how can you find all the nodes that can be visited within a given weight restriction?
I am aware of Dijkstra's algorithm but I can't seem to find any documentation of its uses outside of min-path problems. In my example, there's no node in particular that we want to end at, we just want to go as far as we can without going over the maximum weight. It seems like you should be able to use BFS/DFS in order to solve this, but I can't find documentation for implementing those in graphs with edge weights (again, outside of min-path problems). | For this problem, you will want to run a DFS from the starting node. Recurse down the graph from each child of the starting node until a total weight of over 50 is reached. If a McDonald's is encountered along the traversal, record the node reached in a list or set. By doing so, you will achieve the most efficient algorithm possible as you will not have to create a complete topological sort as the other answer to this question proposes. Even though this algorithm still technically runs in O(ElogV) time, by recursing back on the DFS when a path distance of over 50 is reached you avoid traversing through the entire graph when not necessary. | 0 | false | 2 | 6,322
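A minimal sketch of the DFS idea in the answer above: recurse from the start node, stop expanding a path once its running weight exceeds the budget, and record every target reached. The graph below is made up for illustration:
def reachable_targets(graph, targets, start, limit):
    # graph: {node: [(neighbour, edge_weight), ...]}, assumed acyclic as in the question
    found = set()
    def dfs(node, used):
        if node in targets:
            found.add(node)
        for nxt, weight in graph.get(node, []):
            if used + weight <= limit:      # prune paths over the fuel budget
                dfs(nxt, used + weight)
    dfs(start, 0)
    return found

graph = {"home": [("a", 20), ("b", 45)], "a": [("mcd1", 25)], "b": [("mcd2", 10)]}
print(reachable_targets(graph, {"mcd1", "mcd2"}, "home", 50))   # {'mcd1'}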
2019-09-29 23:15:06.167 | How does Qt Designer work in terms of creating more than 1 dialog per file? | I'm starting to use Qt Designer.
I am trying to create a game, and the first task that I want to do is to create a window where you have to input the name of the map that you want to load. If the map exists, I then switch to the main game window, and if the name of the map doesn't exist, I want to display a popup window that tells the user that the name of the map they wrote is not valid.
I'm a bit confused with the part of showing the "not valid" pop-up window.
I realized that I have two options:
Creating 2 separate .ui files, and with the help of the .show() and .hide() commands showing the corresponding window if the user input is invalid.
The other option that I'm thinking of is creating both windows in the same .ui file, which seems to be a better option, but I don't really know how to work with windows that come from the same file. Should I create a separate class for each of the windows that come from the Qt Designer file? If not, how can I access both windows from the same class? | Your second option seems impossible; it would be great if you could share the .ui, because in the years I have worked with Qt Designer I have not been able to implement what you point out.
A .ui is an XML file that describes the elements and their properties that will be used to create a class that fills a particular widget. So, considering the above, your second option is impossible.
This means that the only viable option is your first method. | 1.2 | true | 1 | 6,323
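A rough sketch of that first option (two separate .ui files), assuming PyQt5; the .ui file names, the widget names (mapNameEdit, loadButton) and the map check are invented for illustration:
from PyQt5 import QtWidgets, uic

def map_exists(name):
    return name in {"level1", "level2"}          # placeholder validity check

app = QtWidgets.QApplication([])
ask_window = uic.loadUi("ask_map_name.ui")       # dialog with a line edit and a button
game_window = uic.loadUi("main_game.ui")         # the main game window

def on_load_clicked():
    name = ask_window.mapNameEdit.text()
    if map_exists(name):
        ask_window.hide()
        game_window.show()
    else:
        QtWidgets.QMessageBox.warning(ask_window, "Invalid map", "That map name does not exist.")

ask_window.loadButton.clicked.connect(on_load_clicked)
ask_window.show()
app.exec_()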
2019-10-01 02:27:00.000 | Start at 100 and count up till 999 | So, this is for my assignment and I have to create a flight booking system. One of the requirements is that it should create 3 digit passenger code that does not start with zeros (e.g. 100 is the smallest acceptable value) and I have no idea how I can do it since I am a beginner and I just started to learn Python. I have made classes for Passenger, Flight, Seating Area so far because I just started on it today. Please help. Thank you. | I like list comprehension for making a list of 100 to 999:
flights = [i for i in range(100, 1000)]
For the random version, there is probably a better way, but Random.randint(x, y) creates a random int, inclusive of the endpoints:
from random import Random
rand = Random()
flight = rand.randint(100,999)
Hope this helps with your homework, but do try to understand the assignment and how the code works...lest you get wrecked on the final! | 0 | false | 1 | 6,324 |
2019-10-01 07:26:35.203 | String problem / Select all values > 8000 in pandas dataframe | I want to select all values bigger than 8000 within a pandas dataframe.
new_df = df.loc[df['GM'] > 8000]
However, it is not working. I think the problem is that the value comes from an Excel file and the number is interpreted as a string, e.g. "1.111,52". Do you know how I can convert such a string to float / int in order to compare it properly? | You can check df.dtypes to see the type of each column. Then, if the column type is not what you want, you can change it with df['GM'].astype(float), and new_df = df.loc[df['GM'].astype(float) > 8000] should work as you expect. | 0.201295 | false | 1 | 6,325
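Since the question's values look like European-formatted strings ("1.111,52"), a plain astype(float) will fail on them; a hedged sketch that strips the thousands separator and swaps the decimal comma first, using made-up sample values:
import pandas as pd
df = pd.DataFrame({"GM": ["1.111,52", "9.250,00", "750,10"]})
df["GM"] = (df["GM"].str.replace(".", "", regex=False)            # drop thousands separators
                    .str.replace(",", ".", regex=False)           # comma becomes decimal point
                    .astype(float))
new_df = df.loc[df["GM"] > 8000]                                  # keeps the 9250.0 row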
2019-10-03 19:17:11.890 | Can we detect multiple objects in image using caltech101 dataset containing label wise images? | I have a caltech101 dataset for object detection. Can we detect multiple objects in single image using model trained on caltech101 dataset?
This dataset contains only folders (label-wise) and in each folder, some images label wise.
I have trained a model on the caltech101 dataset using Keras and it predicts a single object per image. Results are satisfactory, but is it possible to detect multiple objects in a single image?
As far as I know, for detecting multiple objects in a single image, we should have a dataset containing images and bounding boxes with the names of the objects in the images.
Thanks in advance | The dataset can be used for detecting multiple objects but with below steps to be followed:
The dataset has to be annotated with bounding boxes on the object present in the image
After the annotations are done, you can use any of the Object detectors to do transfer learning and train on the annotated caltech 101 dataset
Note: - Without annotations, with just the caltech 101 dataset, detecting multiple objects in a single image is not possible | 1.2 | true | 1 | 6,326 |
2019-10-04 13:40:16.797 | Data type to save expanding data for data logging in Python | I am writing a serial data logger in Python and am wondering which data type would be best suited for this. Every few milliseconds a new value is read from the serial interface and is saved into my variable along with the current time. I don't know how long the logger is going to run, so I can't preallocate for a known size.
Intuitively I would use an numpy array for this, but appending / concatenating elements creates a new array each time from what I've read.
So what would be the appropriate data type to use for this?
Also, what would be the proper vocabulary to describe this problem? | Python doesn't have arrays as you think of them in most languages. It has "lists", which use the standard array syntax myList[0], but unlike arrays, lists can change size as needed. Using myList.append(newItem), you can add more data to the list without any trouble on your part.
Since you asked for proper vocabulary, a useful concept for you would be "linked lists", which are a way of implementing array-like things with varying lengths in other languages. | 0 | false | 1 | 6,327
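A tiny runnable sketch of the list-based logging the answer describes, with the serial read replaced by a stand-in function:
import time, random

def read_serial_value():                 # stand-in for the real serial read
    return random.random()

samples = []                             # a plain list grows as new readings arrive
for _ in range(5):                       # stand-in for "log until stopped"
    samples.append((time.time(), read_serial_value()))
    time.sleep(0.005)
print(samples)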
2019-10-04 20:01:45.247 | How do you push in pycharm if the commit was already done? | Once you commit in pycharm it takes you to a second window to go through with the push. But if you only hit commit and not commit/push then how do you bring up the push option. You can't do another commit unless changes are made. | In the upper menu [VCS] -> [Git...] -> [Push] | 0.673066 | false | 1 | 6,328 |
2019-10-06 17:33:10.463 | ModuleNotFoundError: No module named 'telegram' | Trying to run the python-telegram-bot library through Jupyter Notebook I get the error in the title. I tried many ways to reinstall it, but none of the answers on any forums helped me. What could the mistake be, and how do I avoid it while installing? | Do you have a directory named "telegram"? If you do, rename your directory and try again to prevent an import conflict.
good luck:) | 0.386912 | false | 1 | 6,329 |
2019-10-07 20:48:55.507 | argparse.print_help() ArgumentParser message string | I am writing a slack bot, and I am using argparse to parse the arguments sent into the slackbot, but I am trying to figure out how to get the help message string so I can send it back to the user via the slack bot.
I know that ArgumentParser has a print_help() method, but that is printed via console and I need a way to get that string. | I just found out that there's a method called format_help() that generates that help string | 0.386912 | false | 1 | 6,330 |
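A minimal example of format_help(), which returns the same text that print_help() writes to the console; the parser arguments are placeholders:
import argparse

parser = argparse.ArgumentParser(description="Slack bot commands")
parser.add_argument("--channel", help="channel to post to")
help_text = parser.format_help()     # a plain string you can send back through the bot
print(help_text)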
2019-10-07 22:25:21.107 | Is it possible to have a c++ dll run a python program in background and have it populate a map of vectors? If so, how? | There will be an unordered_map in c++ dll containing some 'vectors' mapped to its 'names'. For each of these 'names', the python code will keep on collecting data from a web server every 5 seconds and fill the vectors with it.
Is such a dll possible? If so, how to do it? | You can make the Python code into an executable. Run the executable file from the DLL as a separate process and communicate with it via TCP localhost socket - or some other Windows utility that allows to share data between different processes.
That's a slow mess. I agree, but it works.
You can also embed the Python interpreter and run the script in the DLL... I suppose. | 0 | false | 1 | 6,331
2019-10-08 00:10:57.677 | What is the difference between spline filtering and spline interpolation? | I'm having trouble connecting the mathematical concept of spline interpolation with the application of a spline filter in python. My very basic understanding of spline interpolation is that it's fitting the data in a piece-wise fashion, and the piece-wise polynomials fitted are called splines. But its applications in image processing involve pre-filtering the image and then performing interpolation, which I'm having trouble understanding.
To give an example, I want to interpolate an image using scipy.ndimage.map_coordinates(input, coordinates, prefilter=True), and the keyword prefilter according to the documentation:
Determines if the input array is prefiltered with spline_filter before interpolation
And the documentation for scipy.ndimage.interpolation.spline_filter simply says the input is filtered by a spline filter. So what exactly is a spline filter and how does it alter the input data to allow spline interpolation? | I'm guessing a bit here. In order to calculate a 2nd order spline, you need the 1st derivative of the data. To calculate a 3rd order spline, you need the second derivative. I've not implemented an interpolation motor beyond 3rd order, but I suppose the 4th and 5th order splines will require at least the 3rd and 4th derivatives.
Rather than recalculating these derivatives every time you want to perform an interpolation, it is best to calculate them just once. My guess is that spline_filter is doing this pre-calculation of the derivatives which then get used later for the interpolation calculations. | 0.386912 | false | 1 | 6,332 |
2019-10-08 08:59:39.373 | How to show a highlighted label when The mouse is on widget | I need to know how to make a highlighted label (or small box) appear when the mouse is over a widget, like in a browser: when you put the mouse on the (reload/back/etc...) button, a small box appears and tells you what that button does,
and I want that for any widget, not only widgets on the toolbar. | As the comment of @ekhumoro says,
setToolTip is the solution. | 1.2 | true | 1 | 6,333
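A one-screen illustration, assuming PyQt5 (setToolTip is available on any QWidget):
from PyQt5.QtWidgets import QApplication, QPushButton

app = QApplication([])
button = QPushButton("Reload")
button.setToolTip("Reload the page")   # the small box shown when the mouse hovers
button.show()
app.exec_()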
2019-10-08 14:18:17.240 | xmlsec1 not found on ibm-cloud deployment | I am having a hard time installing a Python lib called python3-saml.
To narrow down the problem I created a very simple application on ibm-cloud and I can deploy it without any problem, but when I add as a requirement the lib python3-saml
I got an exception saying:
pkgconfig.pkgconfig.PackageNotFoundError: xmlsec1 not found
The above was a deployment on ibm-cloud, but I did try to install the same python lib locally and I got the same error message, locally I can see that I have the xmlsec1 installed.
Any help on how to successfully deploy it on ibm-cloud using python3-saml?
Thanks in advance | I had a similar issue and I had to install the "xmlsec1-devel" on my CentOS system before installing the python package. | 0.386912 | false | 1 | 6,334 |
2019-10-10 09:57:25.667 | Using a function from a built-in module in your own module - Python | I'm new with Python and new on Stackoverflow, so please let me know if this question should be posted somewhere else or you need any other info :). But I hope someone can help me out with what seems to be a rather simple mistake...
I'm working with Python in Jupyter Notebook and am trying to create my own module with some selfmade functions/loops that I often use. However, when I try to some of the functions from my module, I get an error related to the import of the built-in module that is used in my own module.
The way I created my own module was by:
creating different blocks of code in a notebook and downloading it
as 'Functions.py' file.
saving this Functions.py file in the folder that i'm currently working in (with another notebook file)
in my current notebook file (where i'm doing my analysis), I import my module with 'import Functions'.
So far, the import of my own module seems to work. However, some of my self-made functions use functions from built-in modules. E.g. my plot_lines() function uses math.ceil() somewhere in the code. Therefore, I imported 'math' in my analysis notebook as well. But when I try to run the function plot_lines() in my notebook, I get the error "NameError: name 'math' is not defined".
I tried to solve this error by adding the code 'import math' to the function in my module as well, but this did not resolve the issue.
So my question is: how can I use functions from built-in Python modules in my own modules?
Thanks so much in advance for any help! | If anyone encounters the same issue:
add 'import math' to your own module.
Make sure that you actually reload your adjusted module, e.g. by restarting your kernel! | 0 | false | 1 | 6,335
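A sketch of what that looks like: Functions.py carries its own import, and the notebook reloads the module after edits (the plot_lines body here is only a placeholder):
# Functions.py
import math

def plot_lines(n):
    return math.ceil(n / 2)      # placeholder body; the real function does the plotting

# in the analysis notebook
import importlib
import Functions
importlib.reload(Functions)      # pick up changes without restarting the kernel
print(Functions.plot_lines(5))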
2019-10-10 14:40:43.443 | how to post-process raw images using rawpy to have the same effect with default output like ISP in camera? | I use the rawpy module in Python to post-process raw images; however, no matter how I set the params, the output is different from the default RGB produced by the camera ISP. Does anyone know how to handle this?
I have tried the following ways:
Default:
output = raw.postprocess()
Use Camera White balance:
output = raw.postprocess(use_camera_wb=True)
No auto bright:
output = raw.postprocess(use_camera_wb=True, no_auto_bright=True)
None of these could recover the RGB image as the camera ISP output. | The dcraw/libraw/rawpy stack is based on publicly available (reverse-engineered) documentation of the various raw formats, i.e., it's not using any proprietary libraries provided by the camera vendors. As such, it can only make an educated guess at what the original camera ISP would do with any given image. Even if you have a supposedly vendor-neutral DNG file, chances are the camera is not exporting everything there in full detail.
So, in general, you won't be able to get the same output. | 0 | false | 1 | 6,336 |
2019-10-11 00:23:12.790 | How does TF know what object you are finetuning for | I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs (detections are random; it does not actually seem to be detecting, rather just randomly placing an inference). I performed 20,000 steps and had a loss of 2.3.
I was wondering how TF knows that what I am training it on with my custom label map
ID:1
Name: 'boat'
Is the same as what it regards as a boat ( with an ID of 9) in the mscoco label map.
Or whether, by using an ID of 1, I am training the model's idea of what a person looks like to be a boat?
Thank you in advance for any advice. | The model works with the category labels (numbers) you give it. The string "boat" is only a translation for human convenience in reading the output.
If you have a model that has learned to identify a set of 40 images as class 9, then giving it a very similar image that you insist is class 1 will confuse it. Doing so prompts the model to elevate the importance of differences between the 9 boats and the new 1 boats. If there are no significant differences, then the change in weights will find unintended features that you don't care about.
The result is a model that is much less effective. | 0 | false | 2 | 6,337 |
2019-10-11 00:23:12.790 | How does TF know what object you are finetuning for | I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs (detections are random; it does not actually seem to be detecting, rather just randomly placing an inference). I performed 20,000 steps and had a loss of 2.3.
I was wondering how TF knows that what I am training it on with my custom label map
ID:1
Name: 'boat'
Is the same as what it regards as a boat ( with an ID of 9) in the mscoco label map.
Or whether, by using an ID of 1, I am training the model's idea of what a person looks like to be a boat?
Thank you in advance for any advice. | so I managed to figure out the issue.
We created the annotation tool from scratch, and the issue that was causing underfitting whenever we trained, regardless of the number of steps or the various fixes I tried to implement, was that when creating bounding boxes there was no check to verify that the xmin and ymin coordinates were less than the xmax and ymax. I did not realize this would be such a large issue, but after adding a very simple check to ensure the coordinates are correct, training ran smoothly. | 0 | false | 2 | 6,337
2019-10-11 00:57:45.870 | Warehouse routes between each started workorder in production order | I'm working with odoo11 community version and currently I have some problem.
This is my explanation of the problem:
In company I have many workcenters, and for each workcenter:
1) I want to create separate warehouse for each workcenter
or
2) Just 1 warehouse but different storage areas for each workcenter
(currently I implemented the second option) and each workcenter has its own operation type: Production
Now my problem starts. There are manufacturing orders, and each manufacturing order has a few workorders. I want that when a workorder is started, the products are moved to that workcenter's warehouse/storage area, and they stay there until the next workorder, which uses a different workcenter, starts; then the products are moved to the next workcenter's warehouse/storage area.
Currently I can only set it up so that, after creating a new sale order, the production order is sent to the first workcenter's storage area and stays there until all workorders in the production order are finished. I don't know how to trigger move routes between the workcenters' storage areas for products that are still in the production stage.
Can I do this from the Odoo GUI, or do I need to do this somewhere in code? | Ok, I found my answer: to accomplish what I wanted I need to use Manufacturing with a multi-level Bill of Materials. It works in such a way that, theoretically, a 3-step manufacturing order is divided into 3 single manufacturing orders with 1 step each, and, for example, production orders 2 and 3, which before were steps 2 and 3, use as components the products finished in the previous step, which is now an individual order. | 1.2 | true | 1 | 6,338
2019-10-11 03:40:50.427 | How to Connect Django with Python based Crawler machine? | Good day folks
Recently, I made a Python-based web crawler machine that scrapes some news articles, and a Django web page that collects a search title and URL from users.
But I do not know how to connect the Python-based crawler machine and the Django web page together, so I am looking for any good resources that I can reference.
If anyone knows the resource that I can reference,
Could you guys share those?
Thanks | There are numerous ways you could do this.
You could directly integrate them together. Both use Python, so the scraper would just be written as part of Django.
You could have the scraper feed the data to a database and have Django read from that database.
You could build an API from the scraper to your Django implementation.
There are quite a few options for you depending on what you need. | 1.2 | true | 1 | 6,339 |
2019-10-11 08:52:05.923 | Is it possible to make a mobile app in Django? | I was wondering if it is possible for me to use Django code I have for my website and somehow use that in a mobile app, in a framework such as, for example, Flutter.
So is it possible to use the Django backend I have right now and use it in a mobile app?
So like the models, views etc... | Yes. There are a couple ways you could do it
Use the Django Rest Framework to serve as the backend for something like React Native.
Build a traditional website for mobile and then run it through a tool like PhoneGap.
Use the standard Android app tools and use Django to serve and process data through API requests. | 1.2 | true | 1 | 6,340 |
2019-10-11 09:16:29.590 | how to simulate mouse hover in robot framework on a desktop application | Can anyone please let me know how to simulate a mouse hover event using Robot Framework on a desktop application, i.e. if I hover over a specific item or object, the submenus are listed and I need to select one of the submenu items. | It depends on the automation library that you are using to interact with the desktop application.
The normal approach is the following:
Find the element that you want to hover on (By ID or some other unique locator)
Get the attribute position of the element (X,Y)
Move your mouse to that position.
In this way you don't "hardcode" the x,y position, which would make your test case flaky. | 0 | false | 1 | 6,341
2019-10-11 13:46:11.070 | I have a network with 3 features and 4 vector outputs. How is MSE and accuracy metric calculated? | I understand how it works when you have one column output but could not understand how it is done for 4 column outputs. | It’s not advised to calculate accuracy for continuous values. For such values you would want to calculate a measure of how close the predicted values are to the true values. This task of prediction of continuous values is known as regression. And generally R-squared value is used to measure the performance of the model.
If the predicted output is of continuous values then mean square error is the right option
For example:
Predicted o/p vector1-----> [2,4,8] and
Actual o/p vector1 -------> [2,3.5,6]
1.Mean squared error is ((2-2)^2 + (4-3.5)^2 + (8-6)^2) / 3; take the square root of that if you want RMSE.
2.Mean absolute error..etc.
(2)if the output is of classes then accuracy is the right metric to decide on model performance
Predicted o/p vector1-----> [0,1,1]
Actual o/p vector1 -------> [1,0,1]
Then accuracy calculation can be done with following:
1.Classification Accuracy
2.Logarithmic Loss
3.Confusion Matrix
4.Area under Curve
5.F1 Score | 0.386912 | false | 1 | 6,342 |
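Assuming the 4 outputs are continuous values, a small numpy sketch of MSE over a 4-column output, with made-up numbers:
import numpy as np

y_true = np.array([[2.0, 3.5, 6.0, 1.0],
                   [1.0, 0.5, 2.0, 4.0]])
y_pred = np.array([[2.0, 4.0, 8.0, 1.5],
                   [0.5, 1.0, 2.0, 3.0]])
mse = np.mean((y_true - y_pred) ** 2)    # averaged over every row and every output column
print(mse)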
2019-10-11 13:57:41.357 | What are the types of Python operators? | I tried type(+) hoping to learn more about how this operator is represented in Python, but I got SyntaxError: invalid syntax.
My main problem is to turn a string representing an operation, "3+4", into the real operation to be computed in Python (so as to get an int as the return value: 7).
I am also trying to avoid easy solutions requiring the os library if possible. | Operators don't really have types, as they aren't values. They are just syntax whose implementation is often defined by a magic method (e.g., + is defined by the appropriate type's __add__ method).
You have to parse your string:
First, break it down into tokens: ['3', '+', '4']
Then, parse the token string into an abstract syntax tree (i.e., something at stores the idea of + having 3 and 4 as its operands).
Finally, evaluate the AST by applying functions stored at a node to the values stored in its children. | 1.2 | true | 1 | 6,343 |
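A minimal sketch of that tokenize/parse/evaluate pipeline using Python's own ast module (assumes Python 3.8+ for ast.Constant); the operator table only covers the four basic operations:
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):       # the numbers 3 and 4
        return node.value
    if isinstance(node, ast.BinOp):          # the '+' with its two operands
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError("unsupported expression")

print(evaluate(ast.parse("3+4", mode="eval")))   # 7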
2019-10-12 16:46:51.550 | How to rotate a object trail in vpython? | I want to write a program to simulate 5-axis cnc gcode with vpython and I need to rotate trail of the object that's moving. Any idea how that can be done? | It's difficult to know exactly what you need, but if instead of using "make_trail=True" simply create a curve object to which you append points. A curve object named "c" can be rotated using the usual way to rotate an object: c.rotate(.....). | 0 | false | 1 | 6,344 |
2019-10-13 10:37:11.607 | How to extract/cut out parts of images classified by the model? | I am new to deep learning, I was wondering if there is a way to extract parts of images containing the different label and then feed those parts to different model for further processing?
For example,consider the dog vs cat classification.
Suppose the image contains both cat and dog.
We successfully classify that the image contains both, but how can we classify the breed of the dog and cat present?
The approach I thought of was,extracting/cutting out the parts of the image containing dog and cat.And then feed those parts to the respective dog breed classification model and cat breed classification model separately.
But I have no clue on how to do this. | Your thinking is correct, you can have multiple pipelines based on the number of classes.
Training:
Main model will be an object detection and localization model like Faster RCNN, YOLO, SSD etc trained to classify at a high level like cat and dog. This pipeline provides you bounding box details (left, bottom, right, top) along with the labels.
Sub models will be multiple models trained at a lower level, for example a model that is trained to classify the breed. This can be done by using models like VGG, ResNet, Inception etc. You can utilize transfer learning here.
Inference: Pass the image through Main model, crop out the detection objects using bounding box details (left, bottom, right, top) and based on the label information, feed it appropriate sub model and extract the results. | 1.2 | true | 1 | 6,345 |
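A minimal sketch of the cropping step in that inference pipeline, assuming the image is a numpy array and the detector returns pixel box coordinates; the array and boxes here are stand-ins:
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a real photo
detections = [{"label": "dog", "left": 50, "top": 80, "right": 300, "bottom": 400}]

for det in detections:
    crop = image[det["top"]:det["bottom"], det["left"]:det["right"]]
    # feed `crop` to the breed classifier that matches det["label"]
    print(det["label"], crop.shape)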
2019-10-13 14:56:02.397 | Creating dask_jobqueue schedulers to launch on a custom HPC | I'm new to dask and trying to use it in our cluster which uses NC job scheduler (from Runtime Design Automation, similar to LSF). I'm trying to create an NCCluster class similar to LSFCluster to keep things simple.
What are the steps involved in creating a job scheduler for custom clusters?
Is there any other way to interface dask to custom clusters without using JobQueueCluster?
I could find info on how to use the LSFCluster/PBSCluster/..., but couldn't find much information on creating one for a different HPC.
Any links to material/examples/docs will help
Thanks | Got it working after going through the source code.
Tips for anyone trying:
Create a customCluster & customJob class similar to LSFCluster & LSFJob.
Override the following
submit_command
cancel_command
config_name (you'll have to define it in the jobqueue.yaml)
Depending on the cluster, you may need to override the _submit_job, _job_id_from_submit_output and other functions.
Hope this helps. | 1.2 | true | 1 | 6,346 |
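A rough skeleton of that subclassing approach, following how the bundled clusters are structured around dask_jobqueue's Job and JobQueueCluster classes; the NC commands themselves are assumptions, not the real NC syntax:
from dask_jobqueue.core import Job, JobQueueCluster

class NCJob(Job):
    submit_command = "nc run"        # assumption: replace with the real NC submit command
    cancel_command = "nc cancel"     # assumption: replace with the real NC cancel command
    config_name = "nc"               # must also be declared in jobqueue.yaml

class NCCluster(JobQueueCluster):
    job_cls = NCJob
    config_name = "nc"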
2019-10-13 23:43:47.973 | How to run a python script using an anaconda virtual environment on mac | I am trying to get some code working on mac and to do that I have been using an anaconda virtual environment. I have all of the dependencies loaded as well as my script, but I don't know how to execute my file in the virtual environment on mac. The python file is on my desktop so please let me know how to configure the path if I need to. Any help? | If you have a terminal open and are in your virtual environment then simply invoking the script should run it in your environment. | 1.2 | true | 1 | 6,347 |
2019-10-14 15:59:45.917 | Dynamically Injecting User Input Values into Python code on AWS? | I am trying to deploy a Python webapp on AWS that takes a USERNAME and PASSWORD as input from a user, inputs them into a template Python file, and logs into their Instagram account to manage it automatically.
In Depth Explanation:
I am relatively new to AWS and am really trying to create an elaborate project so I can learn. I was thinking of somehow receiving the user input on a simple web page with two text boxes to input their Instagram account info (username & pass). Upon receiving this info, my instinct tells me that I could somehow use Lambda to quickly inject it into specific parts of an already existing template.py file, which will then be taken and combined with the rest of the source files to run the code. These source files could be stored somewhere else on AWS (S3?). I was thinking of running this using Elastic Beanstalk.
I know this is awfully involved, but my main issue is this whole dynamic injection thing. Any ideas would be so greatly appreciated. In the meantime, I will be working on it. | One way in which you could approach this would be to have a hosted website on a static S3 bucket. Then, when a request is submitted, it goes to an API Gateway POST endpoint, which could then trigger a lambda (in any language of choice), passing in the two values.
These would then be passed into the event object of the lambda; you could store them inside Secrets Manager using the username as the key name so you can reference them later on. Storing them inside a file inside a lambda is not a good approach to take.
Using this way you'd learn some key services:
S3 + Static website Hosting
API Gateway
Lambdas
Secrets Manager
You could also add aliases/versions to the lambda, such as dev or production, and apply the same concept to API Gateways with stages to emulate doing a deployment.
However there are hundreds of different ways to also design it. And this is only one of them! | 0 | false | 1 | 6,348 |
2019-10-14 18:35:11.307 | how do I locate the btn by class name? | I have this html code:
<button class="_2ic5v"><span aria-label="Like" class="glyphsSpriteComment_like u-__7"></span></button>
I am trying to locate all the elements that match this class with Python and the Selenium webdriver library:
likeBtn = driver.find_elements_by_class_name('_2ic5v')
but when I print
likeBtn
it prints
[]
I want to locate all of the buttons that match this div/span class, or aria-label.
how do I do that successfully? Thanks in advance
update - when I copy the XPath from the page, the print stays the same | Is the button class name dynamic or static?
What if you try choosing By.CSS_SELECTOR instead?
You can find the element by copying its selector in the element inspector | 0 | false | 1 | 6,349
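Since a class like _2ic5v is usually auto-generated, a hedged sketch that targets the stable aria-label instead and waits for the elements to load; driver is the existing webdriver from the question:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
like_spans = wait.until(EC.presence_of_all_elements_located(
    (By.CSS_SELECTOR, "span[aria-label='Like']")))
like_buttons = [span.find_element(By.XPATH, "..") for span in like_spans]   # the parent <button>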
2019-10-15 06:25:59.843 | Trying to find text in an article that may contain quotation marks | I'm using python's findall function with a reg expression that should work but can't get the function to output results with quotation marks in them ('").
This is what I tried:
Description = findall('<p>([A-Za-z ,\.\—'":;0-9]+).</p>\n', text)
The quotation marks inside the regex are creating the hassle and I have no idea how to get around it. | Placing a backslash before the single quote, as Sachith Rukshan suggested, makes it work | 1.2 | true | 1 | 6,350
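For reference, two equivalent ways to write the pattern: escape the single quote, or wrap the pattern in double quotes and escape those instead (a made-up text sample is included so it runs):
from re import findall

text = "<p>He said 'hi', \"ok\": fine.</p>\n"
description = findall(r'<p>([A-Za-z ,\.\—\'":;0-9]+).</p>\n', text)    # backslash before the '
description = findall(r"<p>([A-Za-z ,\.\—'\":;0-9]+).</p>\n", text)    # double-quoted pattern instead
print(description)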
2019-10-16 08:45:58.897 | How to design realtime deeplearnig application for robotics using python? | I have created a machine learning software that detects objects(duh!), processes the objects based on some computer vision parameters and then triggers some hardware that puts the object in the respective bin. The objects are placed on a conveyer belt and a camera is mounted at a point to snap pictures of objects(one object at a time) when they pass beneath the camera. I don't have control over the speed of the belt.
Now, the challenge is that I have to configure a ton of things to make the machine work properly.
The first problem is the time the model takes to create segmentation masks; it varies from one object to another.
Another issue is how do I maintain signals that are generated after computer vision processing, send them to actuators in a manner that it won't get misaligned with the computer vision-based inferencing.
My initial design includes creating processes responsible for a specific task and then make them communicate with one other as per the necessity. However, the problem of synchronization still persists.
As of now, I am thinking of treating the software stack as a group of services as we usually do in backend and make them communicate using something like celery and Redis queue.
I am a kind of noob in system design, come from a background of data science. I have explored python's multithreading module and found it unusable for my purpose(all threads run on single core). I am concerned if I used multiprocessing, there could be additional delays in individual processes due to messaging and thus, that would add another uncertainty to the program.
Additional Details:
Programming Frameworks and Library: Tensorflow, OpenCV and python
Camera Resolution: 1920P
Maximum Accutuation Speed: 3 triggers/second
Deep Learning Models: MaskRCNN/UNet
P.S: You can also comment on the technologies or the keywords I should search for because a vanilla search yields nothing good. | Let me summarize everything first.
What you want to do
The "object" is on the conveyer belt
The camera will take pictures of the object
MaskRCNN will run to do the analyzing
Here are some problems you're facing
"The first problem is the time model takes to create segmentation masks, it varies from one object to another."
-> if you want to reduce the processing time for each image, then an accelerator (FPGA, Chip, etc) or some acceleration technique is needed. Intel OpenVino and Intel DL stick is a good start.
-> if there are too many pictures to process then you'll have 2 choices: 1) put a lot of machines so all the job can be done or 2) select only the important job and discard others. The fact that you set the "Maximum Accutuation" to a fixed number (3/sec) made me think that this is the problem you're facing. A background subtractor is a good start for creating images capture triggers.
"Another issue is how do I maintain signals that are generated after computer vision processing, send them to actuators in a manner that it won't get misaligned with the computer vision-based inferencing."
-> a "job distributor" like Celery is a good choice here. If messages stack up inside the broker (Redis), then some tasks will have to wait. But this can easily be handled by scaling up your computer.
Just a few advice here:
a vision system also includes the hardware parts, so a hardware specification is a must.
Clarify the requirements
Impossible things do exist, so sometimes you could reduce some factors (reliable, cost) of your project. | 1.2 | true | 1 | 6,351 |
2019-10-16 13:41:40.667 | Is there another way to plot a graph in python without matplotlib? | As the title says, that's basically it. I have tried to install matplotlib already but:
I am on Windows and "sudo" doesn't work
Every solution and answers on Stack Overflow regarding matplotlib (or some other package) not being able to be installed doesn't work for me...
I get "Error Code 1"
So! Is there any other way to plot a graph in python without matplotlib? If not, can I have help with how to install matplotlib, successfully? | in cmd (coammand prompt) type pip install matplotlib | -0.386912 | false | 1 | 6,352 |
2019-10-17 06:41:12.867 | File related operations python subprocess vs. native python | I have a simple task I want to perform over ssh: return all files from a given file list that do not exist.
The way I would go about doing this would be to wrap the following in an ssh session:
for f in ${files}; do stat "$f" > /dev/null; done
The stdout redirect will ignore all good files and then reading the stderr will give me a list of all non found files.
I first thought of using this bash code with the ssh part inside a subprocess.run(..., shell=True) but was discouraged to do so. Instead,paramikowas suggested.
I try to understand why and when native python is better than subprocessing bash
Compatibility with different OSes (not an issue for me as the code is pretty tightly tied to Ubuntu)
Error and exception handling - this one I do get and think it's important, though catching an exception or exit code from subprocess is kinda easy too
The con in my eyes with native python is the need to involve somewhat complicated modules such as paramiko when bash's ssh and stat seem to me as more plain and easy to use
Are there any guidelines for when and how to choose bash over python?
This question is mainly about using a command over ssh, but is relevant for any other command that bash is doing in a short and easy way and python wraps | There are really three choices here: doing something in-process (like paramiko), running ssh directly (with subprocess), and running ssh with the shell (also with subprocess). As a general rule, avoid running the shell programmatically (as opposed to, say, upon interactive user request).
The reason is that it’s a human-oriented interface (thus the easy separation of words with spaces and shortcuts for $HOME and globbing) that is vastly underpowered as an API. Consider, for example, how your code would detect that ssh was missing: the situation doesn’t arise with paramiko (so long as it is installed), is obvious with subprocess, and is just an (ambiguous) exit code and stderr message from the shell. Also consider how you supply the command to run: it already must be a command suitable for the shell (due to limitations in the SSH protocol), but if you invoke ssh with the shell it must be encoded (sometimes called “doubly escaped”) so as to have the local shell’s interpretation be the desired multi-word command for the remote shell.
So far, paramiko and subprocess are pretty much equivalent. As a more difficult case, consider how a key verification failure would manifest: paramiko would describe the failure as data, whereas the others would attempt to interact with the user (which might or might not be present). paramiko also supports opening multiple channels over one authenticated connection; ssh does so as well but only via a complicated ControlMaster configuration involving Unix socket files (which might not have any good place to exist in some deployments). Speaking of configuration, you may need to pass -F to avoid complications from the user’s .ssh/config if it is not designed with this automated use case in mind.
In summary, libraries are designed for use cases like yours, and so it should be no surprise that they work better, especially for edge cases, than assembling your own interface from human-oriented commands (although it is very useful that such manual compositions are possible!). If installing a non-standard dependency like paramiko is a burden, at least use subprocess directly; cutting out the second shell is already a great improvement. | 1.2 | true | 1 | 6,353 |
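A hedged sketch of the original task (listing which files do not exist remotely) done in-process with paramiko's SFTP stat instead of shelling out; the host, user and key setup are placeholders and key-based auth is assumed:
import paramiko

def missing_files(host, user, paths):
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(host, username=user)       # assumes keys are already configured
    sftp = client.open_sftp()
    missing = []
    for path in paths:
        try:
            sftp.stat(path)                   # remote equivalent of stat(1)
        except IOError:                       # raised when the file is absent
            missing.append(path)
    client.close()
    return missing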
2019-10-17 13:02:33.333 | Auto activate virtual environment in Visual Studio Code | I want VS Code to turn venv on run, but I can't find how to do that.
I already tried to add to settings.json this line:
"terminal.integrated.shellArgs.windows": ["source${workspaceFolder}\env\Scripts\activate"]
But it throws me a 127 error code. I found out what code 127 means: "not found". But how can it be not found, if I can see my venv folder right now?
I think it's the terminal's fault. I'm using Win 10 with the Git Bash terminal that comes when you install Git on your machine. | This is how I did it in 2021:
Enter Ctrl+Shift+P in your vs code.
Locate your Virtual Environment:
Python: select interpreter > Enter interpreter path > Find
Once you locate your virtual env select your python version:
your-virtual-env > bin > python3.
Now in your project you will see .vscode directory created open settings.json inside of it and add:
"python.terminal.activateEnvironment": true
don't forget to add a comma before it to separate it from the already present key-value pairs.
Now restart the terminal.
You should see your virtual environment activated automatically. | 1.2 | true | 2 | 6,354 |
2019-10-17 13:02:33.333 | Auto activate virtual environment in Visual Studio Code | I want VS Code to turn venv on run, but I can't find how to do that.
I already tried to add to settings.json this line:
"terminal.integrated.shellArgs.windows": ["source${workspaceFolder}\env\Scripts\activate"]
But it throws me a 127 error code. I found out what code 127 means: "not found". But how can it be not found, if I can see my venv folder right now?
I think it's the terminal's fault. I'm using Win 10 with the Git Bash terminal that comes when you install Git on your machine. | There is a new flag that one can use: "python.terminal.activateEnvironment": true | 0.573727 | false | 2 | 6,354
2019-10-17 14:15:46.367 | Implement 1-ply, 2-ply or 3-ply search td-gammon | I've read some articles and most of them say that 3-ply improves the performance of the self-player train.
But what is this in practice? and how is that implemented? | There is stochasticity in the game because of the dice rolls, so one approach would be evaluate state positions by self play RL, and then while playing do a 2-ply search over all the possible dice combinations. That would be 36 + 6 i.e. 42 possible rolls, and then you have to make different moves that are available which increases the breath of the tree to an insane degree. I tried this and it failed because my Mac could not handle such computation. Instead what we could do is just randomize a few dice rolls and perform a MiniMax tree search with Alpha Beta pruning ( using the AfterState value function).
For a 1 ply search we just use the rolled dice, or if we want to predict the value before we roll the dice then we can simply loop over all the possible combinations. Then we just argmax over the afterstates. | 0 | false | 1 | 6,355 |
2019-10-17 17:03:00.937 | Most efficient way to execute 20+ SQL Files? | I am currently overhauling a project here at work and need some advice. We currently have a morning checklist that runs daily and executes roughly 30 SQL files with 1 select statement each. This is being done in an excel macro which is very unreliable. These statements will be executed against an oracle database.
Basically, if you were re-implementing this project, how would you do it? I have been researching concurrency in Python, but have not had any luck. We will need to capture the results and display them, so please keep that in mind. If more information is needed, please feel free to ask.
Thank you. | There are lots of ways depending on how long the queries run, how much data is output, are there input parameters and what is done to the data output.
Consider:
1. Don't worry about concurrency up front
2. Write a small python app to read in every *.sql file in a directory and execute each one.
3. Modify the python app to summarize the data output in the format that it is needed
4. Modify the python app to save the summary back into the database into a daily check table with the date / time the SQL queries were run. Delete all rows from the daily check table before inserting new rows
5. Have the Excel spreadsheet load it's data from that daily check table including the date / time the data was put in the table
6. If run time is slow, optimize the PL/SQL for the longer running queries
7. If it's still slow, split the SQL files into 2 directories and run 2 copies of the python app, one against each directory.
8. Schedule the python app to run at 6 AM in the Windows task manager. | 0.673066 | false | 1 | 6,356 |
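A bare-bones sketch of steps 2 and 3 above, assuming the cx_Oracle driver and one SELECT per file; the connection string and directory are placeholders:
import glob
import cx_Oracle

conn = cx_Oracle.connect("user/password@dbhost:1521/service")   # placeholder credentials
cursor = conn.cursor()
results = {}
for path in sorted(glob.glob("checks/*.sql")):
    with open(path) as f:
        sql = f.read().strip().rstrip(";")    # cx_Oracle rejects a trailing semicolon
    cursor.execute(sql)
    results[path] = cursor.fetchall()         # summarize or store these as needed
conn.close()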