GUI and Desktop Applications (int64, 0..1) | A_Id (int64, 5.3k..72.5M) | Networking and APIs (int64, 0..1) | Python Basics and Environment (int64, 0..1) | Other (int64, 0..1) | Database and SQL (int64, 0..1) | Available Count (int64, 1..13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0..1.72k) | CreationDate (stringlengths 23..23) | Users Score (int64, -11..327) | AnswerCount (int64, 1..31) | System Administration and DevOps (int64, 0..1) | Title (stringlengths 15..149) | Q_Id (int64, 5.14k..60M) | Score (float64, -1..1.2) | Tags (stringlengths 6..90) | Answer (stringlengths 18..5.54k) | Question (stringlengths 49..9.42k) | Web Development (int64, 0..1) | Data Science and Machine Learning (int64, 1..1) | ViewCount (int64, 7..3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 53,747,543 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-12-12T15:51:00.000 | 1 | 1 | 0 | Is it possible to import bokeh figures from the html file they have been saved in? | 53,746,686 | 1.2 | python-3.x,plot,import,bokeh | As of Bokeh 1.0.2, there is not any existing API for this, and I don't think there is any simple technique that could accomplish this either. I think the only options are: some kind of (probably somewhat fragile) text scraping of the HTML files, or distributing all the HTML files and using something like <iframe> to collect the individual subplot files into one larger view.
Going forward, for reference there is autoload_static that allows plots to be encapsulated in "sidecar" JS files that can be individually distributed and embedded, or there is json_item that produces an isolated JSON representation of the document that can also be individually distributed and embedded. | I've produced a few Bokeh output files as the result of a fairly time-intensive process. It would be really cool to pull the plots together from their respective files and build a new output file where I could visualize them all in a column. I know I should have thought to do this earlier on before producing all the individual plots, but, alas, I did not.
Is there a way to import the preexisting figures so that I can aggregate them together into a new multi-plot output file? | 1 | 1 | 90 |
0 | 53,771,189 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-12-12T16:00:00.000 | 3 | 1 | 0 | Why is watershed function from SciKit too slow? | 53,746,868 | 0.53705 | python,time,native,scikit-image,watershed | It's hard to know without more details why your particular application runs slowly. In general, though, the scikit-image code is not as optimized as OpenCV, but covers many more use cases. For example, it can work with floating point values as input, rather than just uint8, and it can work with 3D or even higher-dimension images.
About the performance: OpenCV is coded in highly optimized C/C++, while scikit-image is coded in Cython, a hybrid language that compiles Python code to C, achieving C-performance. However, several optimizations are not available in Cython, and as I mentioned above, there are differences in what is actually implemented, resulting in a performance difference. | I have made a comparison between the time of execution only for the watershed functions in OpenCV, Skimage (SciPy) and BoofCV. Although OpenCV appears to be much faster than the other two (average time: 0.0031 seconds on 10 samples), Skimage's time of execution varies significantly (from 0.03 to 0.554 seconds). I am wondering why this happens. Isn't it supposed to be a native Python function? | 0 | 1 | 276 |
0 | 59,012,601 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-12-12T17:20:00.000 | 3 | 2 | 0 | how to compare two text document with tfidf vectorizer? | 53,748,236 | 1.2 | python,nltk,cosine-similarity,tfidfvectorizer | As G. Anderson already pointed out, and to help the future guys on this, when we use the fit function of TFIDFVectorizer on document D1, it means that for the D1, the bag of words are constructed.
The transform() function computes the tfidf frequency of each word in the bag of words.
Now our aim is to compare document D2 with D1. It means we want to see how many words of D1 match up with D2. That's why we perform fit_transform() on D1 and then only the transform() function on D2, which applies the bag of words of D1 and counts the inverse frequency of its tokens in D2.
This would give the relative comparison of D1 against D2. | I have two different text which I want to compare using tfidf vectorization.
What I am doing is:
tokenizing each document
vectorizing using TFIDFVectorizer.fit_transform(tokens_list)
Now the vectors that I get after step 2 are of different shape.
But as per the concept, we should have the same shape for both the vectors. Only then the vectors can be compared.
What am I doing wrong? Please help.
Thanks in advance. | 0 | 1 | 2,848 |
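For illustration, a minimal sketch of the fit-on-D1, transform-D2 pattern described in this answer, assuming scikit-learn's TfidfVectorizer and cosine_similarity (the two example documents are made up):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

d1 = "the cat sat on the mat"   # document whose bag of words is learned
d2 = "the dog sat on the log"   # document compared against D1

vectorizer = TfidfVectorizer()
v1 = vectorizer.fit_transform([d1])  # fit: build D1's vocabulary, then vectorize D1
v2 = vectorizer.transform([d2])      # transform only: reuse D1's vocabulary for D2

print(v1.shape, v2.shape)            # same shape, so the vectors are comparable
print(cosine_similarity(v1, v2))     # relative similarity of D2 against D1
```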
0 | 53,748,320 | 0 | 1 | 0 | 0 | 4 | false | 2 | 2018-12-12T17:21:00.000 | 1 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 0.049958 | python,algorithm,time-complexity | Alternative 1 of course, since that only requires you to go through the list once. If you are to sort the list, you have to traverse the list at least once for the sorting, and then some for the search. | Hey guys so I've been studying for an upcoming test and I came across this question:
If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest?
Use linear search on the unsorted list
Use insertion sort to sort the list and then binary search on the sorted list
Wouldn't the second choice be the fastest? Sorting the list and then looking for the value, rather than only using linear search? | 0 | 1 | 1,753 |
0 | 53,748,319 | 0 | 1 | 0 | 0 | 4 | false | 2 | 2018-12-12T17:21:00.000 | 2 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 0.099668 | python,algorithm,time-complexity | sorting a list has a O(log(N)*N) complexity at best.
Linear search has O(N) complexity.
So if you have to search more than once, you begin to gain time after some searches.
If objects are hashable (ex: integers) a nice alternative (when searching more than once only) to sorting+bisection search is to put them in a set. Then complexity is down to O(1) because of hashing, but still O(N) to create it, and the hashing adds to the toll.
If you need only to search once, linear search is the best choice. | Hey guys so I've been studying for an upcoming test and I came across this question:
If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest?
Use linear search on the unsorted list
Use insertion sort to sort the list and then binary search on the sorted list
Wouldn't the second choice be the fastest? Sorting the list and then looking for the value, rather than only using linear search? | 0 | 1 | 1,753 |
0 | 53,748,316 | 0 | 1 | 0 | 0 | 4 | true | 2 | 2018-12-12T17:21:00.000 | 3 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 1.2 | python,algorithm,time-complexity | Linear search takes just O(n), while sorting a list first takes O(n log n). Since you are going to search the list only once for a value, the fact that subsequent searches in the sorted list with a binary search takes only O(log n) does not help overcome the overhead of the O(n log n) time complexity involved in the sorting, and hence a linear search would be more efficient for the task. | Hey guys so I've been studying for an upcoming test and I came across this question:
If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest?
Use linear search on the unsorted list
Use insertion sort to sort the list and then binary search on the sorted list
Wouldn't the second choice be the fastest? Sorting the list and then looking for the value, rather than only using linear search? | 0 | 1 | 1,753 |
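To make the trade-off concrete, a rough timing sketch with only the standard library (one million random floats, a single lookup; exact numbers will vary by machine):

```python
import bisect
import random
import time

data = [random.random() for _ in range(1_000_000)]
target = data[-1]

t0 = time.perf_counter()
_ = target in data                       # linear search: O(n)
t1 = time.perf_counter()

ordered = sorted(data)                   # O(n log n) up-front cost
i = bisect.bisect_left(ordered, target)  # binary search: O(log n)
found = i < len(ordered) and ordered[i] == target
t2 = time.perf_counter()

print(f"linear search:        {t1 - t0:.4f} s")
print(f"sort + binary search: {t2 - t1:.4f} s")
```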
0 | 53,748,430 | 0 | 1 | 0 | 0 | 4 | false | 2 | 2018-12-12T17:21:00.000 | 1 | 4 | 0 | Unsorted Lists vs Linear and Binary Search | 53,748,254 | 0.049958 | python,algorithm,time-complexity | For solving these types of questions, it is simply necessary to see where you'd spend more time. For a million elements:
Insertion sort with 'n' inversions would take O(n) and then it would take an additional O(log(n)) time.
Whereas linear search would take only O(n) time.
Since there is only a single query, method 1 would be a better alternative, but for multiple queries (searching the list for elements) there will be a point where the one-time sorting cost plus x binary searches becomes cheaper than x linear searches, where x is the number of queries. | Hey guys so I've been studying for an upcoming test and I came across this question:
If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest?
Use linear search on the unsorted list
Use insertion sort to sort the list and then binary search on the sorted list
Wouldn't the second choice be the fastest? Sorting the list and then looking for the value, rather than only using linear search? | 0 | 1 | 1,753 |
0 | 53,756,944 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2018-12-13T06:58:00.000 | 1 | 2 | 0 | How to implement machine learning models on mobile phones? | 53,756,524 | 0.099668 | android,python,ios,machine-learning | I think I'm qualified to answer this because it was yesterday that I viewed Google's "DevFestOnAir 2018". There was an "End to End Machine Learning" talk where the speaker mentioned what TensorFlow(TF) has to support AI in mobile devices.
Now, TF is available for JS, Java and many other languages, so this captures the entirety of the model that runs on your PC and uses other functionality to make it run with less RAM and weaker processors. Do check this out. If I'm not wrong, TF has a feature that would do the conversion for you. | I've built machine learning models (Random Forest and XGBoost) in Python or R.
How can I make my model work on a mobile phone (iOS/Android)? Not for training, just to predict the probability for users by properties and events. | 0 | 1 | 525 |
0 | 53,759,191 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2018-12-13T09:12:00.000 | 1 | 3 | 0 | Could you explain me the output of keras at each iteration? | 53,758,399 | 0.066568 | python,machine-learning,keras,deep-learning | As far as I can tell the output of the keras function is a running average loss and the loss is quite a lot larger at the beginning of the epoch, than in the end. The loss is reset after each epoch and a new running average is formed. Therefore, the old running average is quite a bit higher (or at least different), than the beginning loss in the next epoch. | When I train a sequential model with keras using the method fit_generator, I see this output
Epoch 1/N_epochs
n/N [====================>..............] - ETA xxxx - loss: yyyy
I noticed that the loss decreased gradually with the number of steps, as expected. My problem is that I also noticed that when one epoch finishes and another one starts, the value of the loss is quite different from the one that I see at the end of the previous epoch.
Why so? I thought that the epoch and the number of steps per epoch were arbitrary values, and that using, for instance, 10 epochs with 1000 steps should be the same as 1000 epochs with 10 steps. But what exactly happens between one epoch and the next one in Keras 2.0?
Disclaimer: I know the definition of epoch and how the number of steps should be decided using a batch generator, but I have too many data and I cannot apply this rule. | 0 | 1 | 740 |
0 | 53,759,336 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2018-12-13T09:12:00.000 | 4 | 3 | 0 | Could you explain me the output of keras at each iteration? | 53,758,399 | 1.2 | python,machine-learning,keras,deep-learning | The loss that Keras calculates during the epoch is accumulated and estimated online. So it includes the loss from the model after different weight updates.
Let's clarify with an easy case: assume for a second that the model is only improving (every weight update results in better accuracy and loss), and that each epoch contains 2 weight updates (each mini-batch is half the training dataset).
At epoch X, the first mini-batch is processed and the result is a loss score 2.0.
After updating the weights, the model runs its second mini-batch, which results in a loss score of 1.0 (for just that mini-batch). However, you will see the displayed loss change from 2.0 to 1.5 (the average over the whole dataset).
Now we start epoch X+1, but it happens after another weight update which leads to a loss of 0.8 over the first mini-batch, which is shown to you. And so on and on...
The same thing happens during your training, only that obviously, not all changes are positive. | When I train a sequential model with keras using the method fit_generator, I see this output
Epoch 1/N_epochs
n/N [====================>..............] - ETA xxxx - loss: yyyy
I noticed that the loss decreased gradually with the number of steps, as expected. My problem is that I also noticed that when one epoch finishes and another one starts, the value of the loss is quite different from the one that I see at the end of the previous epoch.
Why so? I thought that the epoch and the number of steps per epoch were arbitrary values, and that using, for instance, 10 epochs with 1000 steps should be the same as 1000 epochs with 10 steps. But what exactly happens between one epoch and the next one in Keras 2.0?
Disclaimer: I know the definition of epoch and how the number of steps should be decided using a batch generator, but I have too many data and I cannot apply this rule. | 0 | 1 | 740 |
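The bookkeeping in this answer can be illustrated with plain Python (toy loss numbers, not from a real model): the progress bar shows a running average that resets at each epoch boundary.

```python
def running_average(batch_losses):
    # what the Keras progress bar displays after each mini-batch
    return [sum(batch_losses[:i]) / i for i in range(1, len(batch_losses) + 1)]

epoch_1 = [2.0, 1.0]   # per-batch losses, with the model improving after each update
epoch_2 = [0.8, 0.6]

print(running_average(epoch_1))  # [2.0, 1.5] -> 1.5 is shown at the end of epoch 1
print(running_average(epoch_2))  # [0.8, 0.7] -> apparent "jump" when epoch 2 starts
```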
0 | 53,821,015 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2018-12-14T03:59:00.000 | 1 | 1 | 0 | Can sklearn.preprocessing.KBinsDiscretizer with strategy='quantile' drop the duplicated bins? | 53,773,352 | 1.2 | python-2.7,scikit-learn | That will not be possible. Set strategy='uniform' to achieve your goal. | I used sklearn.preprocessing.KBinsDiscretizer(n_bins=10, encode='ordinal') to discretize my continuous feature.
The strategy is 'quantile' by default. But my data distribution is actually not uniform: about 70% of the rows are 0.
Then I got KBinsDiscretizer.bins_edges=[0.,0.,0.,0.,0.,0.,0.,256.,602., 1306., 18464.].
There are many duplicate bins. So, is there a method to drop the duplicates in KBinsDiscretizer's bins?
KBinsDiscretizer calculates the quantiles of the input. If most samples of the input are zero, the 10 quantiles will contain multiple zeros. The result I expected is a discretizer with unique bins; for the example I mentioned, that is [0., 256., 602., 1306., 18464.]. | 0 | 1 | 858 |
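A hedged sketch of the workaround, assuming scikit-learn's KBinsDiscretizer: fit as usual, then deduplicate the learned edges yourself with np.unique (the zero-heavy data here is synthetic).

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

# ~70% zeros plus a few large values, mimicking the distribution described above
X = np.array([0.0] * 70 + [256, 400, 602, 900, 1306, 18464] * 5).reshape(-1, 1)

est = KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="quantile")
est.fit(X)

edges = est.bin_edges_[0]          # one array of edges per feature
unique_edges = np.unique(edges)    # drops the repeated zero edges
print(edges)
print(unique_edges)                # e.g. use np.digitize(X, unique_edges) afterwards
```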
0 | 53,821,335 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2018-12-17T12:36:00.000 | 5 | 1 | 0 | Value of alpha in gensim word-embedding (Word2Vec and FastText) models? | 53,815,402 | 1.2 | python-3.x,gensim,word2vec,word-embedding,fasttext | The default starting alpha is 0.025 in gensim's Word2Vec implementation.
In the stochastic gradient descent algorithm for adjusting the model, the effective alpha affects how strong of a correction to the model is made after each training example is evaluated, and will decay linearly from its starting value (alpha) to a tiny final value (min_alpha) over the course of all training.
Most users won't need to adjust these parameters, or might only adjust them a little, after they have a reliable repeatable way of assessing whether a change improves their model on their end tasks. (I've seen starting values of 0.05 or less commonly 0.1, but never as high as your reported 0.5.) | I just want to know the effect of the value of alpha in the gensim word2vec and fasttext word-embedding models. I know that alpha is the initial learning rate and that its default value is 0.075 from Radim's blog.
What if I change this to a somewhat higher value, i.e. 0.5 or 0.75? What will be its effect? Is it allowed to change it? I have changed this to 0.5 and experimented on large-sized data with D = 200, window = 15, min_count = 5, iter = 10, workers = 4, and the results are pretty much meaningful for the word2vec model. However, using the fasttext model, the results are a bit scattered, meaning less related words and unpredictable high/low similarity scores.
Why this imprecise result for the same data with two popular models? Does the value of alpha play such a crucial role when building the model?
Any suggestion is appreciated. | 0 | 1 | 2,594 |
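For reference, a small sketch of where these knobs live in gensim (parameter names as in gensim 3.x, matching the question's settings; in gensim 4+ `size` is `vector_size` and `iter` is `epochs`; the toy corpus is made up):

```python
from gensim.models import Word2Vec

sentences = [["i", "like", "playing", "football"],
             ["football", "is", "played", "with", "friends"]]

model = Word2Vec(sentences,
                 size=200,          # embedding dimensionality (D)
                 window=15,
                 min_count=1,
                 alpha=0.025,       # starting learning rate (the default)
                 min_alpha=0.0001,  # value it decays to linearly over training
                 iter=10,
                 workers=4)
print(model.wv.most_similar("football"))
```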
0 | 53,818,650 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-17T14:56:00.000 | 0 | 2 | 0 | Matplotlib erase figure and plot new series of subplots | 53,817,735 | 0 | python,matplotlib | Call plt.show() before the 10th chart, then start over with plt.subplot(3, 3, 1), followed by the code to plot the 10th chart | I want to make a series of figures with 3x3 subplots using matplotlib. I can make the first figure fine (9 total subplots), but when I try to make a tenth subplot I get this error: ValueError: num must be 1 <= num <= 9, not 10. What I think I want to do is plot the first 9 subplots, clear the figure, and then plot the next 9 subplots. I haven't been able to get this approach to work so far though. If anyone could offer some suggestions I would really appreciate it!
Thanks! | 0 | 1 | 84 |
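A minimal sketch of that approach: loop over all charts, start a fresh 3x3 figure every 9 subplots, and show (or save) the previous figure first (the random data is just a placeholder).

```python
import matplotlib.pyplot as plt
import numpy as np

n_charts = 25
for i in range(n_charts):
    slot = i % 9
    if slot == 0 and i > 0:
        plt.show()                  # or plt.savefig(f"page_{i // 9}.png"); plt.clf()
    if slot == 0:
        plt.figure(figsize=(9, 9))  # start the next 3x3 page
    plt.subplot(3, 3, slot + 1)
    plt.plot(np.random.rand(10))
    plt.title(f"chart {i + 1}")
plt.show()                          # show the last, possibly partially filled page
```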
0 | 53,825,670 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2018-12-18T02:24:00.000 | 1 | 1 | 0 | How to use cross validation after imputing on a training and validation set? | 53,825,586 | 0.197375 | python,cross-validation,imputation | Generally, you'll want to split your data into three sets- a training set, testing set, and validation set. The testing set should be completely left out of training (your concern is correct.) When using cross validation, you don't need to worry about splitting your training and validation set- that's what cross validation does for you! Simply pass the training set to the cross validator, allow it to split into training and validation behind the scenes, and test the final model on your testing set (which has been completely left out of the training process.) | So I've gotten myself a little confused.
At the moment, I've got a dataset of about 800 instances. I've split it into a training and validation set because there were missing values so I used SimpleImputer from sklearn and fit_transform-ed the training set and transformed the testing set. I did that because if I want to predict for new instances, if there's missing values then I'll need to impute it the same way I imputed the test set.
Now I want to use cross validation to train and score models, but that would involve using the whole dataset and splitting it up into different training and testing sets, so then I'm worried about leakage from the training set because of the imputed values being fitted? | 0 | 1 | 726 |
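One common way to keep the imputation inside each training fold, and so avoid the leakage the asker is worried about, is to wrap the imputer and the model in a scikit-learn Pipeline and cross-validate the pipeline as a whole; a sketch with synthetic data:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.rand(800, 5)
X[rng.rand(800, 5) < 0.1] = np.nan               # ~10% missing values
y = rng.randint(0, 2, 800)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),   # refit on each fold's training part
    ("model", LogisticRegression(max_iter=1000)),
])
print(cross_val_score(pipe, X, y, cv=5))
```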
0 | 53,830,482 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2018-12-18T09:50:00.000 | 1 | 2 | 0 | How to obtain small bins after FFT in python? | 53,830,329 | 1.2 | python,signal-processing,fft | Even if you use another transform, that will not make more data.
If you have a sampling of 1kHz and 2s of samples, then your precision is 0.5Hz. You can interpolate this with chirpz (or just use sinc(), that's the shape of your data between the samples of your comb), but the data you have on your current point is the data that determines what you have in the lobes (between 0Hz and 0.5Hz).
If you want a real precision of 0.1Hz, you need 10s of data. | I'm using scipy.signal.fft.rfft() to calculate power spectral density of a signal. The sampling rate is 1000Hz and the signal contains 2000 points. So frequency bin is (1000/2)/(2000/2)=0.5Hz. But I need to analyze the signal in [0-0.1]Hz.
I saw several answers recommending chirp-Z transform, but I didn't find any toolbox of it written in Python.
So how can I complete this small-bin analysis in Python? Or can I just filter this signal to [0-0.1]Hz using something like a Butterworth filter?
Thanks a lot! | 0 | 1 | 279 |
0 | 53,860,082 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-12-18T09:50:00.000 | 1 | 2 | 0 | How to obtain small bins after FFT in python? | 53,830,329 | 0.099668 | python,signal-processing,fft | You can't get smaller frequency bins to separate out close spectral peaks unless you use more (a longer amount of) data.
You can't just use a narrower filter because the transient response of such a filter will be longer than your data.
You can get smaller frequency bins that are just a smooth interpolation between nearby frequency bins, for instance to plot the spectrum on wider paper or at a higher dpi graphic resolution, by zero-padding the data and using a longer FFT. But that won't create more detail. | I'm using scipy.signal.fft.rfft() to calculate power spectral density of a signal. The sampling rate is 1000Hz and the signal contains 2000 points. So frequency bin is (1000/2)/(2000/2)=0.5Hz. But I need to analyze the signal in [0-0.1]Hz.
I saw several answers recommending chirp-Z transform, but I didn't find any toolbox of it written in Python.
So how can I complete this small-bin analysis in Python? Or can I just filter this signal to [0-0.1]Hz using something like a Butterworth filter?
Thanks a lot! | 0 | 1 | 279 |
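A small numpy sketch of the point made in both answers: zero-padding the FFT gives a finer-looking grid below 0.5 Hz, but it is only an interpolation; real 0.1 Hz resolution needs a longer recording (the test signal here is made up).

```python
import numpy as np

fs = 1000                          # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)        # 2 s of data -> native bin width 1/T = 0.5 Hz
x = np.sin(2 * np.pi * 0.3 * t)    # low-frequency test signal

n_fft = 16 * len(x)                # zero-pad to 16x the original length
spec = np.fft.rfft(x, n=n_fft)
freqs = np.fft.rfftfreq(n_fft, d=1 / fs)   # bins now fs/n_fft = 0.03125 Hz apart

low = freqs <= 0.5
print(freqs[low])                  # finer grid in [0, 0.5] Hz ...
print(np.abs(spec[low]))           # ... but it is a smooth interpolation, not new detail
```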
0 | 53,830,550 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-18T09:51:00.000 | 0 | 1 | 0 | Centroid of a contour | 53,830,357 | 0 | python,opencv | You can use cv2.connectedComponentsWithStats it return the centroid and size of contour. | In OpenCV under Python, is there no better way to compute the centroid of the inside a contour than with the function cv2.moments, which computes all moments up to order 3 (and is overkill) ? | 0 | 1 | 153 |
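For comparison, a sketch of both options on a synthetic mask (OpenCV assumed installed; the [-2] index on findContours is used because its return signature differs between OpenCV 3 and 4):

```python
import cv2
import numpy as np

mask = np.zeros((100, 100), np.uint8)
cv2.circle(mask, (40, 60), 15, 255, -1)          # one filled blob

# Option 1: cv2.moments on a single contour
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
m = cv2.moments(contours[0])
print(m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid (x, y)

# Option 2: centroids of all connected components at once
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
print(centroids[1])                              # index 0 is the background
```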
0 | 53,831,640 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-18T11:01:00.000 | 1 | 2 | 0 | Sum of different CSV columns in python | 53,831,555 | 0.099668 | python,csv | Why don't you :
create a columnTotal integer array (one index for each column).
read the file line by line, per line:
Split the line using the comma as separator
Convert the splitted string parts to integers
Add the value of each column to the columnTotal array's colum index. | I am quite new to Python and therefore this might seem easy but I am really stuck here.
I have a CSV file with values in a [525599 x 74] matrix. For each column of the 74 columns I would like to have the total sum of all 525599 values saved in one list.
I could not figure out the right way to iterate over each column and save the sum of each column in a list. | 0 | 1 | 106 |
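A minimal sketch following those steps with the standard csv module (the file name "data.csv" and the comma delimiter are assumptions):

```python
import csv

NUM_COLS = 74
column_total = [0.0] * NUM_COLS             # one running total per column

with open("data.csv", newline="") as f:
    reader = csv.reader(f)                  # pass delimiter=";" etc. if needed
    for row in reader:
        for i, cell in enumerate(row[:NUM_COLS]):
            column_total[i] += float(cell)  # convert each value and accumulate

print(column_total)                         # 74 sums, one per column
```

If the file has no header row, pandas gives the same list in one line: pd.read_csv("data.csv", header=None).sum().tolist().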
0 | 53,844,061 | 0 | 1 | 0 | 0 | 2 | true | 2 | 2018-12-18T23:09:00.000 | 1 | 2 | 0 | install numpy on python 3.5 Mac OS High sierra | 53,842,426 | 1.2 | python,python-3.x,macos,numpy | First, you need to activate the virtual environment for the version of python you wish to run. After you have done that then just run "pip install numpy" or "pip3 install numpy".
If you used Anaconda to install python then, after activating your environment, type conda install numpy. | I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work.
I have it on python2.7, but I would also like to install it for the next versions.
Currently, I have installed python 2.7, python 3.5, and python 3.7.
I tried to install numpy using:
brew install numpy --with-python3 (no error)
sudo port install py35-numpy@1.15.4 (no error)
sudo port install py37-numpy@1.15.4 (no error)
pip3.5 install numpy (gives "Could not find a version that satisfies the requirement numpy (from versions: )
No matching distribution found for numpy" )
I can tell that it is not installed because when I type python3 and then import numpy as np gives "ModuleNotFoundError: No module named 'numpy'"
Any ideas on how to make it work?
Thanks in advance. | 0 | 1 | 3,167 |
0 | 53,928,674 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2018-12-18T23:09:00.000 | 0 | 2 | 0 | install numpy on python 3.5 Mac OS High sierra | 53,842,426 | 0 | python,python-3.x,macos,numpy | If running pip3.5 --version or pip3 --version works, what is the output when you run pip3 freeze? If there is no output, it indicates that there are no packages installed for the Python 3 environment and you should be able to install numpy with pip3 install numpy. | I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work.
I have it on python2.7, but I would also like to install it for the next versions.
Currently, I have installed python 2.7, python 3.5, and python 3.7.
I tried to install numpy using:
brew install numpy --with-python3 (no error)
sudo port install py35-numpy@1.15.4 (no error)
sudo port install py37-numpy@1.15.4 (no error)
pip3.5 install numpy (gives "Could not find a version that satisfies the requirement numpy (from versions: )
No matching distribution found for numpy" )
I can tell that it is not installed because when I type python3 and then import numpy as np gives "ModuleNotFoundError: No module named 'numpy'"
Any ideas on how to make it work?
Thanks in advance. | 0 | 1 | 3,167 |
0 | 59,532,530 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2018-12-19T01:32:00.000 | 1 | 5 | 0 | Spark copying dataframe columns best practice in Python/PySpark? | 53,843,406 | 0.039979 | python,apache-spark,pyspark | Use dataframe.withColumn() which Returns a new DataFrame by adding a column or replacing the existing column that has the same name. | This is for Python/PySpark using Spark 2.3.2.
I am looking for best practice approach for copying columns of one data frame to another data frame using Python/PySpark for a very large data set of 10+ billion rows (partitioned by year/month/day, evenly). Each row has 120 columns to transform/copy. The output data frame will be written, date partitioned, into another parquet set of files.
Example schema is:
input DFinput (colA, colB, colC) and
output DFoutput (X, Y, Z)
I want to copy DFInput to DFOutput as follows (colA => Z, colB => X, colC => Y).
What is the best practice to do this in Python Spark 2.3+ ?
Should I use DF.withColumn() method for each column to copy source into destination columns?
Will this perform well given billions of rows each with 110+ columns to copy?
Thank you | 0 | 1 | 9,487 |
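A hedged sketch of that renaming copy in PySpark, written as a single select with aliases (often preferable to chaining many withColumn calls when there are 100+ columns); the toy DataFrame and mapping are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df_input = spark.createDataFrame([(1, 2, 3)], ["colA", "colB", "colC"])

mapping = {"colA": "Z", "colB": "X", "colC": "Y"}        # source -> destination names
df_output = df_input.select([F.col(src).alias(dst) for src, dst in mapping.items()])

df_output.show()
# For the real job, write the result date-partitioned, e.g.:
# df_output.write.partitionBy("year", "month", "day").parquet("s3://bucket/path/")
```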
0 | 53,884,751 | 0 | 0 | 0 | 0 | 2 | false | 5 | 2018-12-19T01:32:00.000 | 0 | 5 | 0 | Spark copying dataframe columns best practice in Python/PySpark? | 53,843,406 | 0 | python,apache-spark,pyspark | Bit of a noob on this (python), but might it be easier to do that in SQL (or what ever source you have) and then read it into a new/separate dataframe? | This is for Python/PySpark using Spark 2.3.2.
I am looking for best practice approach for copying columns of one data frame to another data frame using Python/PySpark for a very large data set of 10+ billion rows (partitioned by year/month/day, evenly). Each row has 120 columns to transform/copy. The output data frame will be written, date partitioned, into another parquet set of files.
Example schema is:
input DFinput (colA, colB, colC) and
output DFoutput (X, Y, Z)
I want to copy DFInput to DFOutput as follows (colA => Z, colB => X, colC => Y).
What is the best practice to do this in Python Spark 2.3+ ?
Should I use DF.withColumn() method for each column to copy source into destination columns?
Will this perform well given billions of rows each with 110+ columns to copy?
Thank you | 0 | 1 | 9,487 |
0 | 53,875,465 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-19T13:11:00.000 | 1 | 1 | 0 | Linear algebra with Pyomo | 53,851,982 | 0.197375 | python,optimization,pyomo | Pyomo is mainly a package for optimization. i.e. specifying data -> building problem -> sending to the solver -> wait for solver's results -> retrieving solution. Even if it can handle matrix-like data, it cannot manipulate it with matrix operations. This should be done using a good external library, before you send your data to Pyomo. Once you have all your matrixes ready to be used as data in your optimization model, then you can use Pyomo for optimization.
That being said, you should look into finding a library that fits your needs to build your data, since your data values must be static, once you provide it as an input to your model.
Also, keep in mind that Pyomo, like any optimization tools, is deterministic. It is not meant to do data analysis or data description, but to provide a way to find one optimal solution of a mathematical problem. In your case, Pyomo is not meant to do the Kalman filter problem, but to give you the solution of minimizing the mean square error. | I'm trying put my optimization problem into Pyomo, but it is strongly dependent upon standard linear algebra operations - qr, inverse, transpose, product. Actually, this is Kalman filter problem; recursive linear algebra for long time series. I failed to find pyomo functions to implement it like I could in tensor flow. Is it possible?
Connected questions:
Am I right that numpy target function is practically not usable in pyomo?
Is there a better free optimization solution for the purpose? (scipy cannot approach efficiency of Matlab by far, tensor flow is extremely slow for particular problem, though I do not see why, algorithmic differentiation in Matlab was reasonably fast though not fast enough)
Many thanks,
Vladimir | 0 | 1 | 600 |
0 | 53,903,711 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-12-20T03:25:00.000 | 0 | 2 | 0 | Would a Logistic Regression Machine Learning Model Work Here? | 53,862,029 | 0 | python,machine-learning,neural-network,classification,logistic-regression | It's true that you need a lot of data for applying neural networks.
It would have been helpful if you could be more precise about your dataset and the features. You can also try implementing K-Means-Clustering for your project. If your aim is to find out that did the patient took medicine or not then it can be done using logistic regression. | I am in 10th grade and I am looking to use a machine learning model on patient data to find a correlation between the time of week and patient adherence. I have separated the week into 21 time slots, three for each time of day (1 is Monday morning, 2 is monday afternoon, etc.). Adherence values will be binary (0 means they did not take the medicine, 1 means they did). I will simulate training, validation and test data for my model. From my understanding, I can use a logistic regression model to output the probability of the patient missing their medication on a certain time slot given past data for that time slot. This is because logistic regression outputs binary values when given a threshold and is good for problems dealing with probability and binary classes, which is my scenario. In my case, the two classes I am dealing with are yes they will take their medicine, and no they will not. But the major problem with this is that this data will be non-linear, at least to my understanding. To make this more clear, let me give a real life example. If a patient has yoga class on Sunday mornings, (time slot 19) and tends to forget to take their medication at this time, then most of the numbers under time slot 19 would be 0s, while all the other time slots would have many more 1s. The goal is to create a machine learning model which can realize given past data that the patient is very likely going to miss their medication on the next time slot 19. I believe that logistic regression must be used on data that still has an inherently linear data distribution, however I am not sure. I also understand that neural networks are ideal for non-linear distributions, but neural networks require a lot of data to function properly, and ideally the goal of my model is to be able to function decently with simply a few weeks of data. Of course any model becomes more accurate with more data, but it seems to me that generally neural networks need thousands of data sets to truly become decently accurate. Again, I could very well be wrong.
My question is really what model type would work here. I do know that I will need some form of supervised classification. But can I use logistic regression to make predictions when given time of week about adherence?
Really any general feedback on my project is greatly appreciated! Please keep in mind I am only 15, and so certain statements I made were possibly wrong and I will not be able to fully understand very complex replies.
I also have to complete this within the next two weeks, so please do not hesitate to respond as soon as you can! Thank you so much! | 0 | 1 | 78 |
0 | 53,902,459 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2018-12-20T03:25:00.000 | 1 | 2 | 0 | Would a Logistic Regression Machine Learning Model Work Here? | 53,862,029 | 0.099668 | python,machine-learning,neural-network,classification,logistic-regression | In my opinion a logistic regression won't be enough for this as u are going to use a single parameter as input. When I imagine a decision line for this problem, I don't think it can be achieved by a single neuron(a logistic regression). It may need few more neurons or even few layers of them to do so. And u may need a lot of data set for this purpose. | I am in 10th grade and I am looking to use a machine learning model on patient data to find a correlation between the time of week and patient adherence. I have separated the week into 21 time slots, three for each time of day (1 is Monday morning, 2 is monday afternoon, etc.). Adherence values will be binary (0 means they did not take the medicine, 1 means they did). I will simulate training, validation and test data for my model. From my understanding, I can use a logistic regression model to output the probability of the patient missing their medication on a certain time slot given past data for that time slot. This is because logistic regression outputs binary values when given a threshold and is good for problems dealing with probability and binary classes, which is my scenario. In my case, the two classes I am dealing with are yes they will take their medicine, and no they will not. But the major problem with this is that this data will be non-linear, at least to my understanding. To make this more clear, let me give a real life example. If a patient has yoga class on Sunday mornings, (time slot 19) and tends to forget to take their medication at this time, then most of the numbers under time slot 19 would be 0s, while all the other time slots would have many more 1s. The goal is to create a machine learning model which can realize given past data that the patient is very likely going to miss their medication on the next time slot 19. I believe that logistic regression must be used on data that still has an inherently linear data distribution, however I am not sure. I also understand that neural networks are ideal for non-linear distributions, but neural networks require a lot of data to function properly, and ideally the goal of my model is to be able to function decently with simply a few weeks of data. Of course any model becomes more accurate with more data, but it seems to me that generally neural networks need thousands of data sets to truly become decently accurate. Again, I could very well be wrong.
My question is really what model type would work here. I do know that I will need some form of supervised classification. But can I use logistic regression to make predictions when given time of week about adherence?
Really any general feedback on my project is greatly appreciated! Please keep in mind I am only 15, and so certain statements I made were possibly wrong and I will not be able to fully understand very complex replies.
I also have to complete this within the next two weeks, so please do not hesitate to respond as soon as you can! Thank you so much! | 0 | 1 | 78 |
0 | 53,866,818 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-20T10:15:00.000 | 2 | 1 | 0 | I installed CUDA10 but Anaconda installs CUDA9. Can I remove the former? | 53,866,591 | 0.379949 | python,tensorflow,cuda,anaconda,gpu | If you installed via conda install tensorflow-gpu all dependencies are in the Conda environment (e.g., CUDA dlls are in the lib subfolder in the environment), so yes you can safely uninstall CUDA 10.
Note: at least on Ubuntu I saw that XLA JIT optimization of code (which is an experimental feature still) requires CUDA to be installed properly in the system (it looks from some binaries in the CUDA install dir and it seems to be hardcoded that way), but for normal TF execution the Conda setup is perfectly fine. | As a started with GPU programming, CUDA and Python I decided to install the latest version of CUDA (10) in order to experiment with ML.
After spending considerable time installing (huge downloads) I ended up with a version that isn't supported by TensorFlow.
I discovered the tensorflow-gpu meta package using Anaconda though! Now unfortunately I have two versions installed and I am not sure how can I uninstall the version 10! Any ideas?
Thanks! | 0 | 1 | 1,375 |
0 | 53,871,442 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-20T15:10:00.000 | 5 | 1 | 0 | How to choose between keras.backend and keras.layers? | 53,871,303 | 1.2 | python,tensorflow,keras,deep-learning,keras-layer | You should definitely use keras.layers if there is a layer that achieves what you want to do. That's because, when building a model, Keras layers only accept Keras Tensors (i.e. the output of layers) as the inputs. However, the output of methods in keras.backend.* is not a Keras Tensor (it is the backend Tensor, such as TensorFlow Tensor) and therefore you can't pass them directly to a layer.
However, if there is an operation that cannot be done with a layer, then you can use keras.backend.* methods in a Lambda layer to perform that custom operation/computation.
Note: Keras Tensor is actually the same type as the backend Tensor (e.g. tf.Tensor); however, it has been augmented with some additional Keras-specific attributes which Keras needs when building a model. | I found there are a lot of same names in keras.backend or keras.layers, for example keras.backend.concatenate and keras.layers.Concatenate. I know vaguely that one is for tensor while the other is for layer. But when the code is so big, so many function made me confused that which is tensor or which is layer. Anybody has a good idea to solve this problem?
One way I found is to define all placeholders in one function at first, but a function that takes them as variables may return layers in the end, while another function that takes this layer as a variable may return another variable. | 0 | 1 | 523 |
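A minimal sketch of the distinction, assuming standalone Keras (the same pattern works with tf.keras): the layer version is used directly, while the backend function has to be wrapped in a Lambda layer.

```python
from keras import backend as K
from keras.layers import Concatenate, Dense, Input, Lambda
from keras.models import Model

inp = Input(shape=(8,))
a = Dense(4)(inp)
b = Dense(4)(inp)

merged_layer = Concatenate()([a, b])                             # keras.layers: returns a Keras tensor
merged_backend = Lambda(lambda t: K.concatenate(t, -1))([a, b])  # keras.backend op wrapped in a layer

model = Model(inp, [merged_layer, merged_backend])
model.summary()
```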
0 | 53,885,612 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-20T16:20:00.000 | 0 | 1 | 0 | Image segmentation with 3 classes but one of them is easy to find, How can I write the network to not train on the easy one? | 53,872,418 | 0 | python-2.7,keras,neural-network,image-segmentation | I'm not sure if you can do that. I think You should apply some regularization and/or dropout to the network and feed it more data.
But what you could do is label all the empty pixels as noise, since signal is usually in the middle and noise is on the outer side of the signal graph. Then you train the network that way. You will have to set the network outputs to 2 classes: noise or signal. From the original image you know which pixels were empty, and then you can set those pixels from noise back to empty. Then you will have the result you wanted.
The only thing that can happen here is that the network will perform bad because of the imbalanced classes as you will have much more noise than signal pixels. | I am using MS-D or UNet network for image segmentation. My image has three classes: noise, signal and empty. The class empty is easy to find because the pixel values for the empty class is mainly -1 while for the two other classes is between 0-1.
Is there a way that I only ask the network to find noise and signal class and not bother the network about the easy one? Or any other clue that can help? I am seeing that the network sometimes is confused when predicting the signal pixels and gives about the same score but with higher to the signal class (e.g. empty0.0001, noise0.0003, signal0.0005) to all three classes. I want to make it easier for the network to figure it out.
Just more information about my image, around 25% of pixels are signal, 40% noise, and 35% are empty. I am using dice_coef for the metric and loss function. | 0 | 1 | 158 |
0 | 53,992,065 | 0 | 0 | 1 | 0 | 1 | false | 2 | 2018-12-20T20:15:00.000 | 3 | 2 | 0 | How to find row-echelon matrix form (not reduced) in Python? | 53,875,432 | 0.291313 | python,python-3.x,matrix | Bill M's answer is correct. When you find the LU decomposition the U matrix is a correct way of writing M in REF (note that REF is not unique so there are multiple possible ways to write it). To see why, remember that the LU decomposition finds P,L,U such that PLU = M. When L is full rank we can write this as U = (PL)-1M. So (PL)-1 defines the row operations that you have to preform on M to change it into U. | I am working on a project for my Linear Algebra class. I am stuck with a small problem. I could not find any method for finding the row-echelon matrix form (not reduced) in Python (not MATLAB).
Could someone help me out?
Thank you.
(I use python3.x) | 0 | 1 | 3,252 |
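A sketch of the LU route with SciPy (the example matrix is made up); U is upper triangular and is one valid row-echelon form, keeping in mind that REF is not unique and U corresponds to M up to the row permutation P:

```python
import numpy as np
from scipy.linalg import lu

M = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

P, L, U = lu(M)                   # SciPy convention: M = P @ L @ U
print(U)                          # an (unreduced) row-echelon form
print(np.allclose(P @ L @ U, M))  # True: sanity check of the factorization
```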
0 | 53,876,998 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-12-20T20:55:00.000 | 1 | 1 | 0 | Word Embedding Interpretation | 53,875,910 | 1.2 | python,tensorflow | Yes and yes. So, if you have "I" [4.55, 6.78], "like" [3.12, 8.17], and "dogs" [1.87, 10.95], each embedded representation roughly equates directly to each word, and thus the order isn't lost when the embedding is done. And yes, the shape would be (batch_size, 600, 15) for batches of 600-word-sentences and embedding dimension 15. I think the question you're indirectly asking is something like "Does each word directly correlate to a single embedding vector of length embedding_dimension?" aka "Does 'I' directly correlate to [4.55, 6.78] independent of the other words/embedding vectors?" For the most part, the answer is yes.
For what it's worth, it's been useful to think of it like languages. Doing a hash representation (excluding the duplicate values) or a categorical column with a unique value for each word is somewhat akin to how classic Chinese is, with a unique character for every word. Whereas embedded representations are more akin to the English language, with a "word" being a fixed number (embedding dimension) of letters (floats). The advantage is similar to how we gain advantages in the English language. For example "dog" vs "dogs" has 3 similar characters because they are very related concepts. Similarly, you can take advantage of embedding by representing "dog" as [1.23 4.56 7.89 1.12] and "dogs" as [1.23 4.56 7.89 9.87] or some such.
Random but I hope this helped. Good luck~~ =) | Before I ask the question, let me preface this by stating that this question has been answered in many articles, but I still struggle to understand the basic format of word embeddings.
Let's start with the sentence "I like dogs". Assuming a simple hashing approach, "I like dogs" can be represented in the vector [1, 4, 6] where the elements of the vector correspond to the hash of each word (assuming these aren't the only words in the vocabulary). From what I understand, this vector is fed into an embedding layer which adds an extra embedding dimension onto the input tensor of the RNN (doesn't have to be vanilla RNN).
The embedding tensor (with lets say an embedding dimension of 2) will look something like this for a single entry in the batch:
[[4.55, 6.78], -> I
[3.12, 8.17], -> like
[1.87, 10.95]] -> dogs
This tensor has the shape (1, 3, 2). Does the length of the second axis (3 in this case) always equal the length of the input vector and therefore represent each individual word in the sequence or do I have a fundamental misconception of how the tensorflow embeddings work?
To clarify: say I had a much longer sentence with 600 words, would each word after embedding be remembered in their original order and be represented by a vector of whatever size was chosen for the embedding dimension (we'll say 15), thus making the shape of the embedded tensor (batch_size, 600, 15)?
Note: these are just random numbers and don't represent anything in particular. | 0 | 1 | 85 |
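A quick shape check of the 600-word example in the question, assuming tf.keras (the vocabulary size and batch size are arbitrary):

```python
import numpy as np
import tensorflow as tf

vocab_size, seq_len, emb_dim = 10_000, 600, 15
ids = np.random.randint(0, vocab_size, size=(32, seq_len))   # batch of token ids

embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=emb_dim)
out = embedding(ids)
print(out.shape)   # (32, 600, 15): one 15-dim vector per word, original order preserved
```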
0 | 53,965,459 | 0 | 0 | 1 | 0 | 1 | false | 0 | 2018-12-21T02:43:00.000 | 0 | 1 | 0 | Backtesting a Universe of Stocks | 53,878,551 | 0 | python,excel,stocks,universe,back-testing | The amout of data is too much for EXCEL or CALC. Even if you want to screen only 500 Stocks from S&P 500, you will get 2,2 Millions of rows (approx. 220 days/year * 20 years * 500 stocks). For this amount of data, you should use a SQL Database like MySQL. It is performant enough to handle this amount of data. But you have to find a way for updating. If you get the complete time series daily and store it into your database, this process can take approx. 1 hour. You could also use delta downloads but be aware of corporate actions (e.g. splits).
I don't know Quantopia, but I know a similar backtesting service where I have created a python backtesting script last year. The outcome was quite different to what I have expected. The research result was that the backtesting service was calculating wrong results because of wrong data. So be cautious about the results. | I would like to develop a trend following strategy via back-testing a universe of stocks; lets just say all NYSE or S&P500 equities. I am asking this question today because I am unsure how to handle the storage/organization of the massive amounts of historical price data.
After multiple hours of research I am here, asking for your experience and awareness. I would be extremely grateful for any information/awareness you can share on this topic
Personal Experience background:
-I know how to code. Was a Electrical Engineering major, not a CS major.
-I know how to pull in stock data for individual tickers into excel.
Familiar with using filtering and custom studies on ThinkOrSwim.
Applied Context:
From 1995 to today let's evaluate the best performing equities on a relative strength/momentum basis. We will look to compare many technical characteristics to develop a strategy. The key to this is having data for a universe of stocks that we can run backtests on using Python, C#, R, or any other coding language. We can then determine possible strategies by assessing the returns, the omega ratio, median excess returns, and Jensen's alpha (measured weekly) of entries and exits that are technically driven.
Here's where I am having trouble figuring out what the next step is:
-Loading data for all S&P500 companies into a single excel workbook is just not gonna work. Its too much data for excel to handle I feel like. Each ticker is going to have multiple MB of price data.
-What is the best way to get and then store the price data for each ticker in the universe? Are we looking at something like SQL or Microsoft access here? I dont know; I dont have enough awareness on the subject of handling lots of data like this. What are you thoughts?
I have used ToS to filter stocks based off of true/false parameters over a period of time in the past; however the capabilities of ToS are limited.
I would like a more flexible backtesting engine like code written in python or C#. Not sure if Rscript is of any use. - Maybe, there are libraries out there that I do not have awareness of that would make this all possible? If there are let me know.
I am aware that Quantopia and other web based Quant platforms are around. Are these my best bets for backtesting? Any thoughts on them?
Am I making this too complicated?
Backtesting a strategy on a single equity or several equities isnt a problem in excel, ToS, or even Tradingview. But with lots of data Im not sure what the best option is for storing that data and then using a python script or something to perform the back test.
Random Final thought:-Ultimately would like to explore some AI assistance with optimizing strategies that were created based off parameters. I know this is a thing but not sure where to learn more about this. If you do please let me know.
Thank you guys. I hope this wasn't too much. If you can share any knowledge to increase my awareness on the topic I would really appreciate it.
Twitter:@b_gumm | 0 | 1 | 603 |
0 | 53,882,507 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-21T09:36:00.000 | 1 | 1 | 0 | How to create a tensorflow network of two saved tensorflow networks? | 53,882,317 | 1.2 | python,python-3.x,tensorflow | I am not sure if I got your point correctly, but Block Based Neural Networks might be what you are searching for. In BBNN each node can be a neural network and w.r.t what you describe one layer BBNN is what you need. | Let's say I've trained and saved 6 different networks where all of the values for hidden layer counts, neuron counts, and learn rates differ.
For example:
1 with 8 hidden layers with 16 neurons in each trained at .1 learn rate.
1 with 4 hidden layers with 4 neurons in each trained at .01 learn rate.
1 with 4 hidden layers with 4 neurons in each trained at .03 learn rate.
1 with 4 hidden layers with 8 neurons in each trained at .01 learn rate.
1 with 4 hidden layers with 8 neurons in each trained at .001 learn rate.
1 with 6 hidden layers with 4 neurons in each trained at .01 learn rate.
How could I create a new network where each of these saved networks acts basically as a neuron? While training this combined network, I don't want to affect the saved network weights and biases, but rather would like to essentially determine which one is more accurate for a given input.
I've achieved this in practice by loading each network, running the data through each network, and then storing all of the outputs which then later feed into the new network, but I feel like there must be a simpler and most importantly, a faster way of doing this.
An example might be two networks: image detection at night and image detection during the day. Each trained and saved separately. I would want another network which essentially takes an image and says "oh we're somewhere in the middle here so let's use 50/50, or oh it's closer to night, but not night completely, use 90% night data and 10% day." So I would want to feed loads of images where it tries each model, but then weights out how valued the data was based on night vs day to create a network which works for either night or day.
Any help highly appreciated. In reality the network I'm shooting for is far bigger and more complicated, but I'm looking for a strategy. | 0 | 1 | 36 |
0 | 53,905,344 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-23T10:29:00.000 | 0 | 1 | 0 | Slicing Array of four matrices | 53,902,843 | 1.2 | python,numpy,numpy-slicing | It's not really clear what RM and M are based on your description.
Is M the ndarray containing all 4 images, and RM the 2x2 array for a given pixel containing the data from the 4 images?
You can put the 4 images into the same ndarray so it has shape (4,N,M) and then reshape slices.
For example, to get the (0,0) entry you would do A[:,0,0] to get the 4 pixels, and then reshape it to get a 2x2 array. | I've got an array of 4 images, each image lets say is NxM (all images share this same size)
(I'm implementing a Harris Corner detector by the way.)
Now I made a matrix M = ([Ix^2, Ixy],[Ixy, Iy^2]).reshape(2,2)
and now I'd like to compute my response.
which is usually Det(RM) - k*(trace(RM)**2)
RM being a 2x2 Matrix each point in this matrix is derived from the same coordinate location for each image in M.
How can I slice M to create RM?
In other words how can I slice the Matrix M to create a smaller matrix 2x2 RM for every pixel in the NxM images?
For example the first RM matrix should be a 2x2 matrix taking the 0,0 coordinate from each image in M. | 0 | 1 | 54 |
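One way to do that slicing, sketched with numpy (the names Ixx, Ixy, Iyy and the Harris constant k = 0.04 are assumptions): stack the per-pixel entries into shape (N, M, 2, 2) so A[i, j] is exactly the 2x2 matrix RM for pixel (i, j), and the response can then be computed for all pixels at once.

```python
import numpy as np

N, M = 4, 5
Ixx, Ixy, Iyy = (np.random.rand(N, M) for _ in range(3))

# A[i, j] == [[Ixx[i, j], Ixy[i, j]], [Ixy[i, j], Iyy[i, j]]]
A = np.stack([np.stack([Ixx, Ixy], axis=-1),
              np.stack([Ixy, Iyy], axis=-1)], axis=-2)   # shape (N, M, 2, 2)

RM = A[0, 0]                                             # 2x2 matrix at pixel (0, 0)
print(RM)

k = 0.04
R = np.linalg.det(A) - k * np.trace(A, axis1=-2, axis2=-1) ** 2
print(R.shape)                                           # (N, M): response per pixel
```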
0 | 56,282,819 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2018-12-24T07:37:00.000 | 1 | 2 | 0 | module 'seaborn' has no attribute 'relplot' | 53,910,548 | 0.099668 | python,seaborn,google-colaboratory | Change directory to where pip3.exe is located:
for me: cd C:\Users\sam\AppData\Local\Programs\Python\Python37-32\Scripts
use .\
.\pip3 install seaborn==0.9.0 | I'm having trouble running the relplot function in a Colab notebook, but it works fine in a Jupyter notebook.
Getting the following error in colab
AttributeError Traceback (most recent call
last) in ()
----> 1 sns.relplot(x="total_bill", y="tip",
2 col="time", # Categorical variables that will determine the faceting of the grid.
3 hue="smoker", # Grouping variable that will produce elements with different colors.
4 style="smoker", # Grouping variable that will produce elements with different styles.
5 size="size", # Grouping variable that will produce elements with different sizes.
AttributeError: module 'seaborn' has no attribute 'relplot' | 0 | 1 | 11,284 |
0 | 54,089,983 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-25T10:26:00.000 | 0 | 4 | 0 | How to train your own model in AWS Sagemaker? | 53,921,454 | 0 | python,amazon-web-services,tensorflow,keras,amazon-sagemaker | You can convert your Keras model to a tf.estimator and train using the TensorFlow framework estimators in Sagemaker.
This conversion is pretty basic though; I reimplemented my models in TensorFlow using the tf.keras API, which makes the model nearly identical, and trained with the SageMaker TF estimator in script mode.
My initial approach using pure Keras models was based on bring-your-own-algo containers similar to the answer by Matthew Arthur. | I just started with AWS and I want to train my own model with own dataset. I have my model as keras model with tensorflow backend in Python. I read some documentations, they say I need a Docker image to load my model. So, how do I convert keras model into Docker image. I searched through internet but found nothing that explained the process clearly. How to make docker image of keras model, how to load it to sagemaker. And also how to load my data from a h5 file into S3 bucket for training? Can anyone please help me in getting clear explanation? | 1 | 1 | 2,089 |
0 | 53,925,094 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-25T13:11:00.000 | 0 | 1 | 0 | What data structure to use for ranking system which divides itself in groups? | 53,922,685 | 0 | django,python-3.x,data-structures | Just save the ranking score for every student. Calculate their group when you displaying them. | I have a quiz app where students can take tests. There is ranking based on every test. It's implemented with simple lists (Every new score is inserted into the list and then sorted (index+1 is the rank)).
But I want to add another abstraction. ie. Suppose 1000 students took the test and my ranking was 890. But those 1000 students should automatically be divided into 10 groups ie. group 1 of ranking 1 to 99, group2 of ranking 100 to 199 and so on. So if my overall ranking is 890. I should be subscribed to group 9 with 90th rank in that group.
How should this be implemented? | 0 | 1 | 27 |
0 | 53,944,352 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2018-12-27T09:49:00.000 | 0 | 1 | 0 | how to constrain scipy curve_fit in positive result | 53,942,983 | 0 | python,scipy | One of the simpler ways to handle negative value in y, is to make a log transformation. Get the best fit for log transformed y, then do exponential transformation for actual error in the fit or for any new value prediction. | I'm using scipy curve_fit to curve a line for retention. however, I found the result line may produce negative number. how can i add some constrain?
the 'bounds' argument only constrains the parameters, not the resulting y | 0 | 1 | 149 |
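A hedged sketch of that log-transform idea with scipy.optimize.curve_fit (the model form and the retention-like data are made up): fitting log(y) and exponentiating the prediction guarantees a positive result.

```python
import numpy as np
from scipy.optimize import curve_fit

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0.90, 0.55, 0.40, 0.30, 0.22, 0.18])   # retention must stay positive

def log_model(x, a, b):
    return a + b * np.log(x)          # model for log(y), not for y itself

params, _ = curve_fit(log_model, x, np.log(y))        # fit in log space
y_pred = np.exp(log_model(x, *params))                # back-transform: always > 0
print(y_pred)
```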
0 | 53,962,197 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2018-12-27T20:37:00.000 | 1 | 2 | 0 | Create dataframe from Excel attachment in Outlook | 53,950,601 | 1.2 | python,excel,pandas,outlook | Attachments are MIME-encoded and have to be decoded back into the original format (which essentially means making a disk copy) for programs that are expecting that format.
What you want is to give pandas the identifier of the email, the name of the attachment, the details of the message store, and suitable authentication, and have pandas read the attachment directly. This would entail extending the function pandas.read_csv() or maybe adding a new function read_csv_attachment().
While I am sure this is possible, it is a more ambitious project than I (as one unfamiliar with pandas internals) would want to tackle myself. And certainly much more work than saving the attachments manually, unless you have thousands of them. | Is it possible to read an Excel file from an Outlook attachment without saving it, and return a pandas dataframe from the attached file? The file will always be in the same format. | 0 | 1 | 2,158 |
0 | 53,964,532 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2018-12-28T09:02:00.000 | 0 | 1 | 0 | How to get list of context words in Gensim | 53,955,958 | 0 | python,gensim,word2vec,fasttext | The plain model doesn't retain any such co-occurrence statistics from the original corpus. It just has the trained results: vectors per word.
So, the ranked list of most_similar() vectors – which isn't exactly words that appeared-together, but strongly correlates to that – is the best you'll get from that file.
Only going back to the original training corpus would give you exactly what you've requested. | How to get most frequent context words from pretrained fasttext model?
For example:
For word 'football' and corpus ["I like playing football with my friends"]
Get list of context words: ['playing', 'with','my','like']
I try to use
model_wiki = gensim.models.KeyedVectors.load_word2vec_format("wiki.ru.vec")
model.most_similar("блок")
But that doesn't give me what I need. | 0 | 1 | 473
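A small pure-Python sketch of the "go back to the original training corpus" suggestion in the answer above: count words that co-occur with the target inside a fixed window. The window size is an arbitrary assumption.

from collections import Counter

def context_words(target, corpus, window=2):
    counts = Counter()
    for sentence in corpus:                       # corpus: iterable of raw sentences
        tokens = sentence.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target:
                counts.update(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
    return counts.most_common()

print(context_words('football', ["I like playing football with my friends"]))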
0 | 63,126,102 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2018-12-28T10:28:00.000 | 1 | 2 | 0 | AttributeError: 'AxesSubplot' object has no attribute 'hold' | 53,957,042 | 0.099668 | python-3.x | The API Changes document says:
Setting or unsetting hold (deprecated in version 2.0) has now been completely removed. Matplotlib now always behaves as if hold=True. To clear an axes you can manually use cla(), or to clear an entire figure use clf(). | I changed to a new computer and installed Python 3.6 and matplotlib. When I run the code that worked last month on the old computer, I get the following error:
ax.hold(True)
AttributeError: 'AxesSubplot' object has no attribute 'hold' | 0 | 1 | 5,619 |
0 | 53,958,035 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-28T11:27:00.000 | 0 | 2 | 0 | What is the difference between Methods and Properties for an object in python? | 53,957,850 | 0 | python,methods,properties | In the example you mentioned, you can pass an argument to the df.head() method, whereas you cannot pass arguments to properties.
For the same example, df.head(20) would return the first 20 rows. | Suppose I have a DataFrame object named df. head() is a method that can be applied to df to see the first 5 records of the DataFrame, and df.size is a property that gives the size of the DataFrame.
For the property we do not use '()' as we do for a method, which was a little confusing initially.
Could anyone explain what the basic difference between a property and a method in Python is? I mean, why was size defined as a property of a DataFrame rather than as a method that simply returns the size of the DataFrame? | 0 | 1 | 311
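A toy illustration of the distinction discussed above (not pandas internals): size is exposed as a read-only property, while head() stays a method because it takes an argument.

class Box:
    def __init__(self, items):
        self._items = items

    @property
    def size(self):            # accessed as box.size, no parentheses, no arguments
        return len(self._items)

    def head(self, n=2):       # called as box.head(3), parentheses, can take arguments
        return self._items[:n]

box = Box([1, 2, 3, 4])
print(box.size)                # 4
print(box.head(3))             # [1, 2, 3]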
0 | 53,968,421 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2018-12-29T08:57:00.000 | 0 | 1 | 0 | Tuple treated as single value in group by statement, any workaround? | 53,968,141 | 0 | python,pandas | First I had to use a list as suggested by Gennady Kandaurov, and to later rename the columns I just had to add the two lists.
target = ['Shop', 'Route']
DF1.columns = target + ['static columns'] | I have some calculations roughly looking like this: trip_count = DF_Trip.groupby([target], as_index=False)['Delivery'].count()
All my DFs could possibly be grouped by Shop, Route and Driver. When I enter a single value for target, e.g. target = 'Route', it works fine.
But when I want to enter multiple values, e.g. target = 'Shop', 'Route', it only works when I enter them directly in place of the variable, e.g. trip_count = DF_Trip.groupby(['Shop', 'Route'], as_index=False)['Delivery'].count(); when I set the variable to target = 'Shop', 'Route', it gives me a ton of errors.
I've realized from debugging that target = 'Shop', 'Route' is treated as a tuple, and I read in the pandas groupby documentation that tuples are treated as a single value. Is there any workaround for this? | 0 | 1 | 35
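A short sketch of the accepted fix, reusing the DF_Trip frame from the question: store the grouping columns in a list, not in a bare tuple of strings.

target = ['Shop', 'Route']                 # a list, not target = 'Shop', 'Route'
trip_count = DF_Trip.groupby(target, as_index=False)['Delivery'].count()

# a single column still works the same way
trip_count_by_route = DF_Trip.groupby(['Route'], as_index=False)['Delivery'].count()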
0 | 53,976,520 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2018-12-29T16:27:00.000 | 1 | 1 | 0 | Analyze just Pretty_Midi Instruments | 53,971,287 | 1.2 | python,artificial-intelligence,midi,music21,midi-instrument | In MIDI files, bank and program numbers uniquely identify instruments.
In General MIDI, drums are on channel 10 (and, in theory, should not use a Program Change message).
In GM2/GS/XG, the defaults for drums are the same, but can be changed with bank select messages. | Trying to figure out a good way of solving this problem but wanted to ask for the best way of doing this.
In my project, I am looking at multiple instrument note pairs for a neural network. The only problem is that there are multiple instruments with the same name and just because they have the same name doesn't mean that they are the same instrument 100% of the time. (It should be but I want to be sure.)
I personally would like to analyze the instrument itself (like metadata on just the instrument in question) and not the notes associated with it. Is that possible?
I should also mention that I am using pretty-midi to collect the musical instruments. | 0 | 1 | 431 |
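A minimal pretty_midi sketch for inspecting the instrument metadata itself (program number and drum flag) rather than the notes; 'song.mid' is a placeholder path.

import pretty_midi

pm = pretty_midi.PrettyMIDI('song.mid')
for inst in pm.instruments:
    print(inst.name,                                          # track name stored in the file
          inst.program,                                       # General MIDI program number 0-127
          pretty_midi.program_to_instrument_name(inst.program),
          inst.is_drum)                                       # True if the track sits on the drum channel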
0 | 53,976,226 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-12-29T23:04:00.000 | 0 | 2 | 0 | Reinforcement Learning Using Multiple Stock Ticker’s Datasets? | 53,974,005 | 0 | python-3.x,tensorflow,reinforcement-learning,stocks,openai-gym | Thanks to @Primusa I normalized my separate datasets by dividing each value by their respective maximums, then combined the datasets into one for training. Thanks! | Here’s a general question that maybe someone could point me in the right direction.
I’m getting into Reinforcement Learning with Python 3.6/Tensorflow and I have found/tweaked my own model to train on historical data from a particular stock. My question is, is it possible to train this model on more than just one stock’s dataset? Every single machine learning article I’ve read on time series prediction and RL uses one dataset for training and testing, but my goal is to train a model on a bunch of tickers with varying prices in the hopes that the model can recognize similar price patterns, regardless of the price or ticker so that I could apply the trained model to a new dataset and it’ll work.
Right now it trains on one ticker and it’s prices, but when I try to add a new dataset for added training, it performs horribly because it doesn’t know the new prices, if that makes sense.
This is a basic question and I don’t necessarily expect a coded answer, just somewhere I could learn how to train a model using multiple datasets. I’m using OpenAI gym environment if that helps anything.
Thanks! | 0 | 1 | 296 |
0 | 62,697,169 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2018-12-29T23:04:00.000 | 0 | 2 | 0 | Reinforcement Learning Using Multiple Stock Ticker’s Datasets? | 53,974,005 | 0 | python-3.x,tensorflow,reinforcement-learning,stocks,openai-gym | I think normalizing the dataset with % change from previous close on all datasets could be a good start. in that way, any stock with any price seems normalized. | Here’s a general question that maybe someone could point me in the right direction.
I’m getting into Reinforcement Learning with Python 3.6/Tensorflow and I have found/tweaked my own model to train on historical data from a particular stock. My question is, is it possible to train this model on more than just one stock’s dataset? Every single machine learning article I’ve read on time series prediction and RL uses one dataset for training and testing, but my goal is to train a model on a bunch of tickers with varying prices in the hopes that the model can recognize similar price patterns, regardless of the price or ticker so that I could apply the trained model to a new dataset and it’ll work.
Right now it trains on one ticker and it’s prices, but when I try to add a new dataset for added training, it performs horribly because it doesn’t know the new prices, if that makes sense.
This is a basic question and I don’t necessarily expect a coded answer, just somewhere I could learn how to train a model using multiple datasets. I’m using OpenAI gym environment if that helps anything.
Thanks! | 0 | 1 | 296 |
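A small pandas sketch of the percent-change normalisation suggested in the answer above; aapl_df, msft_df and tsla_df are placeholder per-ticker frames assumed to have a 'Close' column.

import pandas as pd

def normalize(df):
    out = df.copy()
    out['Close'] = out['Close'].pct_change()   # daily % change instead of raw price
    return out.dropna()

combined = pd.concat([normalize(df) for df in [aapl_df, msft_df, tsla_df]], ignore_index=True)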
0 | 67,994,767 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2018-12-31T15:23:00.000 | 1 | 3 | 0 | Get training hyperparameters from a trained keras model | 53,988,984 | 0.066568 | python,keras,hdf5 | Configuration - model.get_config()
Optimizer config - model.optimizer.get_config()
Training config - model.history.params (this will be empty if the model is saved and reloaded)
Loss function - model.loss | I am trying to figure out some of the hyperparameters used for training some old Keras models I have. They were saved as .h5 files. When using model.summary(), I get the model architecture, but no additional metadata about the model.
When I open this .h5 file in notepad++, most of the file is not human readable, but there are bits that I can understand, for instance;
{"loss_weights": null, "metrics": ["accuracy"], "sample_weight_mode":
null, "optimizer_config": {"config": {"decay": 0.0, "momentum":
0.8999999761581421, "nesterov": false, "lr": 9.999999747378752e-05}, "class_name": "SGD"}, "loss": "binary_crossentropy"}
which is not present in the output printed by model.summary().
Is there a way to make these files human readable or to get a more expanded summary that includes version information and training parameters? | 0 | 1 | 3,271 |
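A short sketch of reading those attributes back from a saved model, as listed in the answer above; 'old_model.h5' is a placeholder path.

from keras.models import load_model

model = load_model('old_model.h5')
print(model.loss)                            # e.g. 'binary_crossentropy'
print(model.optimizer.__class__.__name__)    # e.g. 'SGD'
print(model.optimizer.get_config())          # lr, momentum, decay, nesterov, ...
print(model.get_config())                    # full layer-by-layer architecture config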
0 | 54,002,191 | 0 | 1 | 0 | 0 | 1 | true | 70 | 2019-01-01T19:23:00.000 | 71 | 1 | 0 | How does the "number of workers" parameter in PyTorch dataloader actually work? | 53,998,282 | 1.2 | python,memory-management,deep-learning,pytorch,ram | When num_workers>0, only these workers will retrieve data, main process won't. So when num_workers=2 you have at most 2 workers simultaneously putting data into RAM, not 3.
Well, a CPU can usually run around 100 processes without trouble, and these worker processes aren't special in any way, so having more workers than CPU cores is OK. But is it efficient? It depends on how busy your CPU cores are with other tasks, the speed of the CPU, the speed of your hard disk, etc. In short, it's complicated, so setting the number of workers to the number of cores is a good rule of thumb, nothing more.
Nope. Remember DataLoader doesn't just randomly return from what's available in RAM right now, it uses batch_sampler to decide which batch to return next. Each batch is assigned to a worker, and main process will wait until the desired batch is retrieved by assigned worker.
Lastly to clarify, it isn't DataLoader's job to send anything directly to GPU, you explicitly call cuda() for that.
EDIT: Don't call cuda() inside Dataset's __getitem__() method, please look at @psarka's comment for the reasoning | If num_workers is 2, Does that mean that it will put 2 batches in the RAM and send 1 of them to the GPU or Does it put 3 batches in the RAM then sends 1 of them to the GPU?
What actually happens when the number of workers is higher than the number of CPU cores? I tried it and it worked fine, but how does it work? (I thought that the maximum number of workers I can choose is the number of cores.)
If I set num_workers to 3 and during training there were no batches in memory for the GPU, does the main process wait for its workers to read the batches, or does it read a single batch (without waiting for the workers)? | 0 | 1 | 54,707
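A minimal usage sketch tying the points in the answer above together: workers fill batches in RAM, and the transfer to the GPU is an explicit .to(device) call; train_dataset is a placeholder Dataset.

import torch
from torch.utils.data import DataLoader

loader = DataLoader(train_dataset, batch_size=64, shuffle=True,
                    num_workers=2, pin_memory=True)   # 2 worker processes prepare batches in RAM

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
for inputs, targets in loader:
    inputs, targets = inputs.to(device), targets.to(device)  # explicit transfer, not the loader's job
    ...  # forward/backward pass here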
0 | 54,008,906 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-01-02T06:13:00.000 | 0 | 2 | 0 | Dask Dataframe View Entire Row | 54,002,006 | 0 | python-3.x,dask | Dask does not normally display the data in a dataframe at all, because it represents lazily-evaluated values. You may want to get a specific row by index, using the .loc accessor (same as in Pandas, but only efficient if the index is known to be sorted).
If you meant to get the whole list of columns only, you can get this by the .columns attribute. | I want to see the entire row for a dask dataframe without the fields being cutoff, in pandas the command is pd.set_option('display.max_colwidth', -1), is there an equivalent for dask? I was not able to find anything. | 0 | 1 | 2,155 |
0 | 54,002,342 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-02T06:46:00.000 | 0 | 1 | 0 | What is use of function mnist.train.next_batch() in training dataset? | 54,002,301 | 0 | python,tensorflow | The function sample batch_size number of samples from a shuffled training dataset, then return the batch for training.
You could write your own next_batch() method that does the same thing, or modify it as you wish, and then use it in the same way when you're training your model. | I am using TensorFlow to train a capsule network on my own dataset. The MNIST training code uses the function mnist.train.next_batch(batch_size). How can I replace this function when training on my own dataset with TensorFlow? | 0 | 1 | 472
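A minimal NumPy sketch of a replacement next_batch(), as suggested in the answer above: shuffle once per epoch and hand out slices.

import numpy as np

class BatchFeeder:
    def __init__(self, images, labels):
        self.images, self.labels = images, labels
        self._order = np.random.permutation(len(images))
        self._pos = 0

    def next_batch(self, batch_size):
        if self._pos + batch_size > len(self.images):        # epoch finished: reshuffle
            self._order = np.random.permutation(len(self.images))
            self._pos = 0
        idx = self._order[self._pos:self._pos + batch_size]
        self._pos += batch_size
        return self.images[idx], self.labels[idx]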
0 | 54,019,576 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-01-03T09:31:00.000 | 1 | 2 | 0 | Documentations for Numpy Functions in Jupyter | 54,019,510 | 0.099668 | python,jupyter-notebook | Highlight and press SHIFT + TAB. | Is it possible to display documentation of numpy functions from jupyter notebook?
help(linspace) did not work for me | 0 | 1 | 55 |
0 | 54,025,044 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-01-03T14:52:00.000 | 0 | 2 | 0 | error while installing tensorflow in conda environment (CondaError: Cannot link a source that does not exist.) | 54,024,671 | 1.2 | python,tensorflow,anaconda,conda | Try to run conda clean --all --yes and conda update anaconda.
Do you have a conda.exe file in the following folder C:\ProgramData\Anaconda3\Scripts\?
Do you use the latest Conda?
Another solution could be to create a conda environments conda create -n name_environment pip python=3.5 and using pip to install tensorflow pip install tensorflow inside the new environment
after having activated it (activate name_environment).
P.S. I can not write a comment because I do not have enough reputation.
EDIT - Now i can! | trying to install tensorflow using conda package manager
using following command
conda install -c conda-forge tensorflow
but it gives following error while executing transaction
CondaError: Cannot link a source that does not exist.
C:\ProgramData\Anaconda3\Scripts\conda.exe | 0 | 1 | 2,069 |
0 | 54,032,226 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-03T23:38:00.000 | 1 | 1 | 0 | k means clustering with fixed constraints (sum of specific attribute should be less than or equal 90,000) | 54,031,283 | 0.197375 | python,cluster-analysis,mean,arcgis | A turnkey solution will not work for you.
You'll have to formulate this as a standard constrained optimization problem and run a solver on it. It's fairly straightforward: take the k-means objective and add your constraints... | Suppose I have 20,000 features on a map, and each feature has many attributes (as well as latitude and longitude). One of the attributes is called population.
I want to split these 20,000 features into 3 clusters where the total sum of population in each cluster is equal to a specific value, 90,000, and the features in each cluster are near each other (i.e. locations are taken into consideration).
So, the output clusters should have the following conditions:
Sum(population) of all points/items/features in cluster 1=90,000
Sum(population) of all points/items/features in cluster 2=90,000
Sum(population) of all points/items/features in cluster 3=90,000
I tried to use the k-mean clustering algorithm which gave me 3 clusters, but how to force the above constraint (sum of population should equal 90,000)
Any idea is appreciated. | 0 | 1 | 791 |
0 | 54,173,014 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-04T08:03:00.000 | 2 | 1 | 0 | Do Dash apps reload all data upon client log in? | 54,035,114 | 1.2 | python,performance,plotly-dash | The only thing that is called on every page load is the function you can assign to app.layout. This is useful if you want to display dynamic content like the current date on your page.
Everything else is just executed once when the app is starting.
This means if you load your data outside the app.layout (which I assume is the case) everything is loaded just once. | I'm wondering about how a dash app works in terms of loading data, parsing and doing initial calcs when serving to a client who logs onto the website.
For instance, my app initially loads a bunch of static local csv data, parses a bunch of dates and loads them into a few pandas data frames. This data is then displayed on a map for the client.
Does the app have to reload/parse all of this data every time a client logs onto the website? Or does the dash server load all the data only the first time it is instantiated and then just dish it out every time a client logs on?
If the data reloads every time, I would then use quick parsers like udatetime, but if not, id prefer to use a convenient parser like pendulum which isn't as efficient (but wouldn't matter if it only parses once).
I hope that question makes sense. Thanks in advance! | 1 | 1 | 50 |
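A minimal sketch of the behaviour described in the answer above: assigning a function to app.layout makes it run on every page load, while everything at module level runs once at startup.

import datetime
import dash
import dash_html_components as html

app = dash.Dash(__name__)
# heavy CSV loading / date parsing would go here: executed once at startup

def serve_layout():
    # executed on every page load
    return html.Div('Page served at {:%H:%M:%S}'.format(datetime.datetime.now()))

app.layout = serve_layout          # note: the function itself, not serve_layout()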
0 | 54,041,995 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2019-01-04T15:20:00.000 | 1 | 4 | 0 | Shuffling with constraints on pairs | 54,041,705 | 0.049958 | python,shuffle | A possible solution is to think of your number set as n chunks of item, each chunk having the length of m. If you randomly select for each chunk exactly one item from each lists, then you will never hit dead ends. Just make sure that the first item in each chunk (except the first chunk) will be of different list than the last element of the previous chunk.
You can also iteratively randomize numbers, always making sure you pick from a different list than the previous number, but then you can hit some dead ends.
Finally, another possible solution is to fill each position sequentially with a random number, but only from those which "can be put there", that is, numbers that violate none of the constraints once placed and still leave at least one possible completion. | I have n lists, each of length m; assume n*m is even. I want to get a randomly shuffled list with all elements, under the constraint that the elements in locations i, i+1 where i=0,2,...,n*m-2 never come from the same list. Edit: other than this constraint I do not want to bias the distribution of random lists; that is, the solution should be equivalent to a completely random choice that is reshuffled until the constraint holds.
example:
list1: a1,a2
list2: b1,b2
list3: c1,c2
allowed: b1,c1,c2,a2,a1,b2
disallowed: b1,c1,c2,b2,a1,a2 | 0 | 1 | 247 |
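A small sketch of the asker's own baseline ("reshuffle until the constraint holds"), which keeps the distribution unbiased; it may loop many times if valid orderings are rare for a given input.

import random

def constrained_shuffle(lists):
    tagged = [(x, i) for i, lst in enumerate(lists) for x in lst]   # remember each item's source list
    while True:
        random.shuffle(tagged)
        if all(tagged[k][1] != tagged[k + 1][1] for k in range(0, len(tagged), 2)):
            return [x for x, _ in tagged]

print(constrained_shuffle([['a1', 'a2'], ['b1', 'b2'], ['c1', 'c2']]))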
0 | 54,088,978 | 0 | 1 | 0 | 0 | 2 | false | 2 | 2019-01-04T15:20:00.000 | 1 | 4 | 0 | Shuffling with constraints on pairs | 54,041,705 | 0.049958 | python,shuffle | A variation of b above that avoids dead ends: At each step you choose twice. First, randomly chose an item. Second, randomly choose where to place it. At the Kth step there are k optional places to put the item (the new item can be injected between two existing items). Naturally, you only choose from allowed places.
Money! | I have n lists each of length m. assume n*m is even. i want to get a randomly shuffled list with all elements, under the constraint that the elements in locations i,i+1 where i=0,2,...,n*m-2 never come from the same list. edit: other than this constraint i do not want to bias the distribution of random lists. that is, the solution should be equivalent to a complete random choice that is reshuffled until the constraint hold.
example:
list1: a1,a2
list2: b1,b2
list3: c1,c2
allowed: b1,c1,c2,a2,a1,b2
disallowed: b1,c1,c2,b2,a1,a2 | 0 | 1 | 247 |
0 | 54,049,978 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-04T16:12:00.000 | 0 | 1 | 0 | TimeDistribution Wrapper Fails the Compilation | 54,042,532 | 1.2 | python,tensorflow,video,keras | The problem was that input_shape must be specified outside Conv2D and inside TimeDistributed. Keep in mind it must be 4D '(batch_size, width, height, channels)' | I have an extremely simple cnn which i will be trying to bind to an rnn (but that in the future). For now, all I have is conv2D->maxpool>conv2d->maxpool->dense->dense. The CNN works well, no problems, compiles, runs.
model.add(TimeDistributed(Conv2D(..., input_shape=(32,32,1))))
RuntimeError: You must compile your model before using it.
And of course, model.compile() comes immediately after the model definition, with .fit following the compile...
Hence, is it me not getting something right, or is it really an issue with the current Keras build? | 0 | 1 | 74
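A minimal sketch of the fix described in the accepted answer, with input_shape given to the TimeDistributed wrapper; the frame count and layer sizes are arbitrary placeholders.

from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, Dense

frames = 10
model = Sequential()
model.add(TimeDistributed(Conv2D(16, (3, 3), activation='relu'),
                          input_shape=(frames, 32, 32, 1)))    # no batch dimension here
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')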
0 | 59,746,722 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-01-05T08:26:00.000 | 0 | 1 | 0 | Invalid Argument error:Load a (frozen) Tensorflow model into memory (While testing the model on local machine) | 54,050,290 | 0 | python,python-3.x,tensorflow,object-detection-api | I had a similar issue. The solution for me was to take my GPU training files from TF1.9 and move them to my local TF1.5 CPU environment (which doesn't support AVX instructions). I then created the frozen model on the local environment from the training files and was successfully able to use it. | I am using the tensorflow object detection API.
I have performed the training on the remote server GPU and saved the frozen model and checkpoints.
After that I took the frozen model along with the checkpoints, copied them to my local machine, and then performed testing on my test data using the script "object_detection_tutorial.ipynb".
When I run the cell "Load a (frozen) Tensorflow model into memory", it gives an invalid argument error.
Can you please explain what the issue is when running the saved model on my local machine? Is it necessary that the training and testing be on the same machine? I encountered the following error:
InvalidArgumentError                          Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict, producer_op_list)
    417     results = c_api.TF_GraphImportGraphDefWithResults(
--> 418         graph._c_graph, serialized, options)  # pylint: disable=protected-access
    419     results = c_api_util.ScopedTFImportGraphDefResults(results)

InvalidArgumentError: NodeDef mentions attr 'T' not in Op selected_indices:int32>; NodeDef: {{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/non_max_suppression/NonMaxSuppressionV3}} = NonMaxSuppressionV3[T=DT_FLOAT](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/unstack, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Reshape, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Minimum, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/non_max_suppression/iou_threshold, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/non_max_suppression/score_threshold). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

During handling of the above exception, another exception occurred:

ValueError                                    Traceback (most recent call last)
in
      5 serialized_graph = fid.read()
      6 od_graph_def.ParseFromString(serialized_graph)
----> 7 tf.import_graph_def(od_graph_def, name='')

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    486     'in a future version' if date is None else ('after %s' % date),
    487     instructions)
--> 488     return func(*args, **kwargs)
    489   return tf_decorator.make_decorator(func, new_func, 'deprecated',
    490       _add_deprecated_arg_notice_to_docstring(

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict, producer_op_list)
    420   except errors.InvalidArgumentError as e:
    421     # Convert to ValueError for backwards compatibility.
--> 422     raise ValueError(str(e))
    423
    424   # Create _DefinedFunctions for any imported functions.

ValueError: NodeDef mentions attr 'T' not in Op selected_indices:int32>; NodeDef: {{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/non_max_suppression/NonMaxSuppressionV3}} = NonMaxSuppressionV3[T=DT_FLOAT](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/unstack, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Reshape, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/Minimum, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/non_max_suppression/iou_threshold, Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/non_max_suppression/score_threshold). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.). | 0 | 1 | 300
0 | 54,056,410 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-01-05T21:18:00.000 | 1 | 4 | 0 | Does installing Python also install libraries like scipy and numpy? | 54,056,362 | 0.049958 | python | If you copied your data from your previous computer to this one, you may have copied the python installation (and thereby the libraries you had installed before) in your appdata folder.
Another possibility is that you have installed Anaconda, which is targeted especially at scientific computing and comes with numpy, scipy and some other packages preinstalled. | I just got a new computer, and I was installing some Python libraries. When I tried to install numpy, I got a message on the console saying numpy was already downloaded. I went into the library folder, and not only was numpy there, but scipy, matplotlib, and a bunch of other libraries as well. How is this possible, considering this computer is brand new? I had installed Python the previous evening, so does installing Python automatically install these libraries as well? | 0 | 1 | 379
0 | 54,056,390 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-01-05T21:18:00.000 | 1 | 4 | 0 | Does installing Python also install libraries like scipy and numpy? | 54,056,362 | 0.049958 | python | Python does not ship with these libraries unless you are using a pre-packaged distribution such as Anaconda. | I just got a new computer, and I was installing some Python libraries. When I tried to install numpy, I got a message on the console saying numpy was already downloaded. I went into the library folder, and not only was numpy there, but scipy, matplotlib, and a bunch of other libraries as well. How is this possible, considering this computer is brand new? I had installed Python the previous evening, so does installing Python automatically install these libraries as well? | 0 | 1 | 379 |
0 | 54,056,384 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2019-01-05T21:18:00.000 | 1 | 4 | 0 | Does installing Python also install libraries like scipy and numpy? | 54,056,362 | 0.049958 | python | Although this is not the place for these types of questions, yes, there is no need to install libraries, as most of the times when you download Python in a distribution, such as Anaconda, they are also included. | I just got a new computer, and I was installing some Python libraries. When I tried to install numpy, I got a message on the console saying numpy was already downloaded. I went into the library folder, and not only was numpy there, but scipy, matplotlib, and a bunch of other libraries as well. How is this possible, considering this computer is brand new? I had installed Python the previous evening, so does installing Python automatically install these libraries as well? | 0 | 1 | 379 |
0 | 54,060,749 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-01-06T09:25:00.000 | 0 | 1 | 0 | Python K-Means clustering and maximum distance | 54,060,208 | 0 | python,scikit-learn,cluster-analysis | Use hierarchical clustering.
With complete linkage.
Finding the true minimum cover is NP hard. So you don't want to do this. But this should produce a fairly good approximation in "just" O(n³).
This is basic knowledge. When looking for a clustering algorithm, at least read the Wikipedia article. Better even some book, to get an overview. There is not just k-means... | I would like to start by saying that my knowledge of clustering techniques is extremely limited, please don’t shoot me down too harshly.
I have a sizable set of 3D points (around 8,000) - think of X, Y, Z triplets, where the Z coordinate represents a point underground in the earth (negative). I would like to cluster these points using the absolute minimum number of clusters, with the following constraints:
Use the least number of clusters
All points should be included in the clustering, which means that any point should at least belong to one cluster
The maximum distance between any point and the cluster centroid (shifted at Z=0, on the earth surface) should not exceed a certain fixed distance d.
I was thinking of using the scikit-learn k-means approach, iteratively incrementing the number of clusters and then, for all points in the dataset, checking whether the distance between the point and its cluster centroid (at Z=0) is less than the specified distance.
Of course, I am open to better/more efficient suggestions - the clusters, for example, do not need to be circular as the ones returned by k-means. They can be ellipses or anything else, as long as the constraints above are satisfied.
I welcome any suggestion, thank you for your insights. | 0 | 1 | 1,912 |
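A minimal SciPy sketch of the complete-linkage suggestion in the answer above; points is assumed to be an (N, 3) NumPy array and d the maximum allowed distance to the centroid. Cutting a complete-linkage dendrogram at height d bounds each cluster's diameter by d, which in turn bounds every point's distance to its cluster centroid by d.

from scipy.cluster.hierarchy import linkage, fcluster

xy = points[:, :2]                                 # cluster on the surface coordinates (Z shifted to 0)
Z = linkage(xy, method='complete')
labels = fcluster(Z, t=d, criterion='distance')    # every resulting cluster has diameter <= d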
0 | 54,090,809 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-01-07T09:06:00.000 | 0 | 2 | 0 | How to fix upload csv file in bigquery using python | 54,071,304 | 0 | python,google-cloud-platform,google-bigquery,google-cloud-storage | Thanks to all for a response.
Here is my solution to this problem:
with open('/path/to/csv/file', 'r') as f:
    text = f.read()
converted_text = text.replace('"', "'")   # replace double quotes with single quotes
print(converted_text)
with open('/path/to/csv/file', 'w') as f:
    f.write(converted_text) | While uploading a CSV file to BigQuery through Cloud Storage, I am getting the error below:
CSV table encountered too many errors, giving up. Rows: 5; errors: 1. Please look into the error stream for more details.
In the schema, I am using STRING for all fields.
In the CSV file, I have the data below:
It's Time. Say "I Do" in my style.
I am not able to upload a CSV file to BigQuery containing the above sentence. | 0 | 1 | 1,178
0 | 54,081,952 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2019-01-07T20:59:00.000 | 1 | 2 | 0 | np.linalg.qr(A) or scipy.linalg.orth(A) for finding the orthogonal basis (python) | 54,081,800 | 1.2 | python,numpy,matrix,vector | Note that sp.linalg.orth uses the SVD while np.linalg.qr uses a QR factorization. Both factorizations are obtained via wrappers for LAPACK functions.
I don't think there is a strong preference for one over the other. The SVD will be slightly more stable but also a bit slower to compute. In practice I don't think you will really see much of a difference. | If I have a vector space spanned by five vectors v1....v5, to find the orthogonal basis for A where A=[v1,v2...v5] and A is 5Xn
should I use np.linalg.qr(A) or scipy.linalg.orth(A)??
Thanks in advance | 0 | 1 | 4,408 |
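A tiny sketch showing both routes mentioned above; the five vectors are random placeholders.

import numpy as np
from scipy.linalg import orth

vectors = [np.random.rand(5) for _ in range(5)]
A = np.column_stack(vectors)        # vectors as columns

Q, R = np.linalg.qr(A)              # columns of Q: orthonormal basis via QR
B = orth(A)                         # columns of B: orthonormal basis via SVD (rank-revealing)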
0 | 54,087,388 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-08T07:32:00.000 | 1 | 1 | 0 | In gradient checking, do we add/subtract epsilon (a tiny value) to both theta and constant parameter b? | 54,087,106 | 1.2 | python,neural-network,backpropagation,gradient-descent | You should do it regardless, even for constants. The reason is simple: being constants, you know their gradient is zero, so you still want to check you "compute" it correctly. You can see it as an additional safety net | I've been doing Andrew Ng's DeepLearning AI course (course 2).
For the exercise in gradient checking, he implements a function converting a dictionary containing all of the weights (W) and constants (b) into a single, one-hot encoded vector (of dimensions 47 x 1).
The starter code then iterates through this vector, adding epsilon to each entry in the vector.
Does gradient checking generally include adding epsilon/subtracting to the constant as well? Or is it simply for convenience, as constants play a relatively small role in the overall calculation of the cost function? | 0 | 1 | 222 |
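A small NumPy sketch of the centred-difference check discussed above, run over every entry of the flattened parameter vector, the b entries included; cost is assumed to be a function of that flattened vector.

import numpy as np

def gradient_check(cost, theta, analytic_grad, eps=1e-7):
    num_grad = np.zeros_like(theta)
    for i in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        num_grad[i] = (cost(plus) - cost(minus)) / (2 * eps)
    # small relative difference means backprop and the numeric gradient agree
    return np.linalg.norm(num_grad - analytic_grad) / (np.linalg.norm(num_grad) + np.linalg.norm(analytic_grad))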
0 | 54,102,609 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-08T17:44:00.000 | 1 | 1 | 0 | TF-IDF + Multiple Regression Prediction Problem | 54,097,067 | 1.2 | python,scikit-learn,nlp,regression,prediction | As you mentioned, you can only do so much with the body of text, which reflects how much influence the text actually has on selling the cars.
Even though the model gives very poor prediction accuracy, you could go ahead and look at the feature importances to understand which words drive the sales.
Include phrases in your tf-idf vectorizer by setting the ngram_range parameter to (1,2).
This might give you a small indication of which phrases influence the sale of a car.
I would also suggest setting the norm parameter of the tf-idf vectorizer to None, to check whether it has an influence. By default, it applies the l2 norm.
The results also depend on the model you are using, so try changing the model as a last option. | I have a dataset of ~10,000 rows of vehicles sold on a portal similar to Craigslist. The columns include price, mileage, no. of previous owners, how soon the car gets sold (in days), and most importantly a body of text that describes the vehicle (e.g. "accident free, serviced regularly").
I would like to find out which keywords, when included, will result in the car getting sold sooner. However I understand how soon a car gets sold also depends on the other factors especially price and mileage.
Running a TfidfVectorizer in scikit-learn resulted in very poor prediction accuracy. Not sure if I should try including price, mileage, etc. in the regression model as well, as it seems pretty complicated. Currently am considering repeating the TF-IDF regression on a particular segment of the data that is sufficiently huge (perhaps Toyotas priced at $10k-$20k).
The last resort is to plot two histograms, one of vehicle listings containing a specific word/phrase and another for those that do not. The limitation here would be that the words that I choose to plot will be based on my subjective opinion.
Are there other ways to find out which keywords could potentially be important? Thanks in advance. | 0 | 1 | 406 |
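A minimal scikit-learn sketch of the suggestions in the answer above (bigrams, norm=None) plus a simple way to read off which phrases are associated with faster sales; the column names are assumptions about the asker's dataframe.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

vec = TfidfVectorizer(ngram_range=(1, 2), norm=None, min_df=5, stop_words='english')
X = vec.fit_transform(df['description'])
model = Ridge().fit(X, df['days_to_sell'])

# most negative coefficients -> phrases associated with fewer days to sell
fastest = sorted(zip(model.coef_, vec.get_feature_names()))[:20]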
0 | 54,102,957 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-01-09T03:32:00.000 | 0 | 2 | 0 | No module named... Issue | 54,102,868 | 0 | python | TensorFlow is not supported by Python 3.7; it is only supported by 3.6. Use a virtual environment to deal with multiple Python versions. | I'm trying to get started on machine learning by installing TensorFlow; however, it's only supported by Python 3.6.x as of now.
I guess you can say this was a failed attempt to downgrade python.
My installed version of python is 3.7.2 which has all my modules installed.
I just installed Python 3.6.8.
The IDE i use is Visual Studio Code
However now when i use Python 3.7.2 in Visual Studio Code, I get an error saying no module named... was found | 0 | 1 | 61 |
0 | 55,400,526 | 0 | 1 | 0 | 0 | 1 | true | 8 | 2019-01-09T16:18:00.000 | 4 | 1 | 0 | Python "See help(type(self)) for accurate signature." | 54,114,270 | 1.2 | python,documentation,docstring | There is a convention that the signature for constructing a class instance is put in the __doc__ on the class (since that is what the user calls) rather than on __init__ (or __new__) which determines that signature. This is especially true for extension types (written in C) whose __init__ cannot have its signature discovered via introspection.
The message that you see is part of the type class (see help(type.__init__)) and is thus inherited by metaclasses by default.
In some versions, scipy.stats.binom confuses the matter by not actually being a type; it is merely an instance of another class that (like type) is callable. So asking for help on it merely gives the help for that class (just like help(1) gets you help(int))—you have to look at its __call__ for further information (if any). And asking for help on the result of calling it gives you help for the actual class of whatever it returns, as you observed. | I have seen the following statement in a number of docstrings when help()ing a class: "See help(type(self)) for accurate signature."
Notably, it is in the help() for scipy.stats.binom.__init__ and for stockfish.Stockfish.__init__ at the very least. I assume, therefore, that it is some sort of stock message.
In any case, I can't figure out what the heck it means. Is this useful information? Note that, being "outside" of the class, so to speak, I never have access to self. Furthermore, it is impossible to instantiate a class if I cannot access the signature of the __init__ method, and can therefore not even do help(type(my_object_instantiated)). Its a catch 22. In order to use __init__, I need the signature for __init__, but in order to read the signature for __init__, I need to instantiate an object with __init__. This point is strictly academic however, for even when I do manage to instantiate a scipy.stats.binom, it actually returns an object of an entirely different class, rv_frozen, with the exact same message in its __init__ docstring, but whose signature is entirely different and entirely less useful. In other words, help(type(self)) actually does not give an accurate signature. It is useless.
Does anyone know where this message comes from, or what I'm supposed to make of it? Is it just stock rubbish from a documentation generator, or am I user-erroring? | 0 | 1 | 1,623 |
0 | 54,129,738 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-01-10T13:02:00.000 | 1 | 2 | 0 | installed pandas but still can't import it | 54,129,321 | 1.2 | python-3.x,pandas | If you're using pycharm you can go to File -> Settings -> Project -> Project Interpreter.
There you'll get a list of all the packages installed with the current python that pycharm is using. There is a '+' sign on the right of the window that you can use to install new packages, just enter pandas there. | I already installed it with pip3 install pandas, and I am using Python 3.7, but when I try to import pandas and run the code, the error below pops up.
Traceback (most recent call last):
  File "/Users/barbie/Python/Test/test.py", line 1, in
    import pandas as pd
ModuleNotFoundError: No module named 'pandas'
and if I try to install again.. it says this.
pip3 install pandas
Requirement already satisfied: pandas in /usr/local/lib/python3.7/site-packages (0.23.4)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.7/site-packages (from pandas) (2018.9)
Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python3.7/site-packages (from pandas) (1.15.4)
Requirement already satisfied: python-dateutil>=2.5.0 in /usr/local/lib/python3.7/site-packages (from pandas) (2.7.5)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil>=2.5.0->pandas) (1.12.0) | 0 | 1 | 6,513
0 | 54,131,475 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-01-10T14:05:00.000 | 0 | 2 | 0 | Scipy Weibull parameter confidence intervals | 54,130,419 | 0 | python,scipy,weibull | You could use scipy.optimize.curve_fit to fit the weibull distribution to your data. This will also give you the covariance and thus you can estimate the error of the fitted parameters. | I've been using Matlab to fit data to a Weibull distribution using [paramhat, paramci] = wblfit(data, alpha). This gives the shape and scale parameters for a Weibull distribution as well as the confidence intervals for each value.
I'm trying to use SciPy to accomplish the same task and can easily get the parameters with scipy.stats.weibull_min.fit, but I cannot figure out a way to get the confidence intervals on the values. Does SciPy offer this functionality? Or do I need to write the MLE confidence interval estimation myself? | 0 | 1 | 496
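A rough curve_fit sketch of the answer above: fit the Weibull CDF to the empirical CDF and turn the covariance into approximate confidence intervals; data is assumed to be a 1-D NumPy array of positive observations, and this is only an approximation to MATLAB's wblfit intervals.

import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, shape, scale):
    return 1.0 - np.exp(-(x / scale) ** shape)

x = np.sort(data)
ecdf = np.arange(1, len(x) + 1) / len(x)
popt, pcov = curve_fit(weibull_cdf, x, ecdf, p0=[1.0, x.mean()])
perr = np.sqrt(np.diag(pcov))                                    # 1-sigma errors
ci = [(p - 1.96 * e, p + 1.96 * e) for p, e in zip(popt, perr)]  # ~95% intervals for shape, scale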
0 | 62,532,493 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-01-10T14:20:00.000 | 0 | 3 | 0 | Role of activation function in calculating the cost function for artificial neural networks | 54,130,706 | 0 | python,neural-network,activation-function | -A cost function is a measure of error between what value your model predicts and what the value actually is. For example, say we wish to predict the value yi for data point xi . Let fθ(xi) represent the prediction or output of some arbitrary model for the point xi with parameters θ . One of many cost functions could be
∑_{i=1}^{n} (y_i − f_θ(x_i))^2
this function is known as the L2 loss. Training the hypothetical model we stated above would be the process of finding the θ that minimizes this sum.
-An activation function transforms the shape/representation of the data going into it. A simple example could be max(0,xi) , a function which outputs 0 if the input xi is negative or xi if the input xi is positive. This function is known as the “ReLU” or “Rectified Linear Unit” activation function. The choice of which function(s) are best for a specific problem using a particular neural architecture is still under a lot of discussion. However, these representations are essential for making high-dimensional data linearly separable, which is one of the many uses of a neural network.
I hope this gave a decent idea of what these things are. If you wish to learn more, I suggest you go through Andrew Ng’s machine learning course on Coursera. It provides a wonderful introductory look into the field. | I have some difficulty with understanding the role of activation functions and cost functions. Lets take a look at a simple example. Lets say I am building a neural network (artificial neural network). I have 5 „x“ variables and one „y“ variable.
If I do usual feature scaling and then apply, for example, Relu activation function in hidden layer, then this activation function does the transformation and as a result we get our predicted output value (y hat) between 0 and lets say M. Then the next step is to calculate the cost function.
In calculating the cost function, however, we need to compare the output value (y hat) with the actual value (y).
The question is how we can compare transformed output value (y hat) which is lets say between 0 and M with the untransformed actual value (y) (which can be any number as it is not been subjected to the Relu activation function) to calculate the cost function? There can be a large mismatch as one variable has been exposed to transformation and the other has not been.
Thank you for any help. | 0 | 1 | 179 |
0 | 54,130,955 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-01-10T14:20:00.000 | -1 | 3 | 0 | Role of activation function in calculating the cost function for artificial neural networks | 54,130,706 | -0.066568 | python,neural-network,activation-function | The value you're comparing your actual results to for the cost function doesn't (intrinsically) have anything to do with the input you used to get the output. It doesn't get transformed in any way.
Your expected value is [10,200,3] but you used Softmax on the output layer and RMSE loss? Well, too bad, you're gonna have a high cost all the time (and the model probably won't converge).
It's just on you to use the right cost functions to serve as a sane heuristic for evaluating the model performance and the right activations to be able to get sane outputs for the task at hand. | I have some difficulty with understanding the role of activation functions and cost functions. Lets take a look at a simple example. Lets say I am building a neural network (artificial neural network). I have 5 „x“ variables and one „y“ variable.
If I do usual feature scaling and then apply, for example, Relu activation function in hidden layer, then this activation function does the transformation and as a result we get our predicted output value (y hat) between 0 and lets say M. Then the next step is to calculate the cost function.
In calculating the cost function, however, we need to compare the output value (y hat) with the actual value (y).
The question is how we can compare transformed output value (y hat) which is lets say between 0 and M with the untransformed actual value (y) (which can be any number as it is not been subjected to the Relu activation function) to calculate the cost function? There can be a large mismatch as one variable has been exposed to transformation and the other has not been.
Thank you for any help. | 0 | 1 | 179 |
0 | 54,142,820 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-11T08:09:00.000 | 0 | 2 | 0 | How can I read a file having different column for each rows? | 54,142,589 | 0 | python,jupyter-notebook | Use something like this to split it
split2 = []
with open('data.txt') as f:        # assuming the rows are in a plain text file
    txt = f.read()
split1 = txt.split("\n")           # one string per row
for item in split1:
    split2.append(item.split())    # whitespace split -> list of column values per row | my data looks like this.
0 199 1028 251 1449 847 1483 1314 23 1066 604 398 225 552 1512 1598
1 1214 910 631 422 503 183 887 342 794 590 392 874 1223 314 276 1411
2 1199 700 1717 450 1043 540 552 101 359 219 64 781 953
10 1707 1019 463 827 675 874 470 943 667 237 1440 892 677 631 425
How can I read this file structure in Python? I want to extract a specific column from the rows. For example, if I want to extract the value in the second row, second column, how can I do that? I've tried loadtxt with a string dtype, but that requires string index slicing, so I could not proceed because the columns have different numbers of digits. Moreover, each row has a different number of columns. Can you guys help me?
Thanks in advance. | 0 | 1 | 55 |
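A short sketch matching the answer above for a ragged whitespace-separated file; 'data.txt' is a placeholder filename.

rows = []
with open('data.txt') as f:
    for line in f:
        rows.append(line.split())      # whitespace split copes with uneven spacing and row lengths

print(rows[1][1])                      # second row, second column (as a string)
value = int(rows[1][1])                # convert when a number is needed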
0 | 69,165,691 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-01-11T10:43:00.000 | 1 | 3 | 0 | How can I convert unicode to string of a dataframe column? | 54,144,887 | 0.066568 | python,apache-spark,pyspark,pyspark-sql,unicode-string | Since it's a string, you could remove the first and last characters:
From '[23,4,77,890,455]' to '23,4,77,890,455'
Then apply the split() function to generate an array, taking , as the delimiter. | I have a spark dataframe which has a column 'X'. The column contains elements which are in the form:
u'[23,4,77,890,455,................]'
How can I convert this unicode string to a list? That is, my output should be
[23,4,77,890,455...................]
I have to apply this to each element in the 'X' column.
I have tried df.withColumn("X_new", ast.literal_eval(x)) and got the error "Malformed String".
I also tried df.withColumn("X_new", json.loads(x)) and got the error "Expected String or Buffer".
I also tried df.withColumn("X_new", json.dumps(x)), which says JSON not serialisable.
And df_2 = df.rdd.map(lambda x: x.encode('utf-8')), which says rdd has no attribute encode.
I don't want to use collect and toPandas() because they are memory consuming (but if that's the only way, please do tell). I am using PySpark.
Update: cph_sto gave the answer using a UDF. Though it worked well, I find that it is slow. Can somebody suggest any other method? | 0 | 1 | 8,294
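A column-only sketch of the strip-and-split idea in the answer above, which avoids both a Python UDF and collect(); it assumes the 'X' values are strings like '[23,4,77]'.

from pyspark.sql import functions as F

df2 = df.withColumn(
    "X_new",
    F.split(F.regexp_replace("X", r"[\[\] ']", ""), ",").cast("array<int>")
)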
0 | 54,145,335 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-11T11:02:00.000 | 1 | 1 | 0 | How to align training and test set when using pandas `get_dummies` with `drop_first=True`? | 54,145,226 | 0.197375 | python,machine-learning,sklearn-pandas,one-hot-encoding | When not using drop_first=True you have two options:
Perform the one-hot encoding before splitting the data in training and test set. (Or combine the data sets, perform the one-hot encoding, and split the data sets again).
Align the data sets after one-hot encoding: an inner join removes the features that are not present in one of the sets (they would be useless anyway). train, test = train.align(test, join='inner', axis=1)
You noted (correctly) that method 2 may not do what you expect because you are using drop_first=True. So you are left with method 1. | I have a data set from a telecom company with lots of categorical features. I used the pandas.get_dummies method to convert them into one-hot encoded format with the drop_first=True option. Now, how can I use the predict function? The test input data needs to be encoded in the same way, but the drop_first=True option also dropped some columns, so how can I ensure that the encoding takes place in the same fashion?
Data set shape before encoding : (7043, 21)
Data set shape after encoding : (7043, 31) | 0 | 1 | 1,013 |
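A small pandas sketch of option 1 from the answer above (encode once, then split back); train and test are assumed to be the raw frames.

import pandas as pd

combined = pd.concat([train, test], keys=['train', 'test'])   # one frame, so dummy columns match
encoded = pd.get_dummies(combined, drop_first=True)
train_enc = encoded.xs('train')
test_enc = encoded.xs('test')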
0 | 54,777,689 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-11T13:02:00.000 | 0 | 1 | 1 | Dask.distributed cluster administration | 54,147,096 | 0 | python,dask,dask-distributed | Usually people use a cluster manager like Kubernetes, Yarn, SLURM, SGE, PBS or something else. That system handles user authentication, resource management, and so on. A user then uses the one of the Dask-kubernetes, Dask-yarn, Dask-jobqueue projects to create their own short-lived scheduler and workers on the cluster on an as-needed basis. | I'm setting up Dask Python cluster at work (30 machines, 8 cores each in average). People use only a portion of their CPU power, so dask-workers will be running on background at low priority. All workers are listening to dask-scheduler on my master node. It works perfect if only I who use it, however it's gonna be used by several people in a concurrent manner - so i need to be able to admin this cluster:
Authenticate users, reject unknowns
Identify who submitted which jobs
Restrict number of submitted jobs per user
Restrict timeout for computation per job
Kill any job as admin
dask.distributed out of the box provides little of the functionality described above. Could you please advise on a solution (maybe a hybrid of Dask + something else)? | 0 | 1 | 101
0 | 54,161,780 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-12T01:03:00.000 | 0 | 1 | 0 | How can I train dlib shape predictor using a very large training set | 54,155,910 | 0 | python,computer-vision,face-recognition,dlib | I posted this as an issue on the dlib github and got this response from the author:
It's not reasonable to change the code to cycle back and forth between disk and ram like that. It will make training very slow. You should instead buy more RAM, or use smaller images.
As designed, large training sets need tons of RAM. | I'm trying to use the python dlib.train_shape_predictor function to train using a very large set of images (~50,000).
I've created an xml file containing the necessary data, but it seems like train_shape_predictor loads all the referenced images into RAM before it starts training. This leads to the process getting terminated because it uses over 100gb of RAM. Even trimming down the data set uses over 20gb (machine only has 16gb physical memory).
Is there some way to get train_shape_predictor to load images on demand, instead of all at once?
I'm using python 3.7.2 and dlib 19.16.0 installed via pip on macOS. | 0 | 1 | 738 |
0 | 57,248,756 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-01-12T16:15:00.000 | 2 | 1 | 0 | Python3.6 add audio to cv2 processed video | 54,161,418 | 0.379949 | python-3.x,opencv,audio,video-processing,cv2 | Stephen Meschke is right ! Use FFMPEG to extract and import audio.
Type in cmd:
Extract audio:
ffmpeg -i yourvideo.avi -f mp3 -ab 192000 -vn sound.mp3
Import audio:
ffmpeg -i yourvideo.avi -i sound.mp3 -c copy -map 0:v:0 -map 1:a:0 output.avi | I have a code that takes in a video then constructs a list of frames from that video. then does something with each frame then put the frames back together into cv2 video writer. However, when the video is constructed again, it loses all its audio. | 0 | 1 | 2,044 |
0 | 56,676,328 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-13T01:16:00.000 | 3 | 2 | 0 | What is the Keras 2.0 equivalent of `similarity = keras.layers.merge([target, context], mode='cos', dot_axes=0)` | 54,165,333 | 1.2 | python,tensorflow,keras | I tried with:
similarity = dot([target, context], axes=1, normalize=True) | Keras 2.0 has removed keras.layers.merge and now we should use keras.layers.Concatenate,
I was wonder what is the equivalent to having the 'cos' and 'dot_axis=0' arg, for example
similarity = keras.layers.merge([target, context], mode='cos', dot_axes=0)
How would I write that in keras 2.0? | 0 | 1 | 787 |
0 | 54,192,306 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-13T06:55:00.000 | 2 | 1 | 0 | Make the data from the second column stay at the second column | 54,166,726 | 1.2 | python,reportlab | If I get your question correct, the problem is that you use a spacer to control the contents' visual placement in two columns/frames. By this, you see it as a single long column split in two, meanwhile you need to see it as two separate columns (two separate frames).
Therefore you will get greater control if you end the first frame (with FrameBreak()) before you start filling the other, and only use the spacer to control any visual design within the same frame.
Tools you need to be aware of are:
FrameBreak(), if you search for it you will find many code examples.
e.g. you fill frame 1 with 10 lines of text, then you insert a FrameBreak() and instruct the script to start filling the second column.
Another tool you should be aware of is the settings used e.g for BaseDocTemplate:
allowSplitting: If set to 1, flowables (eg, paragraphs) may be split across frames or pages. If 0, you force content into the same frame. (default: 1, disabled with 0). | I'm making a form using reportlab and its in two columns. The second columns is just a copy of the first column.
I used Frame() function to create two columns and I used a Spacer() function to separate the original form from the copied form into two columns.
My expected result is to make the data from the second column stay in place. But the result that I'm getting is when the data from the first columns gets shorter the second columns starts shifting up and moves to the first column. | 0 | 1 | 44 |
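A minimal two-frame sketch of the FrameBreak() approach described in the answer above; the page geometry and text are placeholders and untested against the asker's exact layout.

from reportlab.lib.pagesizes import A4
from reportlab.lib.units import cm
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import BaseDocTemplate, Frame, PageTemplate, Paragraph, FrameBreak

styles = getSampleStyleSheet()
doc = BaseDocTemplate('two_columns.pdf', pagesize=A4)
col_width = (doc.width - 1 * cm) / 2
left = Frame(doc.leftMargin, doc.bottomMargin, col_width, doc.height)
right = Frame(doc.leftMargin + col_width + 1 * cm, doc.bottomMargin, col_width, doc.height)
doc.addPageTemplates([PageTemplate(id='TwoColumns', frames=[left, right])])

story = [Paragraph('Original form, line %d' % i, styles['Normal']) for i in range(10)]
story.append(FrameBreak())      # jump to the second frame regardless of how full the first one is
story += [Paragraph('Copy of form, line %d' % i, styles['Normal']) for i in range(10)]
doc.build(story)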
0 | 54,260,769 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-14T02:41:00.000 | 0 | 1 | 0 | Can we use scipy to do faster LU decomposition for band matrices? | 54,175,192 | 0 | python,scipy,linear-algebra | Lapack's *gbsv routine computes the LU decomp of an input banded matrix.
From python, you can use either its f2py wrapper (see e.g. the source of scipy.linalg.solve_banded for example usage) or drop to Cython and use scipy.linalg.cython_lapack bindings. | We know that elimination requires roughly 1/3 n^3 operations, and if we use LU decomposition stored in memory, it is reduced to n^2 operations. If we have a band matrix with w upper and lower diagonals, we can skip the zeros and bring it down to about nw^2 operations, and if we use LU decomposition, it can be done in about 2nw operations.
In scipy.linalg, we have lu_factor and lu_solve, but they do not seem to be optimized for band matrices. We also have solve_banded, but it directly solves Ax=b. How can we do an efficient LU decomposition for banded matrices and efficiently perform forward and backward elimination with banded triangular L and U? | 0 | 1 | 407 |
0 | 54,182,847 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-14T13:41:00.000 | 1 | 2 | 0 | Convert grayscale png to RGB png image | 54,182,675 | 0.099668 | python,rgb,grayscale,medical,image-preprocessing | GIMP, Menu image -> Mode -> RGB mode | I have a dataset of medical images in grayscale Png format which must be converted to RGB format. Tried many solutions but in vain. | 0 | 1 | 1,046 |
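A scripted alternative to doing the conversion by hand in GIMP, using Pillow (an assumption, not part of the answer above); the folder pattern is a placeholder and the files are overwritten in place.

import glob
from PIL import Image

for path in glob.glob('dataset/*.png'):
    Image.open(path).convert('RGB').save(path)   # replace the grayscale PNG with an RGB copy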
0 | 54,188,845 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-01-14T19:17:00.000 | 1 | 1 | 0 | Python multiprocesing and NLTK wordnet path similarity | 54,187,798 | 0.197375 | python,nltk,python-multiprocessing,pool,wordnet | It is very likely that the module in separate processes attempts to access the very same file with Wordnet data. This would result in dependence on GIL to access the file or OS-level file locks use. Both cases would explain the behaviour you are observing. | I am using multiprocessing pool to speed up the title extraction process on a text corpus. At one stage of the code, I am using wordnet path similarity module to determine the similarity of two words.
If i run my code sequentially i.e. without the use of multiprocessing pool, I get normal times in calculating this path similarity. However, when I use multiprocessing to process multiple documents simultaneously, I observe great time delays in computing this path similarity as compared to sequential.
Question: Does NLTK show any problems with multiprocessing module ? | 0 | 1 | 173 |
0 | 54,777,735 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-14T19:53:00.000 | 1 | 1 | 0 | joblib parallel_backend with dask resources | 54,188,251 | 0.197375 | python,dask,joblib | As of 2019-02-19, there is no way to do this. | Whenever I submit a dask task, I can specify the requisite resources for that task. e.g. client.submit(process, d, resources={'GPU': 1})
However, If I abstract my dask scheduler away as a joblib.parallel_backend, it is not clear how to specify resources when I do so.
How do I call joblib.parallel_backend('dask') and still specify requisite resources? | 0 | 1 | 193 |
0 | 54,190,671 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-01-14T22:49:00.000 | 0 | 2 | 0 | How to decrypt a columnar transposition cipher | 54,190,370 | 1.2 | python,encryption,cryptography | I figured it out. Once you know the number of rows and columns, you can write the ciphertext into the rows, then permute the rows according to the key. Please correct if my explanation is wrong. The plain text is "execlent work you have cracked the code" | My question is not one of coding per say, but of understanding the algorithm.
Conceptually I understand how the column transposition deciphers text with a constant key value for example 10.
My confusion occurs, when the key is a permutation. For example key = [2,4,6,8,10,1,3,5,7,9] and a message like "XOV EK HLYR NUCO HEEEWADCRETL CEEOACT KD". The part where I'm confused is writing the cipher text into rows, then permuting the row according to the key.
Can someone please provide some clarification on this. | 0 | 1 | 1,074 |
0 | 54,197,292 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-15T06:52:00.000 | 1 | 1 | 0 | Good resources for video processing in Python? | 54,193,968 | 1.2 | python,opencv,video,video-streaming,video-processing | Of course are the alternatives to OpenCV in python if it comes to video capture but in my experience none of them preformed better | I am using the yolov3 model running on several surveillance cameras. Besides this I also run tensorflow models on these surveillaince streams. I feel a little lost when it comes to using anything but opencv for rtsp streaming.
So far I haven't seen people use anything but opencv in python. Are there any places I should be looking into. Please feel free to chime in.
Sorry if the question is a bit vague, but I really don't know how to put this better. Feel free to edit mods. | 0 | 1 | 51 |
0 | 54,209,309 | 0 | 0 | 0 | 1 | 1 | false | 2 | 2019-01-15T06:54:00.000 | 0 | 4 | 0 | Automate File loading from s3 to snowflake | 54,193,979 | 0 | python,amazon-s3,snowflake-cloud-data-platform | There are some aspects to be considered, such as whether it is batch or streaming data, whether you want to retry loading the file in case of wrong data or format, and whether you want to make it a generic process able to handle different file formats/types (csv/json) and stages.
In our case we have built a generic S3-to-Snowflake load using Python and Luigi, and have also implemented the same using SSIS, but for csv/txt files only. | New JSON files are dumped into an S3 bucket daily; I have to create a solution which picks up the latest file when it arrives, parses the JSON, and loads it into the Snowflake data warehouse. Could someone please share your thoughts on how we can achieve this? | 0 | 1 | 1,762
0 | 54,207,491 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-15T19:46:00.000 | 1 | 1 | 0 | Save the exact state of Tensorflow model, random state, and Datasets API pointer for debugging | 54,205,857 | 0.197375 | python,tensorflow | From my personal experience I would approach it in the following ways.
Running the code with the -i flag (python -i), which takes you to the interpreter with the state preserved at the moment the script stops, OR (even better) calling the problematic parts of the code from a Jupyter notebook, which will also preserve the state after the exception is raised, so you can investigate the problem more easily. If the problem is inside a function, you could catch the exception and return all relevant objects. Or you could put your functions inside a class to have a single object, instantiate and run it from Jupyter, and when the problem occurs you will have all variables inside that class object.
Adding assert statements for the shapes of your data and for the shapes of your model variables/placeholders. For example, if you have some preprocessing/augmentation, add asserts before and after the preprocessing/augmentation to be sure that the shapes are as expected.
Taking a break. Sometimes you spend a lot of time and effort on something without success, but after having a rest you solve the problem immediately.
Good luck! | TLDR: Is there a way to freeze a Tensorflow model during runtime at time t1, such that running the network from time 0 to t2>t1 would lead to exactly the same results as running it from t1 to t2?
I have searched this quite a lot and couldn't find this exact scenario:
I have a tensorflow model which is receiving inputs through Datasets API from a list of TFRecords. At very random moments I get an error regarding tensor shape incompatibility and I'm trying to figure out why. I have changed the seeds, so that the code is reproducible, but it takes about 30 minutes for the reproducible error to occur. What is the best strategy in such situations to debug the code faster?
What I have been trying has been to save a checkpoint at every iteration, hoping that by restoring the last one (right before the error) I'd be able to quickly reproduce the error later on and troubleshoot it. Unfortunately the random state and dataset api pointer get reset when I do this. Is there any way to fully store the state of a network during runtime (including its random number generator state and the Dataset API pointer), so that when it is restored the same outputs get reproduced? | 0 | 1 | 163 |
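A tiny illustration of the shape-assert suggestion in the answer above; the batch shape and the flip "augmentation" are invented for the example.

import numpy as np

def augment(batch):
    # stand-in augmentation step: horizontal flip of NHWC images
    return batch[:, :, ::-1, :]

batch = np.zeros((32, 224, 224, 3), dtype=np.float32)
assert batch.shape == (32, 224, 224, 3), batch.shape

out = augment(batch)
# assert again after preprocessing/augmentation, as the answer recommends
assert out.shape == batch.shape, (out.shape, batch.shape)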
0 | 54,207,278 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-01-15T21:38:00.000 | 0 | 1 | 0 | Tensorflow does not see gpu on pycharm | 54,207,221 | 0 | python-3.x,tensorflow,pycharm | Go to File -> Settings -> Project Interpreter and set the same python environment used by Anaconda. | Specifications:
System: Ubuntu 18.0.4
Tensorflow:1.9.0,
cudnn=7.2.1
Interpreter project: anaconda environment.
When I run the script on terminal with the same anaconda env, it works fine. Using pycharm, it does not work!! What is the issue ? | 0 | 1 | 94 |
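Not part of the answer above, but a quick sanity check you can run from whichever interpreter PyCharm ends up using, to confirm whether that environment's TensorFlow build (1.x API, as in the question) actually sees the GPU:

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
print(tf.test.is_gpu_available())       # True only if TF can actually use a GPU
print(device_lib.list_local_devices())  # lists every device TF sees (CPU and any GPUs)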
0 | 56,453,634 | 0 | 0 | 0 | 0 | 1 | false | 26 | 2019-01-16T03:37:00.000 | 0 | 3 | 0 | pd.read_hdf throws 'cannot set WRITABLE flag to True of this array' | 54,210,073 | 0 | python,pandas,pytables,hdf | It seems that date-time strings were causing the problem: when I converted these from text to datetime (pd.to_datetime()) and stored the table, the problem went away, so perhaps it has something to do with text data? | When running
pd.read_hdf('myfile.h5')
I get the following traceback error:
[[...some longer traceback]]
~/.local/lib/python3.6/site-packages/pandas/io/pytables.py in read_array(self, key, start, stop)
   2487
   2488         if isinstance(node, tables.VLArray):
-> 2489             ret = node[0][start:stop]
   2490         else:
   2491             dtype = getattr(attrs, 'value_type', None)
~/.local/lib/python3.6/site-packages/tables/vlarray.py in __getitem__(self, key)
~/.local/lib/python3.6/site-packages/tables/vlarray.py in read(self, start, stop, step)
tables/hdf5extension.pyx in tables.hdf5extension.VLArray._read_array()
ValueError: cannot set WRITEABLE flag to True of this array
No clue what's going on. I've tried reinstalling tables, pandas everything basically, but doesn't want to read it. | 0 | 1 | 15,113 |
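A minimal sketch of the workaround described in the answer above (convert date-time text before writing the HDF5 store); the column name and store key are made up for the example.

import pandas as pd

df = pd.DataFrame({"timestamp": ["2019-01-16 03:37:00", "2019-01-16 03:38:00"],
                   "value": [1.0, 2.0]})

# store the column as datetime64 instead of free text
df["timestamp"] = pd.to_datetime(df["timestamp"])

df.to_hdf("myfile.h5", key="data", mode="w", format="table")
print(pd.read_hdf("myfile.h5", "data"))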
0 | 68,094,492 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2019-01-17T08:50:00.000 | 1 | 1 | 0 | Big Data Load in Pandas Data Frame | 54,232,066 | 0.197375 | python-3.x,oracle,jupyter-notebook,bigdata | pandas is not a good fit if you have GBs of data; it would be better to use a distributed architecture to improve speed and efficiency. There is a library called Dask that can load large data and uses a distributed architecture. | As I am new to the Big Data platform, I would like to do some feature engineering work with my data. The database size is about 30-50 GB. Is it possible to load the full data (30-50 GB) into a data frame like a pandas data frame?
The database used here is Oracle. I tried to load it but I am getting an out-of-memory error. Furthermore, I would like to work in Python. | 0 | 1 | 224
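If you follow the Dask suggestion above, a hedged sketch of pulling an Oracle table in partitions could look like the following; the connection URI, table name and index column are placeholders, and sqlalchemy plus an Oracle driver are assumed to be installed.

import dask.dataframe as dd

ddf = dd.read_sql_table(
    "my_table",
    "oracle+cx_oracle://user:password@host:1521/?service_name=orcl",
    index_col="id",      # an indexed, preferably numeric column
    npartitions=50,      # data is read and processed chunk by chunk
)
print(ddf.head())        # only the first partition is pulled for the preview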
0 | 54,235,046 | 0 | 0 | 0 | 0 | 1 | true | 12 | 2019-01-17T08:51:00.000 | 16 | 1 | 1 | Dask: delayed vs futures and task graph generation | 54,232,080 | 1.2 | python,distributed-computing,dask | 1) Yup. If you're sending the data through a network, you have to have some way of asking the computer doing the computing for you how's that number-crunching coming along, and Futures represent more or less exactly that.
2) No. With Futures, you're executing the functions eagerly - spinning up the computations as soon as you can, then waiting for the results to come back (from another thread/process locally, or from some remote you've offloaded the job onto). The relevant abstraction here would be a Queue (a Priority Queue, specifically).
3) For a Delayed instance, for instance, you could do some_delayed.dask, or for an Array, Array.dask; optionally wrap the whole thing in either dict() or vars(). I don't know for sure if it's reliably set up this way for every single API, though (I would assume so, but you know what they say about what assuming makes of the two of us...).
4) The simplest analogy would probably be: Delayed is essentially a fancy Python yield wrapper over a function; Future is essentially a fancy async/await wrapper over a function. | I have a few basic questions on Dask:
Is it correct that I have to use Futures when I want to use dask for distributed computations (i.e. on a cluster)?
In that case, i.e. when working with futures, are task graphs still the way to reason about computations? If yes, how do I create them?
How can I generally, i.e. no matter if working with a future or with a delayed, get the dictionary associated with a task graph?
As an edit:
My application is that I want to parallelize a for loop either on my local machine or on a cluster (i.e. it should work on a cluster).
As a second edit:
I think I am also somewhat unclear regarding the relation between Futures and delayed computations.
Thx | 0 | 1 | 1,864 |
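A compact sketch of the distinctions drawn in the answer above: Delayed builds a lazy task graph you can inspect via .dask (point 3), while Futures submitted through a distributed Client start computing eagerly. The local Client() below stands in for a real cluster address.

import dask
from dask.distributed import Client

def inc(x):
    return x + 1

# Lazy: nothing executes until .compute()
parts = [dask.delayed(inc)(i) for i in range(4)]
total = dask.delayed(sum)(parts)
print(dict(total.dask))   # the underlying task graph as a plain dict
print(total.compute())    # 1 + 2 + 3 + 4 = 10

# Eager: futures start running on the cluster as soon as they are submitted
client = Client()                                   # or Client("tcp://scheduler:8786")
futures = [client.submit(inc, i) for i in range(4)]
print(client.submit(sum, futures).result())         # also 10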
0 | 54,235,779 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-01-17T12:10:00.000 | 0 | 2 | 0 | Access dict columns of a csv file in python pandas | 54,235,643 | 1.2 | python,python-3.x,pandas | You can only set the delimiter to one character, so you can't use square brackets in this way. You would need to use a single character such as " so that it knows to ignore the commas between the delimiters. | I have a dataset in a csv file in which one of the columns contains a list (or dict, which further includes several semicolons and commas because of the key, value pairs). Now the trouble is accessing it with Pandas: it returns mixed values because of the several commas in the list, which is in fact a single column.
I have seen several solutions, such as using "" or ; as the delimiter, but the problem is I already have the data; a find-and-replace would completely change my dataset.
example of csv is :
data_column1, data_column2, [{key1:value1},{key2:value2}], data_column3
Please advise any faster way to access specific columns of the data without any ambiguity. | 0 | 1 | 105
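One workable sketch for data shaped like the sample row above (bracketed field not quoted): pull the [...] part out with a regex first, then split the rest on commas. The column names, and the assumption of exactly one bracketed field per line, are mine.

import re
import pandas as pd

line = "data_column1, data_column2, [{key1:value1},{key2:value2}], data_column3"

rows = []
for raw in [line]:                    # in practice: for raw in open("data.csv")
    m = re.search(r"\[.*\]", raw)     # grab the bracketed dict-like field as one unit
    before = [p.strip() for p in raw[:m.start()].strip(", ").split(",")]
    after = [p.strip() for p in raw[m.end():].strip(", \n").split(",")]
    rows.append(before + [m.group(0)] + after)

df = pd.DataFrame(rows, columns=["col1", "col2", "dict_col", "col3"])
print(df)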
0 | 54,241,747 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-01-17T17:59:00.000 | 5 | 1 | 0 | I get an ImportError: No module named 'numpy' when trying to use numpy in PyCarm, but it works fine in the interactive console | 54,241,710 | 1.2 | python,numpy | You probably arent using the same python installation in pycharm and in your console. Did you double-check in project settings ?
If you just want to install numpy, you can create a requirements.txt file and add numpy in it, pycharm will suggest to install it if not already done.
Alternatively, you could use a venv | I've already installed numpy and it works in cmd.
my Python version is 3.7.2 and numpy version is 1.16.0
When I use numpy in windows cmd, It works.
import numpy is working well in the python interactive console.
But in pyCharm, it doesn't work and errors with No module named 'numpy'.
How can I solve it? | 0 | 1 | 229 |
0 | 54,247,129 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-01-18T02:42:00.000 | 0 | 2 | 0 | finding a string after identifying group | 54,247,079 | 0 | python,regex,pandas | re.match(r'(?:TEL)?:? ?([0-9 ]{9,12})').group(1)
(?:...) makes it a non-capturing group
([0-9 ]{9,12}) captures that part as group(1) | I am iterating through a few thousand lines of some really messy data from a csv file using pandas. I'm iterating through one of the dataframe columns which contains generally fairly short strings of disparate, concatenated customer information (name, location, customer numbers, telephone numbers, etc).
There's not a lot of identifiable difference between customer numbers and telephone numbers, though most of the rows in the df column contain a TEL identifier within the string text for a telephone number, as demonstrated below (where 0123456 is a customer number, and 55555 5555 is the TEL number):
JERRY 0123456 TEL: 55555 5555 LOCATION CITY
I can clear the whitespace from the digits following the TEL: indicator, but can't seem to formulate a regular expression that only pulls the text following the TEL: indicator. My ideal output in my new df["TEL"] column could be 555555555.
So far the regular expression I have is (note, some of the phone numbers are different lengths to deal with international callers, some of which include country code, and some of which do not):
re.match(r'(TEL)?:? ?[0-9 ]{9-12}').group()
However, the above regular expression still pulls the TEL piece of the string I am matching against. How do I fix this error? | 0 | 1 | 51 |
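A runnable check of the general approach against the sample row in the question. The pattern below is my own variant of the answer's: it requires the TEL marker so the customer number is skipped, and it uses a comma in the repetition quantifier ({9-12} is not valid regex syntax; repetition counts must be written {m,n}).

import re

s = "JERRY 0123456 TEL: 55555 5555 LOCATION CITY"

m = re.search(r"TEL:?\s*(\d[\d ]{7,14})", s)   # anchor on the TEL marker, not the digits
tel = m.group(1).replace(" ", "") if m else None
print(tel)   # -> 555555555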
0 | 54,271,206 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-01-18T21:17:00.000 | 0 | 1 | 0 | Machine learning through R/Python in the Netezza server | 54,261,531 | 0 | python,r,machine-learning,netezza | It is possible to install R, and to my knowledge all kinds of R packages can be installed. Some of the code will only run on the HOST, but all the basics (like apply and filtering) run on all the SPUs. | Is it possible to run machine learning through R (RStudio) or Python in a Netezza server? More specifically, can I train models and make predictions using the Netezza server? Has anybody been able to install TensorFlow, Keras or Pytorch in the Netezza server for these ML tasks?
I appreciate any feedback on whether this is feasible or not. | 0 | 1 | 159
0 | 66,045,559 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2019-01-19T00:00:00.000 | -1 | 2 | 0 | Python how to get labels of a generated adjacency matrix from networkx graph? | 54,262,904 | -0.099668 | python-3.x,networkx,adjacency-matrix | If the adjacency matrix is generated without passing the nodeList, then you can call G.nodes to obtain the default NodeList, which should correspond to the rows of the adjacency matrix. | I have a networkx graph built from a Python dataframe and I've generated the adjacency matrix from it.
So basically, how do I get the labels of that adjacency matrix? | 0 | 1 | 910
0 | 54,274,980 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2019-01-19T00:00:00.000 | 0 | 2 | 0 | Python how to get labels of a generated adjacency matrix from networkx graph? | 54,262,904 | 1.2 | python-3.x,networkx,adjacency-matrix | Assuming you refer to nodes' labels, networkx only keeps the indices when extracting a graph's adjacency matrix. Networkx represents each node as an index, and you can add more attributes if you wish. All node attributes except for the index are kept in a dictionary. When generating the graph's adjacency matrix only the indices are kept, so if you only wish to keep a single string per node, consider indexing nodes by that string when generating your graph. | I have a networkx graph built from a Python dataframe and I've generated the adjacency matrix from it.
So basically, how do I get the labels of that adjacency matrix? | 0 | 1 | 910
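A short sketch of what both answers are saying: the matrix rows/columns follow G.nodes(), and you can pin that ordering down (or get a labelled version) explicitly. to_numpy_array and to_pandas_adjacency are the networkx 2.x spellings.

import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c")])

nodes = list(G.nodes())                   # this ordering labels the matrix rows/columns
A = nx.to_numpy_array(G, nodelist=nodes)
print(nodes)
print(A)

print(nx.to_pandas_adjacency(G, nodelist=nodes))   # same matrix with the labels attached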
0 | 54,268,086 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-01-19T12:40:00.000 | 0 | 2 | 0 | How to represent bounds of variables in scipy.optimization where bound is function of another variable | 54,267,193 | 0 | python,scipy | You can try in the below way.
for i in range(0, 100):
    for j in range(0, i):
        for k in range(0, j):
            print(k) | I want to solve an LP optimization problem where the upper bounds of a few variables are not constant integers but are instead functions of other variables. As an example, i, j and k are three variables and the bounds are 0<=i<=100, 0<=j<=i-1 and 0<=k<=j-1. How can we represent such variable-dependent bounds in the scipy LP solver? | 0 | 1 | 145
0 | 54,267,964 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-01-19T12:40:00.000 | 0 | 2 | 0 | How to represent bounds of variables in scipy.optimization where bound is function of another variable | 54,267,193 | 0 | python,scipy | Currently none of scipy's methods allows for applying dynamic bounds. You can make a non standard extension to scipy.optimize.minimize or fsolve or implement your own optimiser with dynamic bounds.
Now on whether it is a good idea to do so: NO!
That is because for a well-formulated optimisation problem you want the design variables and their bounds to be orthogonally independent. If the bounds change based on other design variables, then the problem is not orthogonally independent. | I want to solve an LP optimization problem where the upper bounds of a few variables are not constant integers but are instead functions of other variables. As an example, i, j and k are three variables and the bounds are 0<=i<=100, 0<=j<=i-1 and 0<=k<=j-1. How can we represent such variable-dependent bounds in the scipy LP solver? | 0 | 1 | 145
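Neither answer shows code for the usual workaround, so as an addition: bounds such as j <= i - 1 can be moved out of the bounds argument and expressed as ordinary linear inequality constraints, which scipy.optimize.linprog accepts directly. The objective below is an arbitrary stand-in.

import numpy as np
from scipy.optimize import linprog

c = [-1.0, -1.0, -1.0]            # maximise i + j + k  ==  minimise -(i + j + k)

# j - i <= -1  and  k - j <= -1 encode the variable-dependent upper bounds
A_ub = np.array([[-1.0, 1.0, 0.0],
                 [0.0, -1.0, 1.0]])
b_ub = np.array([-1.0, -1.0])

bounds = [(0, 100), (0, None), (0, None)]   # only the constant bounds stay here
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)                                # roughly [100, 99, 98]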
0 | 54,303,968 | 0 | 0 | 0 | 0 | 1 | false | 54 | 2019-01-19T17:33:00.000 | 82 | 3 | 0 | Is there a head and tail method for Numpy array? | 54,269,647 | 1 | python,numpy | For a head-like function you can just slice the array using dataset[:10].
For a tail-like function you can just slice the array using dataset[-10:]. | I loaded a csv file into 'dataset' and tried to execute dataset.head(), but it reports an error. How to check the head or tail of a numpy array? without specifying specific lines? | 0 | 1 | 65,506 |
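A tiny runnable version of the slicing answer, with a pandas wrapper added in case the .head()/.tail() methods themselves are wanted:

import numpy as np
import pandas as pd

dataset = np.arange(100).reshape(25, 4)

print(dataset[:10])     # "head": first 10 rows
print(dataset[-10:])    # "tail": last 10 rows

print(pd.DataFrame(dataset).head())   # or wrap it if you want the DataFrame methods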