GUI and Desktop Applications (int64, 0 to 1) | A_Id (int64, 5.3k to 72.5M) | Networking and APIs (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | Other (int64, 0 to 1) | Database and SQL (int64, 0 to 1) | Available Count (int64, 1 to 13) | is_accepted (bool, 2 classes) | Q_Score (int64, 0 to 1.72k) | CreationDate (string lengths 23 to 23) | Users Score (int64, -11 to 327) | AnswerCount (int64, 1 to 31) | System Administration and DevOps (int64, 0 to 1) | Title (string lengths 15 to 149) | Q_Id (int64, 5.14k to 60M) | Score (float64, -1 to 1.2) | Tags (string lengths 6 to 90) | Answer (string lengths 18 to 5.54k) | Question (string lengths 49 to 9.42k) | Web Development (int64, 0 to 1) | Data Science and Machine Learning (int64, 1 to 1) | ViewCount (int64, 7 to 3.27M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 63,021,426 | 0 | 0 | 0 | 0 | 2 | false | 275 | 2014-04-06T19:24:00.000 | 43 | 16 | 0 | Filtering Pandas DataFrames on dates | 22,898,824 | 1 | python,datetime,pandas,filtering,dataframe | If you have already converted the string to a date format using pd.to_datetime you can just use:
df = df[(df['Date'] > "2018-01-01") & (df['Date'] < "2019-07-01")] | I have a Pandas DataFrame with a 'date' column. Now I need to filter out all rows in the DataFrame that have dates outside of the next two months. Essentially, I only need to retain the rows that are within the next two months.
What is the best way to achieve this? | 0 | 1 | 624,303 |
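For illustration only (not part of the dataset row above): a minimal sketch of the filtering approach described in that answer, with a made-up frame and an assumed 'Date' column of strings.

```python
import pandas as pd

# Hypothetical data; in practice the 'Date' column often starts out as strings
df = pd.DataFrame({"Date": ["2018-03-01", "2018-12-24", "2019-08-15"], "value": [1, 2, 3]})

df["Date"] = pd.to_datetime(df["Date"])          # parse once
mask = (df["Date"] > "2018-01-01") & (df["Date"] < "2019-07-01")
print(df[mask])                                  # keeps the first two rows only
```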
0 | 23,057,921 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-04-07T12:02:00.000 | 0 | 2 | 0 | DatetimeIndex with time part only: is it possible | 22,911,865 | 1.2 | python,pandas | No, it is not possible, only with datetime or with float index.
However, the variant offered by unutbu is very useful. | I'm stuck on the following problem.
I have a set of observation of passenger traffic. Data is stored in .xlsx file with the following structure: date_of_observation, time, station_name, boarding, alighting.
I wonder if it's possible to create a DataFrame with a DatetimeIndex from such data if I need only the 'time' component of the datetime. (No duplicate times are present in the dataset.)
The reason for this requirement is that I use specific logic based on circular time (for example, 23.00 < 0.00, but 0.01 < 0.02 when compared), so I don't want to convert them to datetime. | 0 | 1 | 938 |
0 | 62,222,727 | 0 | 0 | 0 | 0 | 1 | false | 31 | 2014-04-08T15:13:00.000 | 4 | 5 | 0 | Fastest file format for read/write operations with Pandas and/or Numpy | 22,941,147 | 0.158649 | python,numpy,pandas | If the priority is speed I would recommend:
feather - the fastest
parquet - a bit slower, but saves lots of disk space | I've been working for a while with very large DataFrames and I've been using the csv format to store input data and results. I've noticed that a lot of time goes into reading and writing these files which, for example, dramatically slows down batch processing of data. I was wondering if the file format itself is of relevance. Is there a
preferred file format for faster reading/writing Pandas DataFrames and/or Numpy arrays? | 0 | 1 | 31,690 |
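A short sketch of the suggestion in the answer above, assuming a columnar engine such as pyarrow is installed so that pandas' feather/parquet writers are available (file names are invented):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(1000000, 4), columns=list("abcd"))

df.to_feather("data.feather")    # columnar binary format, typically the fastest
df.to_parquet("data.parquet")    # a bit slower, but smaller on disk

df_f = pd.read_feather("data.feather")
df_p = pd.read_parquet("data.parquet")
```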
0 | 22,949,986 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2014-04-08T23:11:00.000 | 1 | 5 | 0 | dot product of two 1D vectors in numpy | 22,949,966 | 0.039979 | python,numpy | If you want an inner product, use numpy.dot(x,x); for an outer product, use numpy.outer(x,x). | I'm working with numpy in python to calculate a vector multiplication.
I have a vector x of dimensions n x 1 and I want to calculate x*x_transpose.
This gives me problems because x.T or x.transpose() doesn't affect a 1 dimensional vector (numpy represents vertical and horizontal vectors the same way).
But how do I calculate a (n x 1) x (1 x n) vector multiplication in numpy?
numpy.dot(x,x.T) gives a scalar, not a 2D matrix as I want. | 0 | 1 | 11,187 |
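A tiny sketch of the inner- vs. outer-product distinction from the answer above; the vector x is just an example:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])        # shape (n,), a 1-D vector

inner = np.dot(x, x)                 # scalar: 1 + 4 + 9 = 14
outer = np.outer(x, x)               # (n, n) matrix with outer[i, j] = x[i] * x[j]

# Equivalent: make the column shape explicit and multiply (n, 1) by (1, n)
col = x.reshape(-1, 1)
assert np.allclose(outer, np.dot(col, col.T))
```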
0 | 23,002,130 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-04-10T06:46:00.000 | 0 | 1 | 0 | Why is the mean smaller than the minimum and why does this change with 64bit floats? | 22,980,487 | 0 | python,arrays,numpy,floating-accuracy,floating-point-conversion | If you're working with large arrays, be aware of potential overflow problems!!
Changing from 32-bit to 64-bit floats in this instance avoids an (unflagged as far as I can tell) overflow that led to the anomalous mean calculation. | I have an input array, which is a masked array.
When I check the mean, I get a nonsensical number: less than the reported minimum value!
So, raw array: numpy.mean(A) < numpy.min(A). Note A.dtype returns float32.
FIX: A3=A.astype(float). A3 is still a masked array, but now the mean lies between the minimum and the maximum, so I have some faith it's correct! Now for some reason A3.dtype is float64. Why?? Why did that change it, and why is it correct at 64 bit and wildly incorrect at 32 bit?
Can anyone shed any light on why I needed to recast the array to accurately calculate the mean? (with or without numpy, it turns out).
EDIT: I'm using a 64-bit system, so yes, that's why recasting changed it to 64bit. It turns out I didn't have this problem if I subsetted the data (extracting from netCDF input using netCDF4 Dataset), smaller arrays did not produce this problem - therefore it's caused by overflow, so switching to 64-bit prevented the problem.
So I'm still not clear on why it would have initially loaded as float32, but I guess it aims to conserve space even if it is a 64-bit system. The array itself is 1872x128x256, with non-masked values around 300, which it turns out is enough to cause overflow :) | 0 | 1 | 226 |
0 | 22,996,581 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-04-10T18:48:00.000 | 4 | 2 | 0 | How to search in one NumPy array for positions for getting at these position the value from a second NumPy array? | 22,996,507 | 1.2 | python,arrays,numpy,arcgis,arcpy | If I understand your description right, you should just be able to do B[A]. | I have two raster files which I have converted into NumPy arrays (arcpy.RasterToNumpyArray) to work with the values in the raster cells with Python.
One of the rasters has two values, True and False. The other raster has different values in the range between 0 and 1000. Both rasters have exactly the same extent, so both NumPy arrays are built up identically (columns and rows), except for the values.
My aim is to identify all positions in NumPy array A which have the value True. These positions shall be used for getting the value at these positions from NumPy array B.
Do you have any idea how I can implement this? | 0 | 1 | 143 |
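To make the B[A] suggestion above concrete, a toy sketch with invented arrays (the real ones would come from arcpy.RasterToNumPyArray):

```python
import numpy as np

A = np.array([[True, False], [False, True]])   # True/False raster
B = np.array([[10, 250], [400, 875]])          # value raster, same shape

print(B[A])                 # -> [ 10 875], B's values where A is True
rows, cols = np.nonzero(A)  # the positions themselves, if they are needed too
```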
0 | 23,001,960 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-04-11T01:26:00.000 | 4 | 1 | 0 | Counting 1's in an n x n array of 0's and 1's | 23,001,932 | 1.2 | python,algorithm,count | Since all 1's come before the 0's, you can find the index of the first 0 using the binary search algorithm (which is O(log N)) and you just have to do this for all N rows. So the total complexity is O(N log N). | Assuming that in each row of the array, all 1's come before the 0's, how would I be able to come up with an O(n log n) algorithm to count the 1's in the array? I think first I would have to make a counter, search each row for 1's (n), and add that to the counter. Where does the "log n part" come into play? I read that a recursive algorithm to do this has n log n complexity, but I'm not too sure how I would do this. I know how to do this in O(n^2) with for loops. Pseudo code or hints would be helpful! Thank you | 0 | 1 | 100 |
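An illustrative sketch of the binary-search idea from the answer above, assuming each row really has all 1's before all 0's (the example matrix is made up):

```python
def count_ones(matrix):
    """Count 1's in a 0/1 matrix whose rows have all 1's before all 0's.

    Each row gets an O(log n) binary search for the first 0, so an
    n x n matrix costs O(n log n) overall.
    """
    total = 0
    for row in matrix:
        lo, hi = 0, len(row)
        while lo < hi:
            mid = (lo + hi) // 2
            if row[mid] == 1:
                lo = mid + 1        # first 0 must lie to the right of mid
            else:
                hi = mid            # first 0 is at mid or to its left
        total += lo                 # lo == index of first 0 == number of 1's
    return total

print(count_ones([[1, 1, 1, 0], [1, 0, 0, 0], [0, 0, 0, 0]]))  # 4
```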
0 | 23,028,931 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-04-11T17:33:00.000 | 4 | 1 | 0 | Scikit's GBM assumptions on feature type | 23,019,076 | 1.2 | python,scikit-learn | All features are continuous for gradient boosting (and practically all other estimators).
Tree-based models should be able to learn splits in categorical features that are encoded as "levels" (1, 2, 3) rather than dummy variables ([1, 0, 0], [0, 1, 0], [0, 0, 1]), but this requires deep trees instead of stumps and the exact ordering may still affect the outcome of learning. | Does scikit's GradientBoostingRegressor make any assumptions on the feature's type? Or does it treat all features as continuous? I'm asking because I have several features that are truly categorical that I have encoded using LabelEncoder(). | 0 | 1 | 138 |
0 | 23,089,696 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-04-11T22:51:00.000 | 3 | 1 | 0 | Restricting magnitude of change in guess in fmin_bfgs | 23,023,851 | 1.2 | python,scipy,mathematical-optimization | I recently ran into the same problem with fmin_bfgs.
As far as I could see, the answer is negative. I didn't see a way to limit the stepsize.
My workaround was to first run Nelder-Mead fmin for some iterations, and then switch to fmin_bfgs. Once I was close enough to the optimum, the curvature of my function was much nicer and fmin_bfgs didn't have problems anymore.
In my case the problem was that the gradient of my function was very large at points further away from the optimum.
fmin_l_bfgs_b works also without constraints, and several users have reported reliable performance.
aside: If you are able to convert your case to a relatively simple test case, then you could post it to the scipy issue tracker so that a developer or contributor can look into it. | I'm trying to estimate a statistical model using MLE in python using the fmin_BFGS function in Scipy.Optimize and a numerically computed Hessian.
It is currently giving me the following warning: Desired error not necessarily achieved due to precision loss.
When I print the results of each evaluation, I see that the starting guess yields a reasonable log-likelihood. However, after a few guesses, the cost function jumps from ~230,000 to 9.5e+179.
Then it gives a runtime warning: RuntimeWarning: overflow encountered in double_scalars when trying to compute radical = B * B - 3 * A * C in the linesearch part of the routine.
I suspect that the algo is trying to estimate the cost function at a point that approaches an overflow. Is there a way to reduce the rate at which the algorithm changes parameter values to keep the function in a well-behaved region? (I would use the constrained BFGS routine but I don't have good priors over what the parameter values should be) | 0 | 1 | 236 |
0 | 27,126,021 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-04-12T12:19:00.000 | 1 | 1 | 0 | Content based recommender system with sklearn or numpy | 23,030,284 | 0.197375 | python,numpy,machine-learning,scikit-learn,recommendation-engine | I believe you can use centered cosine similarity / Pearson correlation to make this work and make use of a collaborative filtering technique to achieve this.
Before you use Pearson correlation you need to fill the nulls (the fields which don't have any entries) with zero; Pearson correlation then centers the similarity matrix around zero, which gives the optimal recommendation. | I am trying to build a content-based recommender system in python/pandas/numpy/sklearn.
Here are the matrix involved and their size:
X: n_customers * n_features (contains the features of each customer)
Y: n_customers *n_products (contains the scores given by each customer to each product)
Theta: n_features * n_products
The aim is to learn Theta in order to be able to predict the score given by a customer to all products (X*Theta). Indeed, Y is a sparse matrix; a customer scores only a very small % of the whole set of products. This is why Y contains a lot of NaN values.
Here is my problem:
This is a regression problem with many targets (here target=product). But I want to do the regression only on non-null values. Because the number of NaNs differs from one product to another, how can I vectorize that?
Assume there are 1000 products and 100 000 customers, each one having 20 features.
For each product I need to do the regression on the non-null values. So without vectorization, I would need 1000 different regressors, each learning a Theta vector of length 20.
If possible I would like to solve this problem with sklearn. The ridge regression for example takes into account multiple targets (Y as a matrix)
I hope it's clear enough.
Thank you for your help. | 0 | 1 | 3,553 |
0 | 23,038,786 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-04-13T03:25:00.000 | 3 | 1 | 0 | Is an in-place sorting algorithm always faster? What are the advantages of in-place sorting? Python | 23,038,760 | 1.2 | python,sorting | is creating a sorting algorithm a common task for a professional developer?
No. It's good to be able to do it, but most of the time, you'll just use sorts other people already wrote.
On what tasks do developers need to create a sorting algorithm?
If you're providing a sorting routine for other people to use, you may need to implement it yourself. For example, Python's list.sort. Alternatively, if the standard sorts don't provide some property or capability you need, you may need to write your own.
what are the advantages of an in-place sorting algorithm?
Low extra memory usage. Sometimes we care about that; usually we don't. | I am new to programming.
is creating a sorting algorithm a common task for a professional developer?
On what tasks do developers need to create a sorting algorithm?
And Finally, what are the advantages of an in-place sorting algorithm?
any help is appreciated!! | 0 | 1 | 330 |
0 | 50,678,634 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2014-04-13T17:09:00.000 | 0 | 2 | 0 | RGB to HSI function in python | 23,045,695 | 0 | python,python-imaging-library | To convert the available RGB image to HSI format( Hue,Saturation, Intensity), you can make use of the CV_RGB2HSI function available in the openCV docs. | I want to convert an RGB image to HSI,I found lot of inbuilt functions like rgb_to_hsv, rgb_to_hls, etc. Is there any function for conversion from RGB to HSI color model in python?? | 0 | 1 | 3,209 |
0 | 23,065,974 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-04-14T16:06:00.000 | 0 | 2 | 0 | Curve fitting differential equations in Python | 23,064,886 | 0 | python-2.7,physics,curve-fitting,differential-equations | Certainly you intend to have the third derivative on the right.
Group your data in relatively small bins, possibly overlapping. For each bin, compute a cubic approximation of the data. From that compute the derivatives in the center point of the group. With the derivatives of all groups you now have a classical linear regression problem.
If the samples are equally spaced, you might try to move the problem into frequency space via FFT. A sensible truncation of the data might be a problem here. In the frequency space, the task reduces to a polynomial linear regression. | I have a curve of >1000 points that I would like to fit to a differential equation in the form of x'' = (a*x'' + b x' + c x + d), where a,b,c,d are constants. How would I proceed in doing this using Python 2.7? | 0 | 1 | 756 |
0 | 23,086,286 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-04-15T14:11:00.000 | 0 | 1 | 0 | Is dynamically writing to a csv file slower than appending to an array in Python? | 23,086,241 | 1.2 | python,csv,dynamic | I am assuming you mean to ask about separate calls to writer.writerow() vs. building a list, then writing that list with writer.writerows().
Memory wise, the former is more efficient. That said, don't worry too much about speed here; writing to disk is your bottleneck, not how you build the data. | I'm talking about time taken to dynamically write values to a csv file vs appending those same values to an array. Eventually, I will append that array to the csv file, but that's out of the scope of this question. | 0 | 1 | 658 |
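A brief sketch of the two styles discussed in the answer above (the file name and generated rows are made up); either way, the disk write dominates:

```python
import csv

rows = ([i, i * i] for i in range(100000))     # values produced on the fly

with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["n", "n_squared"])        # header
    for row in rows:
        writer.writerow(row)                   # write each row as it is produced

# Alternative: buffer everything in a list first, then call writer.writerows(buffer)
```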
0 | 25,503,548 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-04-16T18:21:00.000 | 0 | 1 | 0 | Python OpenCV "ImportError: undefined Symbol" or Memory Access Error | 23,117,242 | 0 | python-2.7,opencv,opensuse,undefined-symbol | Not exactly a prompt answer (nor a direct one). I had the same issue and (re)installing various dependencies didn't help either.
Ultimately, I cloned (from git) and compiled opencv (which includes the cv2.so library) from scratch, replaced the old cv2.so library and got it to work.
Here is the git repo: https://github.com/Itseez/opencv.git | I'm using OpenSUSE 13.1 64-bit on an Lenovo ThinkPad Edge E145.
I tried to play around a bit with Python (2.7) and Python-OpenCV (2.4). Both are installed using YaST.
When I start the Python interactive mode (by typing "python") and try to "import cv", there are 2 things that happen:
case 1: "import cv" --> ends up with:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/cv.py", line 1, in <module>
from cv2.cv import *
ImportError: /usr/lib64/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv23adaptiveBilateralFilterERKNS_11_InputArrayERKNS_12_OutputArrayENS_5Size_IiEEddNS_6Point_IiEEi
case 2: "import cv2" --> ends up with:
MemoryAccessError
and the interactive mode shuts down and I'm back at the normal command line.
Does anyone have any idea how I can solve this problem?
Greetings | 0 | 1 | 892 |
0 | 23,163,657 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-04-18T22:43:00.000 | 1 | 1 | 0 | Call the name of a data frame rather than its content (Pandas) | 23,163,508 | 0.197375 | python,string,list,pandas | The DataFrame doesn't know the name of the variable you've assigned to it.
Depending on how you're printing the object, either the __str__ or __repr__ method will get called to get a description of the object. If you want to get back 'df2', you could put them into a dictionary to map the name back to the object.
If you want to be very sneaky, you could patch the object's __str__ or __repr__ methods to return what you want. This is probably a very bad idea, though. | I have a list of dataframes but when I call the content of the list it returns the content of the called dataframe.
List = [df1, df2, df3, ..., dfn]
List[1]
will give,
class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 4753 entries, etc
but I want it to give
str(List[1])???
'df2'
Thanks for the help | 0 | 1 | 70 |
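A minimal sketch of the dictionary suggestion from the answer above (frame names and contents are invented):

```python
import pandas as pd

frames = {
    "df1": pd.DataFrame({"a": [1, 2]}),
    "df2": pd.DataFrame({"a": [3, 4, 5]}),
}

for name, frame in frames.items():
    print(name, len(frame))   # the name survives, unlike with a plain list of frames
```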
0 | 23,183,710 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-04-19T17:49:00.000 | 1 | 1 | 0 | Parameters to let random_powerlaw_tree() generate trees with more than 10 nodes | 23,173,427 | 0.197375 | python,networkx | To generate trees with more nodes it is only needed to increase the "number of tries" (parameter of random_powerlaw_tree). 100 tries is not enough even to have a tree with 11 nodes (it gives an error). For example, with 1000 tries I manage to generate trees with 100 nodes, using networkX 1.8.1 and python 3.4.0 | I am trying to use one of the random graph-generators of NetworkX (version 1.8.1):
random_powerlaw_tree(n, gamma=3, seed=None, tries=100)
However, I always get this error
File "/Library/Python/2.7/site-packages/networkx/generators/random_graphs.py", line 840, in random_powerlaw_tree
"Exceeded max (%d) attempts for a valid tree sequence."%tries)
networkx.exception.NetworkXError: Exceeded max (100) attempts for a valid tree sequence.
for any n > 10, that is starting with
G = nx.random_powerlaw_tree(11)
I would like to generate trees with hundreds of nodes. Does anyone know how to correctly set these parameters in order to make it run correctly? | 0 | 1 | 451 |
0 | 29,718,358 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2014-04-20T04:01:00.000 | 1 | 1 | 0 | Text classification in python - (NLTK Sentence based) | 23,178,275 | 0.197375 | python,python-3.x,machine-learning,classification,bayesian | Ideally, it is said that the more you train your model, the 'better' your results are, but it really depends: you have to test it and compare it to the real results you've prepared.
So to answer your question, training the model with keywords may give you results that are too broad and may not be arguments. But really, you have to compare it to something, so I suggest you might want to also train your model with some sentence structure that arguments seem to follow (a pattern of some sort); it might eliminate the ones that are not arguments. Again, do this and then test it to see if you get higher accuracy than the previous model.
To answer your next question: Which would be the best approach in terms of text classification accuracy and time to retrieve? It really depends on the data you're using; I can't really answer this question because you have to perform cross-validation to see if your model achieves high accuracy. Obviously, the more features you are looking at, the poorer your learning algorithm's performance. And if you are dealing with gigabytes of text to analyze, I suggest using MapReduce to perform this job.
You might want to check out SVMs as your learning model, test it out with the learning models (naive bayes, positive naive bayes and decision trees) and see which one performs better.
Hope this helps. | I need to classify text and i am using Text blob python module to achieve it.I can use either Naive Bayes classifier/Decision tree. I am concern about the below mentioned points.
1) I need to classify sentences as argument / not an argument. I am using two classifiers and training the model using apt data sets. My question is: do I need to train the model with only keywords, or can I train the data set with all possible argument and non-argument sample sentences? Which would be the best approach in terms of text classification accuracy and time to retrieve?
2) Since the classification would be either argument / not an argument, which classifier would fetch exact results? Is it Naive Bayes / decision tree / positive Naive Bayes?
Thanks in advance. | 0 | 1 | 1,170 |
0 | 71,003,756 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2014-04-21T10:28:00.000 | 0 | 6 | 0 | OpenCV - Fastest method to check if two images are 100% same or not | 23,195,522 | 0 | python,c++,opencv | I have done this task.
Compare file sizes.
Compare exif data.
Compare first 'n' byte, where 'n' is 128 to 1024 or so.
Compare last 'n' bytes.
Compare middle 'n' bytes.
Compare checksums. | There are many questions over here which check if two images are "nearly" similar or not.
My task is simple. With OpenCV, I want to find out if two images are 100% identical or not.
They will be of same size but can be saved with different filenames. | 0 | 1 | 15,991 |
0 | 39,591,989 | 0 | 0 | 0 | 0 | 1 | false | 319 | 2014-04-21T14:51:00.000 | 13 | 18 | 0 | Detect and exclude outliers in a pandas DataFrame | 23,199,796 | 1 | python,pandas,filtering,dataframe,outliers | scipy.stats has methods trim1() and trimboth() to cut the outliers out in a single row, according to the ranking and an introduced percentage of removed values. | I have a pandas data frame with few columns.
Now I know that certain rows are outliers based on a certain column value.
For instance
column 'Vol' has all values around 12xx and one value is 4000 (outlier).
Now I would like to exclude those rows that have Vol column like this.
So, essentially I need to put a filter on the data frame such that we select all rows where the values of a certain column are within, say, 3 standard deviations from mean.
What is an elegant way to achieve this? | 0 | 1 | 440,407 |
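A compact sketch of the three-standard-deviation filter asked for above, assuming a numeric column called 'Vol' (the data is synthetic):

```python
import numpy as np
import pandas as pd

np.random.seed(0)
vol = np.append(np.random.normal(1225, 10, size=50), 4000)   # one obvious outlier
df = pd.DataFrame({"Vol": vol})

keep = np.abs(df["Vol"] - df["Vol"].mean()) <= 3 * df["Vol"].std()
print(len(df), len(df[keep]))   # 51 50 -- the 4000 row is dropped
```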
0 | 23,232,968 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-04-22T10:44:00.000 | 1 | 1 | 0 | Initializing the weights of a MLP with the RBM weights | 23,217,264 | 1.2 | python,scikit-learn | scikit-learn does not currently have an MLP implemented which you can initialize via an RBM, but you can still access the weights which are stored in the components_ attribute and the bias which is stored in the intercept_hidden_ attribute.
If you're interested in using modern MLPs, torch7, pylearn2, and deepnet are all modern libraries and most of them contain pretraining routines like you describe. | I want to build a Deep Believe Network with scikit-learn. As I know one should train many Restricted Boltzmann Machines (RBM) individually. Then one should create a Multilayer Perceptron (MLP) that has the same number of layers as the number of (RBMs), and the weights of the MLP should be initialized with the weights of the RBMs. However I'm unable to find a way to get the weights of the RBMs from scikit-learn's BernoulliRBM. Also it doesn't seem to be a way also to initialize the weights of a MLP in scikit-learn.
Is there a way to do what I described? | 0 | 1 | 674 |
0 | 23,234,151 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2014-04-23T03:13:00.000 | 14 | 1 | 0 | How to use pickle to save data to disk? | 23,234,103 | 1 | python,pickle | Save an object containing the game state before the program exits:
pickle.dump(game_state, open('gamestate.pickle', 'wb'))
Load the object when the program is started:
game_state = pickle.load(open('gamestate.pickle', 'rb'))
In your case, game_state may be a list of questions. | I'm making a Animal guessing game and i finish the program but i want to add pickle so it save questions to disk, so they won't go away when
the program exits. Anyone can help? | 0 | 1 | 3,645 |
0 | 23,254,015 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-04-23T13:34:00.000 | 1 | 1 | 0 | Condtionally selecting values from a Numpy array returned from PyFITS | 23,246,013 | 0.197375 | python,numpy,fits,pyfits | The expression data.field[('zquality' > 2) & ('pgal'==3)] is asking for fields where the string 'zquality' is greater than 2 (always true) and where the string 'pgal' is equal to 3 (also always false).
Actually chances are you're getting an exception because data.field is a method on the Numpy recarray objects that PyFITS returns tables in.
You want something like data[(data['zquality'] > 2) & (data['pgal'] == 3)].
This expression means "give me the rows of the 'zquality' column of data containing values greater than 2. Then give me the rows of the 'pgal' column of data with values equal to three. Now give me the full rows of data selected from the logical 'and' of the two row masks. | I have opened a FITS file in pyfits. The HEADER file reads XTENSION='BINTABLE' with DIMENSION= 52989R x 36C with 36 column tags like, 'ZBEST', 'ZQUALITY', 'M_B', 'UB', 'PGAL' etc.
Now, I have to choose objects from the data with 'ZQUALITY' greater than 2 & 'PGAL' equals to 3. Then I have to make a histogram for the 'ZBEST' of the corresponding objects obeying the above conditions. Also I have to plot 'M_B' vs 'UB' for those objects.
At last I want to slice the 'ZBEST' into three slices (zbest < 0.5), (0.5 < zbest < 1.0), (zbest > 1.0) and want to plot histogram and 'M_B' vs 'UB' diagram of them separately.
I am stuck at choosing the data obeying the two conditions. Can anyone please tell me how can I choose the objects from the data satisfying both the conditions ('ZQUALITY' > 2 & 'PGAL' == 3 )? I am using like: data.field[('zquality' > 2) & ('pgal'==3)] but it's not working. | 0 | 1 | 229 |
0 | 23,264,484 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-04-24T08:45:00.000 | 0 | 1 | 0 | Get probability of classification from decision tree | 23,264,037 | 1.2 | python,machine-learning,decision-tree,cart-analysis | When you train your tree using the training data set, every time you do a split on your data, the left and right node will end up with a certain proportion of instances from class A and class B. The percentage of instances of class A (or class B) can be interpreted as probability.
For example, assume your training data set includes 50 items from class A and 50 items from class B. You build a tree of one level, by splitting the data once. Assume after the split, your left node ends up having 40 instances of class A and 10 instances of class B and the right node has 10 instances of class A and 40 instances of class B. Now the probabilities in the nodes will be 40/(10+40) = 80% for class A in left node, and 10/(10+40) = 20% for class A in left node (and vice versa for class B).
Exactly the same applies for deeper trees: you count the instances of classes and compute the proportion. | I'm implementing decision tree based on CART algorithm and I have a question. Now I can classify data, but my task is not only classify data. I want have a probability of right classification in end nodes.
For example. I have dataset that contains data of classes A and B. When I put an instance of some class to my tree I want see with what probability the instance belongs to class A and class B.
How can I do that? How can I improve CART to have probability distribution in the end nodes? | 0 | 1 | 2,896 |
0 | 23,279,735 | 0 | 0 | 0 | 1 | 1 | false | 2 | 2014-04-24T20:51:00.000 | 1 | 2 | 0 | to_excel on desktop regardless of the user | 23,279,546 | 0.099668 | python,pandas | This depends on your operating system.
You're saying you'd like to save the file on the desktop of the user who is running the script right?
On linux (not sure if this is true of every distribution) you could pass in "~/desktop/my_file.xls" as the path where you're saving the file | Is there a way to use pandas to_excel function to write to the desktop, no matter which user is running the script? I've found answers for VBA but nothing for python or pandas. | 0 | 1 | 781 |
0 | 38,764,796 | 0 | 0 | 0 | 0 | 1 | false | 18 | 2014-04-25T04:49:00.000 | 28 | 4 | 0 | How to subtract rows of one pandas data frame from another? | 23,284,409 | 1 | python,merge,pandas | Consider Following:
df_one is first DataFrame
df_two is second DataFrame
Present in First DataFrame and Not in Second DataFrame
Solution: by Index
df = df_one[~df_one.index.isin(df_two.index)]
index can be replaced by required column upon which you wish to do exclusion.
In above example, I've used index as a reference between both Data Frames
Additionally, you can also use a more complex query using boolean pandas.Series to solve for above. | The operation that I want to do is similar to merger. For example, with the inner merger we get a data frame that contains rows that are present in the first AND second data frame. With the outer merger we get a data frame that are present EITHER in the first OR in the second data frame.
What I need is a data frame that contains rows that are present in the first data frame AND NOT present in the second one? Is there a fast and elegant way to do it? | 0 | 1 | 34,025 |
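A runnable sketch of the index-based exclusion from the answer above (frame contents are invented):

```python
import pandas as pd

df_one = pd.DataFrame({"val": [1, 2, 3, 4]}, index=["a", "b", "c", "d"])
df_two = pd.DataFrame({"val": [20, 30]}, index=["b", "c"])

# Rows of df_one whose index does NOT appear in df_two
print(df_one[~df_one.index.isin(df_two.index)])   # rows 'a' and 'd'
```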
0 | 23,285,666 | 0 | 0 | 0 | 0 | 2 | false | 34 | 2014-04-25T05:24:00.000 | 6 | 7 | 0 | Opening a pdf and reading in tables with python pandas | 23,284,759 | 1 | python,pdf,pandas | This is not possible. PDF is a data format for printing. The table structure is therefore lost. With some luck you can extract the text with pypdf and guess the former table columns. | Is it possible to open PDFs and read it in using python pandas or do I have to use the pandas clipboard for this function? | 1 | 1 | 82,529 |
0 | 41,133,523 | 0 | 0 | 0 | 0 | 2 | false | 34 | 2014-04-25T05:24:00.000 | 3 | 7 | 0 | Opening a pdf and reading in tables with python pandas | 23,284,759 | 0.085505 | python,pdf,pandas | Copy the table data from a PDF and paste into an Excel file (which usually gets pasted as a single rather than multiple columns). Then use FlashFill (available in Excel 2016, not sure about earlier Excel versions) to separate the data into the columns originally viewed in the PDF. The process is fast and easy. Then use Pandas to wrangle the Excel data. | Is it possible to open PDFs and read it in using python pandas or do I have to use the pandas clipboard for this function? | 1 | 1 | 82,529 |
0 | 23,300,115 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-04-25T17:44:00.000 | 1 | 1 | 0 | Classification using SVM from opencv | 23,299,694 | 1.2 | python,opencv,svm | As a simple approach, you can train an additional classifier to determine if your feature is a digit or not. Use non-digit images as positive examples and the other classes' positives (i.e. images of digits 0-9) as the negative samples of this classifier. You'll need a huge amount of non-digit images to make it work, and also it's recommendable to use strategies as the selection of hard negatives: negative samples classified as "false positives" after the first training stage, which are used to re-train the classifier.
Hope that it helps! | I have problem with classification using SVM. Let's say that I have 10 classes, digts from 0 to 9. I can train SVM to recognize theese classes, but sometimes I get image which is not digt, but SVM still tries to categorize this image. Is there a way to set threshold for SVM on the output maybe (as I can set it for Neural Networks) to reject bad images? May I ask for code sample (in C++ or Python with opencv)?
Thanks in advance. | 0 | 1 | 645 |
0 | 23,440,098 | 0 | 0 | 0 | 0 | 1 | false | 16 | 2014-04-27T10:09:00.000 | 1 | 2 | 0 | Julia Dataframes vs Python pandas | 23,322,025 | 0.099668 | python,pandas,dataframe,julia | I'm a novice at this sort of thing but have definitely been using both as of late. Truth be told, they seem very quite comparable but there is far more documentation, Stack Overflow questions, etc pertaining to Pandas so I would give it a slight edge. Do not let that fact discourage you however because Julia has some amazing functionality that I'm only beginning to understand. With large datasets, say over a couple gigs, both packages are pretty slow but again Pandas seems to have a slight edge (by no means would I consider my benchmarking to be definitive). Without a more nuanced understanding of what you are trying to achieve, it's difficult for me to envision a circumstance where you would even want to call a Pandas function while working with a Julia DataFrame or vice versa. Unless you are doing something pretty cerebral or working with really large datasets, I can't see going too wrong with either. When you say "output the data" what do you mean? Couldn't you write the Pandas data object to a file and then open/manipulate that file in a Julia DataFrame (as you mention)? Again, unless you have a really good machine reading gigs of data into either pandas or a Julia DataFrame is tedious and can be prohibitively slow. | I am currently using python pandas and want to know if there is a way to output the data from pandas into julia Dataframes and vice versa. (I think you can call python from Julia with Pycall but I am not sure if it works with dataframes) Is there a way to call Julia from python and have it take in pandas dataframes? (without saving to another file format like csv)
When would it be advantageous to use Julia Dataframes than Pandas other than extremely large datasets and running things with many loops(like neural networks)? | 0 | 1 | 9,570 |
0 | 23,326,609 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-04-27T17:13:00.000 | 1 | 1 | 0 | learn a threshold from labels and discrimination values? | 23,326,430 | 0.197375 | python,r,algorithm | Sort the points, group them by value, and try all <=2n+1 thresholds that classify differently (<=n+1 gaps between distinct data values including the sentinels +-infinity and <=n distinct data values). The latter step is linear-time if you try thresholds lesser to greater and keep track of how many points are misclassified in each way. | I have a set of {(v_i, c_i), i=1,..., n}, where v_i in R and c_i in {-1, 0, 1} are the discrimination value and label of the i-th training example.
I would like to learn a threshold t so that the training error is the minimum when I declare the i-th example has label -1 if v_i < t, 0 if v_i=t, and 1 if v_i>t.
How can I learn the threshold t from {(v_i, c_i), i=1,..., n}, and what is an efficient algorithm for that?
I am implementing that in Python, although I also hope to know how to implement that in R efficiently.
Thanks!
Btw, why doesn't SO support LaTeX for math expressions? (I changed them to be code instead.) | 0 | 1 | 72 |
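A sketch of the sort-and-scan idea from the answer above, simplified to two labels (-1 and +1) and made-up data; sorting dominates, so the whole procedure is O(n log n):

```python
def best_threshold(values, labels):
    """Find t minimising training error for the rule: predict -1 if v < t, else +1."""
    pairs = sorted(zip(values, labels))
    errors = sum(1 for _, label in pairs if label == -1)   # t below everything: all predicted +1
    best_err, best_t = errors, pairs[0][0] - 1.0
    for i, (v, label) in enumerate(pairs):
        errors += 1 if label == 1 else -1                  # point i is now predicted -1
        candidate = v + 1.0 if i == len(pairs) - 1 else (v + pairs[i + 1][0]) / 2.0
        if errors < best_err:
            best_err, best_t = errors, candidate
    return best_t, best_err

print(best_threshold([0.1, 0.4, 0.35, 0.8], [-1, 1, -1, 1]))   # (0.375, 0)
```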
0 | 23,421,883 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-04-30T16:31:00.000 | 0 | 1 | 0 | Categorizing points using known distributions | 23,393,456 | 1.2 | python,machine-learning,statistics,categorization | The principled way to do this is to assign probabilities to different model types and to different parameters within a model type. Look for "Bayesian model estimation". | My problem is as follows:
I am given a number of chi-squared values for the same collection of data sets, fitted with different models. (so, for example, for 5 collections of points, fitted with either a single binomial distribution, or both binomial and normal distributions, I would have 10 chi-squared values).
I would like to use machine learning categorization to categorize the data sets into "models":
e.g. data sets (1,2,5 and 7) are best fitted using only binomial distributions, whereas sets (3,4,6,8,9,10) - using normal distribution as well.
Notably, the number of degrees of freedom is likely to be different for both chi-squared distributions and is always known, as is the number of models.
My (probably) naive guess for a solution would be as follows:
Randomly distribute the points (10 chi-squared values in this case) into the number of categories (2).
Fit each of the categories using the particular chi-squared distributions (in this case with different numbers of degrees of freedom)
Move outlying points from one distribution to the next.
Repeat steps 2 and 3 until happy with result.
However I don't know how I would select the outlying points, or, for that matter, if there already is an algorithm that does it.
I am extremely new to machine learning and fairly new to statistics, so any relevant keywords would be appreciated too. | 0 | 1 | 52 |
0 | 23,396,854 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-04-30T19:47:00.000 | 1 | 1 | 0 | How to determine the "sentiment" between two named entities with Python/NLTK? | 23,396,807 | 0.197375 | python,nlp,nltk | In short: "you cannot". This task is far beyond the simple text processing provided with NLTK. Such object-relation sentiment analysis could be the topic of a research paper, not something solvable with a simple approach. One possible method would be to perform a grammar analysis, extract the conceptual relation between objects and then do independent sentiment analysis of the words included, but as I said before, it is rather a research topic. | I'm using NLTK to extract named entities and I'm wondering how it would be possible to determine the sentiment between entities in the same sentence. So for example, for "Jon loves Paris." I would get two entities, Jon and Paris. How would I be able to determine the sentiment between these two entities? In this case it should be something like Jon -> Paris = positive | 0 | 1 | 301 |
0 | 23,472,135 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2014-05-04T16:42:00.000 | 5 | 1 | 0 | When does fit() stop running in scikit? | 23,458,792 | 1.2 | python,scikit-learn | There's no hard limit to the number of iterations for LogisticRegression; instead it tries to detect convergence with a specified tolerance, tol: the smaller tol, the longer the algorithm will run.
From the source code, I gather that the algorithms stops when the norm of the objective's gradient is less than tol times its initial value, before training started. This is worth documenting.
As for random forests, training stops when n_estimators trees have been fit of maximum depth max_depth, constrained by the parameters min_samples_split, min_samples_leaf and max_leaf_nodes. Tree learning is completely different from iterative linear model learning. | I'm using scikit-learn to train classifiers. I'm particularly using linear_model.LogisticRegression. But my question is: what's the stopping criteria for the training?! because I don't see any parameter that indicates the number of epochs!
Also the same for random forests? | 0 | 1 | 1,201 |
0 | 23,552,362 | 0 | 0 | 1 | 0 | 1 | false | 7 | 2014-05-06T19:49:00.000 | 3 | 4 | 0 | How to detect if all the rows of a non-square matrix are orthogonal in python | 23,503,667 | 0.148885 | python,math,numpy,scipy | Approach #3: Compute the QR decomposition of AT
In general, to find an orthogonal basis of the range space of some matrix X, one can compute the QR decomposition of this matrix (using Givens rotations or Householder reflectors). Q is an orthogonal matrix and R upper triangular. The columns of Q corresponding to non-zero diagonal entries of R form an orthonormal basis of the range space.
If the columns of X=AT, i.e., the rows of A, already are orthogonal, then the QR decomposition will necessarily have the R factor diagonal, where the diagonal entries are plus or minus the lengths of the columns of X resp. the rows of A.
Common folklore has it that this approach is numerically better behaved than the computation of the product A*AT=RT*R. This may only matter for larger matrices. The computation is not as straightforward as the matrix product, however, the amount of operations is of the same size. | I can test the rank of a matrix using np.linalg.matrix_rank(A) . But how can I test if all the rows of A are orthogonal efficiently?
I could take all pairs of rows and compute the inner product between them but is there a better way?
My matrix has fewer rows than columns and the rows are not unit vectors. | 0 | 1 | 7,018 |
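A short sketch of checking row orthogonality, using both the pairwise (Gram matrix) view from the question and the QR factorisation of A.T described in the answer; A is a toy matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 1.0]])     # 2 x 4, rows not unit length

G = A.dot(A.T)                           # Gram matrix: diagonal iff the rows are orthogonal
print(np.allclose(G - np.diag(np.diag(G)), 0.0))     # True

Q, R = np.linalg.qr(A.T)                 # QR of A transposed, as suggested above
print(np.allclose(R - np.diag(np.diag(R)), 0.0))     # True: R comes out (numerically) diagonal
```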
0 | 23,613,409 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-05-12T07:57:00.000 | 0 | 3 | 0 | Dynamic Programming for Dice probability in a Role Playing Pen & Paper game | 23,603,762 | 0 | python,algorithm | So the problem is: how many ways can you roll with attributes 12, 13 and 12 and a talent of 7. Lets assume you know the outcome of the first dice, lets say its 11. Then the problem is reduced to how many ways can you roll with attributes 13 and 12 and with a talent of 7. Now try it with a different first roll, lets say you rolled 14 for the first time. You are over by 2, so the problem now is how many ways can you roll with attributes 13 and 12 and with a talent of 5. Now try with a first roll of 20. The question now is how many ways can you roll with attributes 13 and 12 and with a talent of -1 (in the last case its obviously 0). | The Dark Eye is a popular fantasy role-playing game, the German equivalent of Dungeons and Dragons. In this system a character has a number of attributes and talents. Each talent has several attributes associated with it, and to make a talent check the player rolls a d20 (a 20-sided die) for each associated attribute. Each time a roll result exceeds the attribute score, the difference between roll and attribute is added up. If this difference total (the excess ) is greater than the talent value, the check fails.
Your first task for this tutorial is to write a function that takes as input a list of attribute scores associated with a talent as well as the talent ranks, and returns the probability that the test succeeds. Your algorithm must work efficiently for lists of arbitrary length.
Hint: First write a function that computes the probability distribution for the excess. This can be done efficiently using dynamic programming.
What does this mean? I solved the knapsack problem without any major issues (both 0/1 and unbounded), but I have no idea what to do with this?
The smallest problem to solve first would be rank 1 with a single attribute of say 12 (using the example above) - the probability of passing would be 12/20 right? then rank 2 would be 13/20 then rank 3 would be 14/20?
Am I on the right track? I feel like I might be misunderstanding the actual game rules. | 0 | 1 | 1,791 |
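A sketch of the dynamic-programming hint from the exercise above: build the distribution of the accumulated excess one attribute at a time, then sum the probability mass with excess <= talent. The attribute list and talent value are the example from the question:

```python
from collections import defaultdict

def success_probability(attributes, talent):
    """P(total excess over d20 rolls <= talent), where excess = max(roll - attribute, 0)."""
    dist = {0: 1.0}                                   # distribution of the accumulated excess
    for attr in attributes:
        new_dist = defaultdict(float)
        for excess, p in dist.items():
            for roll in range(1, 21):                 # fair 20-sided die
                new_dist[excess + max(roll - attr, 0)] += p / 20.0
        dist = new_dist
    return sum(p for excess, p in dist.items() if excess <= talent)

print(success_probability([12, 13, 12], 7))
```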
0 | 23,622,236 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2014-05-13T01:23:00.000 | 1 | 2 | 0 | All of Ram being used in a Cellular Automata Python Script | 23,621,423 | 0.099668 | python,arrays,memory-management,anaconda,cellular-automata | If the grid is sparsely populated, you might be better off tracking just the populated parts, using a different data structure, rather than a giant python list (array). | I have a high intensity model that is written in python, with array calculations involving over 200,000 cells for over 4000 time steps. There are two arrays, one a fine grid array, one a coarser grid mesh, Information from the fine grid array is used to inform the characteristics of the coarse grid mesh. When the program is run, it only uses 1% of the cpu but maxes out the ram (8GB). It takes days to run. What would be the best way to start to solve this problem? Would GPU processing be a good idea or do I need to find a way to offload some of the completed calculations to the HDD?
I am just trying to find avenues of thought to move towards a solution. Is my model just pulling too much data into the ram, resulting in slow calculations? | 0 | 1 | 151 |
0 | 23,622,159 | 0 | 1 | 0 | 0 | 2 | false | 3 | 2014-05-13T01:23:00.000 | 1 | 2 | 0 | All of Ram being used in a Cellular Automata Python Script | 23,621,423 | 0.099668 | python,arrays,memory-management,anaconda,cellular-automata | Sounds like your problem is memory management. You're likely writing to your swap file, which would drastically slow down your processing. GPU wouldn't help you with this, as you said you're maxing out your RAM, not your processing (CPU). You probably need to rewrite your algorithm or use different datatypes, but you haven't shared your code, so it's hard to diagnose just based on what you've written. I hope this is enough information to get you heading in the right direction. | I have a high intensity model that is written in python, with array calculations involving over 200,000 cells for over 4000 time steps. There are two arrays, one a fine grid array, one a coarser grid mesh, Information from the fine grid array is used to inform the characteristics of the coarse grid mesh. When the program is run, it only uses 1% of the cpu but maxes out the ram (8GB). It takes days to run. What would be the best way to start to solve this problem? Would GPU processing be a good idea or do I need to find a way to offload some of the completed calculations to the HDD?
I am just trying to find avenues of thought to move towards a solution. Is my model just pulling too much data into the ram, resulting in slow calculations? | 0 | 1 | 151 |
0 | 34,070,637 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-05-13T05:54:00.000 | 0 | 2 | 0 | Graphviz xdot utility fails to parse graphs | 23,623,717 | 0 | python,python-2.7,ubuntu,graphviz | This is a bug in latest ubuntu xdot package, please use xdot in pip repository:
sudo apt-get remove xdot
sudo pip install xdot | Lately I have observed that xdot utility which is implemented in python to view dot graphs is giving me following error when I am trying to open any dot file.
File "/usr/bin/xdot", line 4, in xdot.main()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1947, in main win.open_file(args[0])
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1881, in open_file self.set_dotcode(fp.read(), filename)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1863, in set_dotcode if self.widget.set_dotcode(dotcode, filename):
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1477, in set_dotcode self.set_xdotcode(xdotcode)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1497, in set_xdotcode self.graph = parser.parse()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1167, in parse DotParser.parse(self)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 977, in parse self.parse_graph()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 986, in parse_graph self.parse_stmt()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1032, in parse_stmt self.handle_node(id, attrs)
File "/usr/lib/python2.7/dist-packages/xdot.py", line 1142, in handle_node shapes.extend(parser.parse())
File "/usr/lib/python2.7/dist-packages/xdot.py", line 612, in parse w = s.read_number()
File "/usr/lib/python2.7/dist-packages/xdot.py", line 494, in read_number return int(self.read_code())
ValueError: invalid literal for int() with base 10: '206.05'
I have observed few things;
The same utility works fine for me on previous ubuntu versions(12.04, 13.04). The problem is when this is run on ubuntu 14.04. I am not sure if it is an ubuntu problem.
As per the trace log above, the int() function has encountered some float value which is causing the exception at the end of the log. But the contents of my dot files do not contain any float value, so how come the trace shows ValueError: invalid literal for int() with base 10: '206.05'?
Any clue will be helpful. | 0 | 1 | 1,384 |
0 | 23,634,933 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2014-05-13T14:56:00.000 | 8 | 3 | 0 | How to create the negative of a sentence in nltk | 23,634,759 | 1 | python,nlp,nltk | No, there is not. What is more important, it is quite a complex problem, which can be a topic of research, and not something that a "simple built-in function" could solve. Such an operation requires semantic analysis of the sentence; think about, for example, "I think that I could run faster": which of the 3 verbs should be negated? We know that it is "think", but for the algorithm they are just the same. Even the case of detecting whether you should use "do" or "does" is not so easy. Consider "Mary and Jane walked down the road" and "Jane walked down the road"; without a parse tree you won't be able to distinguish the singular/plural problem. To sum up, there is no, and cannot be, any simple solution. You can design any kind of heuristic you want (one such is the proposed POS-based negation) and if it fails, start research in this area.
'I run' to 'I do not run'
or
'She runs' to 'She does not run'.
I suppose I could use POS to detect the verb and its preceding pronoun but I just wondered if there was a simpler built in function | 0 | 1 | 2,143 |
0 | 23,644,971 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-05-14T02:11:00.000 | 0 | 2 | 1 | Using TotalOrderPartitioner in Hadoop streaming | 23,644,545 | 0 | python,hadoop | Did not try, but taking the example with KeyFieldBasedPartitioner and simply replacing:
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
with
-partitioner org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
Should work. | I'm using python with Hadoop streaming to do a project, and I need the similar functionality provided by the TotalOrderPartitioner and InputSampler in Hadoop, that is, I need to sample the data first and create a partition file, then use the partition file to decide which K-V pair will go to which reducer in the mapper. I need to do it in Hadoop 1.0.4.
I could only find some Hadoop streaming examples with KeyFieldBasedPartitioner and customized partitioners, which use the -partitioner option in the command to tell Hadoop to use these partitioners. The examples I found using TotalOrderPartitioner and InputSampler are all in Java, and they need to use the writePartitionFile() of InputSampler and the DistributedCache class to do the job. So I am wondering if it is possible to use TotalOrderPartitioner with hadoop streaming? If it is possible, how can I organize my code to use it? If it is not, is it practical to implement the total partitioner in python first and then use it? | 0 | 1 | 909 |
0 | 23,659,957 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-05-14T16:01:00.000 | 1 | 1 | 0 | Tell scipy.optimize.minimize to fail | 23,659,698 | 1.2 | python,scipy,mathematical-optimization,minimize | You have 2 options I can think of:
opt for constrained optimization
modify your objective function to diverge whenever your numerical simulation does not converge. Basically this means returning a large value, large compared to a 'normal' value, which depends on your problem at hand. minimize will then try to optimize going in another direction
I am however a bit surprised that minimize does not understand inf as a large value, and does not try to look for a solution in another direction. Could it be that it returns with 0 iterations only when your objective function returns nan? You could try debugging the issue by printing the value just before the return statement in your objective function. | I'm using scipy.optimize.minimize for unrestricted optimization of an objective function which receives a couple of parameters and runs a complex numerical simulation based on these parameters. This simulation does not always converge in which case I make the objective function return inf, in some cases, in others NaN.
I thought that this hack would prevent the minimization from converging anywhere near a set of parameters that makes the simulation diverge. Instead, I encountered a case where the simulation won't even converge for the starting set of parameters but instead of failing, the optimization terminates "successfully" with 0 iterations. It doesn't seem to care about the objective function returning inf.
Is there a way to tell scipy.optimize.minimize to fail, e.g. by raising some sort of exception. While in this case it's obvious that the optimization didn't terminate successfully - because of 0 iterations and the fact that I know the optimal result - at some point I want to run problems that I don't know the solution for and I need to rely on minimize to tell me if shit hit the fan. If returning lots of nans and infs doesn't "break" the algorithm I guess I'll have to do it by brute force.
Here is an example of what the almost-iteration looks like.
The function - a function of two variables - is called 4 times over all:
1) at the starting point -> simulation diverges, f(x) = inf
2) at a point 1e-5 to the right (gradient approximation) -> simulation diverges, f(x) = inf
3) at a point 1e-5 higher (grad. appr.) -> simulation converges, f(x) = some finite value
4) once more at the starting point -> simulation diverges, f(x) = inf | 0 | 1 | 2,291 |
0 | 23,668,100 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-05-15T01:15:00.000 | 3 | 1 | 0 | How to split one big rectangle on N smaller rectangles to look random? | 23,667,700 | 1.2 | python,c++,algorithm,boost | One rectangle can be divided into two rectangles by drawing either a horizontal or a vertical line. Divide one of those rectangles and the result is three rectangles. Continue until you have N rectangles. Some limitations to observe to improve the results
Don't divide a rectangle with a horizontal line if the height is
below some threshold
Don't divide a rectangle with a vertical line if the width is below
some threshold
Divide by quarters, thirds, or halves, i.e. avoid 90/10 splits
Keep a list of rectangles sorted by area, and always divide the
rectangle with the largest area | How to split one big rectangle on N smaller rectangles to look random ?
I need to generate couple divisions for different value of n.
Is there library for this in boost for c++ or some for python ? | 0 | 1 | 1,261 |
0 | 23,682,058 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2014-05-15T12:06:00.000 | 8 | 1 | 0 | What's the meaning of p-values which produced by feature selection (i.e. chi2 method)? | 23,677,734 | 1.2 | python,classification,scikit-learn,feature-selection | In general the p-value indicates how probable a given outcome or a more extreme outcome is under the null hypothesis. In your case of feature selection, the null hypothesis is something like this feature contains no information about the prediction target, where no information is to be interpreted in the sense of the scoring method: If your scoring method tests e.g. univariate linear interaction (f_classif, f_regression in sklearn.feature_selection are options for your scoring function), then the null hypothesis says that this linear interaction is not present.
TL;DR The p-value of a feature selection score indicates the probability that this score or a higher score would be obtained if this variable showed no interaction with the target.
Another general statement: scores are better if greater, p-values are better if smaller (and losses are better if smaller) | Recently, I have used sklearn(a python meachine learning library) to do a short-text classification task. I found that SelectKBest class can choose K best of features. However, the first argument of SelectKBest is a score function, which "taking two arrays X and y, and returning a pair of arrays (scores, pvalues)". I know that scores, but what is the meaning of pvalues? | 0 | 1 | 6,714 |
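A short sketch showing where the scores and p-values discussed above live in scikit-learn (the toy data below is randomly generated):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2

X, y = make_classification(n_samples=200, n_features=6, random_state=0)
X = np.abs(X)                          # chi2 requires non-negative features

selector = SelectKBest(score_func=chi2, k=3).fit(X, y)
print(selector.scores_)                # larger score = stronger apparent dependence on y
print(selector.pvalues_)               # smaller p-value = less likely under "no association"
X_reduced = selector.transform(X)      # keeps the k best-scoring columns
```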
0 | 23,698,952 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-05-15T13:38:00.000 | 0 | 1 | 0 | 'combine first' in pandas produces NA error | 23,679,951 | 0 | python,pandas | Found the best solution:
pd.tools.merge.concat([test.construction,test.ops],join='outer')
Joins along the date index and keeps the different columns. To the extent the column names are the same, it will join 'inner' or 'outer' as specified. | I have two dataframes, each with a series of dates as the index. The dates to not overlap (in other words one date range from, say, 2013-01-01 through 2016-06-15 by month and the second DataFrame will start on 2016-06-15 and run quarterly through 2035-06-15.
Most of the column names overlap (i.e. are the same) and the join just does fine. However, there is one columns in each DataFrame that I would like to preserve as 'belonging' to the original DataFrame so that I have them both available for future use. I gave each a different name. For example, DF1 has a column entitled opselapsed_time and DF2 has a column entitled constructionelapsed_time.
When I try to combine DF1 and DF2 together using the command DF1.combine_first(DF2) or vice versa I get this error: ValueError: Cannot convert NA to integer.
Could someone please give me advice on how best to resolve?
Do I need to just stick with using a merge/join type solution instead of combine_first? | 0 | 1 | 373 |
0 | 23,688,578 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2014-05-15T20:41:00.000 | 9 | 1 | 0 | Confusion about Artist place in matplotlib hierarchy | 23,688,227 | 1.2 | python,matplotlib | I do not like those layers
pylab : massive namespace dump that pulls in everything from pyplot and numpy. It is not a 'layer' so much as a very cluttered name space.
pyplot : state-machine based layer (it knows what your 'current axes' and 'current figure' are and applies the functions to that axes/figure). You should use this only when you are playing around in an interactive terminal (other than plt.subplots or plt.figure for setting up the figure/axes objects). Making a distinction between this layer and pylab is dumb. This is the layer that makes it 'like MATLAB'
the OO layer : which is what you should use in all of your scripts
Figures are the top level container objects. They can contain Axes objects and Artist objects (technically Axes are Artists, but it is useful for pedagogical reasons to distinguish between the Axes objects in a figure and the other artists (such as text objects) that are in the Figure, but not associated with an Axes) and know about the Canvas object. Each Axes can contain more Artist objects. The Artists are the useful things you want to put on your graph (lines, text, images, etc). Artists know how to draw themselves on a Canvas. When you call fig.savefig (or render the figure to the screen) the Figure object loops over all of its children and tells them to draw themselves onto its Canvas.
The different Backends provide implementations of the Canvas objects, hence the same figure can be rendered to a raster or vector graphic by just changing the Canvas object that is being used.
Unless you want to write a new backend, many of these details are not important and the fact that matplotlib hides them from you is why it is useful.
If the book couldn't get this correct, I would take everything it says with a grain of salt. | I have been working with matplotlib recently. I studied many examples, and was able to modify them to suit several of my needs. But I would like to better understand the general structure of the library. For this reason, apart from reading a lot of tutorials on the web, I also purchased the ebook "Matplotlib for Python Developers", by Tosi. While it is full of good examples, I still don't fully grasp the relationships between all the different levels.
The book clearly explains that matplotlib has 3 main "modes":
1) pylab, to work similarly to Matlab
2) pyplot, to work in a procedural way
3) the full OO system
Concerning the objects of the OO system, the book lists 3 levels:
1) FigureCanvas, the container class for the Figure instance
2) Figure, the container for Axes instances
3) Axes, the areas that hold the basic elements (lines, points, text...)
The problem is that, while reading the official documentation, I also encountered the notions of Backends and Artists. While I understand the basic logic of them, I am quite confused about their role with respect to the previous classifications. In particular, are the Artists at an intermediate level between the FigureCanvas and the Figure, or is that hierarchy not suitable in this case?
I would be grateful to receive some clarification, possibly with pointers to other documentation I may have missed.
Thanks. | 0 | 1 | 1,693 |
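A short object-oriented sketch that maps onto the hierarchy described in the answer above (Figure → Axes → Artists, with the Canvas doing the rendering); this code is illustrative and not taken from the book:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()                                        # a Figure containing one Axes
line, = ax.plot([0, 1, 2], [0, 1, 4])                           # a Line2D Artist owned by the Axes
label = fig.text(0.5, 0.95, "figure-level text", ha="center")   # an Artist owned by the Figure itself
print(type(fig.canvas))       # the backend-specific FigureCanvas
print(fig.axes)               # the Axes contained in the Figure
print(ax.get_children())      # the Artists living inside this Axes
fig.savefig("demo.png")       # the Figure asks its children to draw themselves onto the Canvas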
0 | 23,734,414 | 0 | 1 | 0 | 0 | 1 | false | 13 | 2014-05-16T15:40:00.000 | 0 | 7 | 0 | Get permutation with specified degree by index number | 23,699,378 | 0 | python,algorithm,permutation,time-complexity,combinatorics | The first part is straightforward if you work wholly on the lexicographic side of things. Given my answer on the other thread, you can go from a permutation to the factorial representation instantly. Basically, you imagine a list {0,1,2,3}, and the number that I need to go along is the factorial representation, so for 1,2,3,4, I keep taking the zeroth element and get 000 (0*3! + 0*2! + 0*1!).
0,1,2,3, => 000
1032 = 3!+1! = 8th permutation (as 000 is the first permutation) => 101
And you can work out the degree trivially, as each transposition which swaps a pair of numbers (a,b) a
So 0123 -> 1023 is 000 -> 100.
if a>b you swap the numbers and then subtract one from the right hand number.
Given two permutations/lexicographic numbers, I just permute the digits from right to left like a bubble sort, counting the degree that I need, and building the new lexicographic number as I go. So to go from 0123 to 1032 I first move the 1 to the left, then the zero is in the right position, and then I move the 2 into position, and both of those had pairs with the right-hand number greater than the left-hand number, so both add a 1, so 101.
This deals with your first problem. The second is much more difficult, as the numbers of degree two are not evenly distributed. I don't see anything better than getting the global lexicographic number (global meaning here the number without any exclusions) of the permutation you want, e.g. 78 in your example, and then going through all the lexicographic numbers; each time you get to one which is degree 2, add one to your global lexicographic number, e.g. 78 -> 79 when you find the first number of degree 2. Obviously, this will not be fast. Alternatively you could try generating all the numbers of degree two. Given a set of n elements, there are (n-1)(n-2) numbers of degree 2, but it's not clear to me that this holds going forward; it might easily be a lot less work than computing all the numbers up to your target, and you could just see which ones have a lexicographic number less than your target number, and again add one to its global lexicographic number.
I'll see if I can come up with something better. | I've been working on this for hours but couldn't figure it out.
Define a permutation's degree to be the minimum number of transpositions that need to be composed to create it. So the degree of (0, 1, 2, 3) is 0, the degree of (0, 1, 3, 2) is 1, the degree of (1, 0, 3, 2) is 2, etc.
Look at the space Snd as the space of all permutations of a sequence of length n that have degree d.
I want two algorithms. One that takes a permutation in that space and assigns it an index number, and another that takes an index number of an item in Snd and retrieves its permutation. The index numbers should obviously be successive (i.e. in the range 0 to len(Snd)-1, with each permutation having a distinct index number.)
I'd like this implemented in O(sane); which means that if you're asking for permutation number 17, the algorithm shouldn't go over all the permutations between 0 and 16 to retrieve your permutation.
Any idea how to solve this?
(If you're going to include code, I prefer Python, thank you.)
Update:
I want a solution in which
The permutations are ordered according to their lexicographic order (and not by manually ordering them, but by an efficient algorithm that produces them in lexicographic order to begin with) and
I want the algorithm to accept a sequence of different degrees as well, so I could say "I want permutation number 78 out of all permutations of degrees 1, 3 or 4 out of the permutation space of range(5)". (Basically the function would take a tuple of degrees.) This'll also affect the reverse function that calculates index from permutation; based on the set of degrees, the index would be different.
I've tried solving this for the last two days and I was not successful. If you could provide Python code, that'd be best. | 0 | 1 | 1,866 |
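For the first half of the problem, ignoring the degree restriction, a hedged sketch of the permutation ↔ lexicographic index mapping via the factorial number system that the answer alludes to (restricting the indexing to a given set of degrees is not handled here):
from math import factorial
def perm_to_rank(perm):
    # Lexicographic rank of a permutation of 0..n-1 (0 = identity permutation).
    rank, remaining = 0, sorted(perm)
    for i, p in enumerate(perm):
        j = remaining.index(p)
        rank += j * factorial(len(perm) - 1 - i)
        remaining.pop(j)
    return rank
def rank_to_perm(rank, n):
    # Inverse mapping: lexicographic rank -> permutation of 0..n-1.
    remaining, perm = list(range(n)), []
    for i in range(n - 1, -1, -1):
        j, rank = divmod(rank, factorial(i))
        perm.append(remaining.pop(j))
    return perm
assert perm_to_rank([1, 0, 3, 2]) == 7
assert rank_to_perm(7, 4) == [1, 0, 3, 2]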
0 | 23,717,381 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-05-17T22:58:00.000 | 1 | 1 | 0 | IFFT taking orders of magnitude more than FFT | 23,716,904 | 1.2 | python,numpy,scipy,signal-processing,fft | If your IFFT's length is different from that of the FFT, and the length of the IFFT isn't composed of only very small prime factors (2,3,etc.), then the efficiency can drop off significantly.
Thus, this method of resampling is only efficient if the two sample rates are different by ratios with small prime factors, such as 2, 3 and 7 (hint). | I'm trying to resample a 1-D signal using an FFT method (basically, the one from scipy.signal). However, the code is taking forever to run, even though my input signal is a power of two in length. After looking at profiling, I found the root of the problem.
Basically, this method takes an FFT, then removes part of the Fourier spectrum, then takes an IFFT to bring it back to the time domain at a lower sampling rate.
The problem is that the IFFT is taking far longer to run than the FFT:
ncalls tottime percall cumtime percall filename:lineno(function)
1 6263.996 6263.996 6263.996 6263.996 basic.py:272(ifft)
1 1.076 1.076 1.076 1.076 basic.py:169(fft)
I assume that this has something to do with the number of Fourier points remaining after the cutoff. That said, this is an incredible slowdown, so I want to make sure that:
A. This behavior is semi-reasonable and isn't definitely a bug.
B. What could I do to avoid this problem and still downsample effectively.
Right now I can pad my input signal to a power of two in order to make the FFT run really quickly, but not sure how to do the same kind of thing for the reverse operation. I didn't even realize that this was an issue for IFFTs :P | 0 | 1 | 285 |
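A small sketch illustrating the answer's point about prime factors of the output length (the array sizes here are arbitrary and only serve to show the contrast):
import numpy as np
from scipy import signal
x = np.random.randn(2 ** 20)            # input length is a power of two: fast FFT
fast = signal.resample(x, 2 ** 19)      # output length 524288 = 2**19: the IFFT stays fast
slow = signal.resample(x, 524287)       # 524287 is prime, so the IFFT can be far slower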
0 | 23,722,134 | 0 | 1 | 0 | 0 | 3 | false | 3 | 2014-05-18T12:09:00.000 | 3 | 4 | 0 | Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB? | 23,721,725 | 0.148885 | python,matlab,numpy,scipy | From my experience, using Python is more rewarding, especially for a beginner in engineering. In comparison to Matlab, Python is a general purpose language, and knowing it makes many more tasks than, say, signal analysis easy to accomplish. In my opinion it's easier to interface with external hardware or to do other tasks where you need a "glue" language.
And with respect to signal processing, numpy, scipy, and matplotlib are a very good choice! I never felt I would miss out on anything! It was rather the other way around: with Matlab I was missing all the general-purpose stuff and the "batteries included" nature of Python. The number of freely available libraries for Python is just overwhelming.
On top of that, basing your work on an open-source project pays off. As a student, you can simply install Python on all the machines that matter to you (no additional costs), you can benefit from reading the source of others (a great learning experience), and once you are doing some "production" stuff later on, you have the power to fix things yourself. With Matlab and other closed-source packages, you always depend on somebody else.
Good luck! | I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is starting to move over to Python, through it's numpy and scipy packages, at an alarming rate.
I feel that I am at the beginning of my 'programming journey' and have been told a few times that I should choose a language and stick to it.
I have basically made up my mind that I want to move over to Python, but would like some informed opinions on my decision?
I am not looking to start a MATLAB vs Python style thread as this will only lead to people giving their opinions as facts and I know this is not the style of this forum. I am simply looking for validation that a move from MATLAB to Python is a good idea for a person in my position.
P.S. I know Python is free and MATLAB is expensive, but that is simply not a good enough reason for me to make this decision. | 0 | 1 | 375
0 | 23,722,826 | 0 | 1 | 0 | 0 | 3 | false | 3 | 2014-05-18T12:09:00.000 | 6 | 4 | 0 | Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB? | 23,721,725 | 1 | python,matlab,numpy,scipy | You should consider what particular capabilities you need, and see if Numpy and Scipy can meet them. Matlab's real value isn't in the base package, which is more-or-less matched by a combination of numpy, scipy and matplotlib, but in the various toolboxes one can purchase. For instance, I'm not aware of a Robust Control toolbox equivalent for Python.
Another feature of Matlab that doesn't have an easy-to-use Python equivalent is Simulink, especially the mature real-time hardware-in-the-loop simulation and embedded code-generation. There are open-source projects with similar goals: JModelica is worth looking at, as is Scilab's Scicos.
A final consideration is what is used in the industry you plan to work in.
Having said all that, if you can use Python, you should; it's more fun, and it's (probably) a fundamentally better language. If you do become proficient in Python, switching to Matlab if you have to won't be very difficult.
My experience is that using Python made me a better Matlab programmer; Python's basic facilities (list comprehensions, dictionaries, modules, etc.) made me look for similar capabilities in Matlab, and made me organize my Matlab code better. | I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is starting to move over to Python, through it's numpy and scipy packages, at an alarming rate.
I feel that I am at the beginning of my 'programming journey' and have been told a few times that I should choose a language and stick to it.
I have basically made up my mind that I want to move over to Python, but would like some informed opinions on my decision?
I am not looking to start a MATLAB vs Python style thread as this will only lead to people giving their opinions as facts and I know this is not the style of this forum. I am simply looking for validation that a move from MATLAB to Python is a good idea for a person in my position.
P.S. I know Python is free and MATLAB is expensive, but that is simply not a good enough reason for me to make this decision. | 0 | 1 | 375
0 | 23,722,879 | 0 | 1 | 0 | 0 | 3 | false | 3 | 2014-05-18T12:09:00.000 | 3 | 4 | 0 | Although not very experienced, I have a great interest in scientific programming. Is Python a good choice compared to MATLAB? | 23,721,725 | 0.148885 | python,matlab,numpy,scipy | I personally feel that working with Python is a lot better. As @bdoering mentioned, working on open-source projects is far better than working on closed source.
Matlab is quite industry specific, and is still not widespread in industry. If you work with these software packages, sooner or later you will be stuck between different kinds of them too (e.g., Matlab vs Mathematica). However, the syntax will be easy to write and modules will run and simulate quickly. But in the end there will always be a limitation with Matlab. My observation is that using a package like Matlab may give you quick simulations of graphs and models, but will limit your learning curve.
Go for Python! | I am a grad student in Engineering and I am beginning to realise how important a skill programming is in my profession. In undergrad studies we were introduced to MATLAB (Actually Octave) and I used to think that that was the way to go, but I have been doing some research and it seems that the scientific community is starting to move over to Python, through it's numpy and scipy packages, at an alarming rate.
I feel that I am at the beginning of my 'programming journey' and have been told a few times that I should choose a language and stick to it.
I have basically made up my mind that I want to move over to Python, but would like some informed opinions on my decision?
I am not looking to start a MATLAB vs Python style thread as this will only lead to people giving their opinions as facts and I know this is not the style of this forum. I am simply looking for validation that a move from MATLAB to Python is a good idea for a person in my position.
P.S. I know Python is free and MATLAB is expensive, but that is simply not a good enough reason for me to make this decision. | 0 | 1 | 375
0 | 23,765,727 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-05-19T04:53:00.000 | 2 | 2 | 0 | Text summarization using deep learning techniques | 23,729,919 | 0.197375 | python,theano,summarization,deep-learning | I think you need to be a little more specific. When you say "I am unable to figure to how exactly the summary is generated for each document", do you mean that you don't know how to interpret the learned features, or don't you understand the algorithm? Also, "deep learning techniques" covers a very broad range of models - which one are you actually trying to use?
In the general case, deep learning models do not learn features that are humanly intepretable (albeit, you can of course try to look for correlations between the given inputs and the corresponding activations in the model). So, if that's what you're asking, there really is no good answer. If you're having difficulties understanding the model you're using, I can probably help you :-) Let me know. | I am trying to summarize text documents that belong to legal domain.
I am referring to the site deeplearning.net on how to implement the deep learning architectures. I have read quite a few research papers on document summarization (both single document and multidocument) but I am unable to figure to how exactly the summary is generated for each document.
Once the training is done, the network stabilizes during testing phase. So even if I know the set of features (which I have figured out) that are learnt during the training phase, it would be difficult to find out the importance of each feature (because the weight vector of the network is stabilized) during the testing phase where I will be trying to generate summary for each document.
I tried to figure this out for a long time but it's in vain.
If anybody has worked on it or have any idea regarding the same, please give me some pointers. I really appreciate your help. Thank you. | 0 | 1 | 2,167 |
0 | 23,784,889 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-05-21T13:25:00.000 | 0 | 1 | 0 | Pandas performance: Multiple dtypes in one column or split into different dtypes? | 23,784,578 | 0 | python,pandas | Seems to me that it may depend on what your subsequent use case is. But IMHO I would make each column unique type otherwise functions such as group by with totals and other common Pandas functions simply won't work. | I have huge pandas DataFrames I work with. 20mm rows, 30 columns. The rows have a lot of data, and each row has a "type" that uses certain columns. Because of this, I've currently designed the DataFrame to have some columns that are mixed dtypes for whichever 'type' the row is.
My question is, performance wise, should I split out mixed dtype columns into two separate columns or keep them as one? I'm running into problems getting some of these DataFrames to even save(to_pickle) and trying to be as efficient as possible.
The columns could be mixes of float/str, float/int, float/int/str as currently constructed. | 0 | 1 | 581 |
0 | 38,926,745 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-05-21T14:48:00.000 | 1 | 2 | 0 | Updating Pandas dependencies after installing pandas | 23,786,694 | 0.099668 | python,pandas | Yes, you will, Pandas sources those dependencies. | Pandas has a number of dependencies, e.g matplotlib, statsmodels, numexpr etc.
Say I have Pandas installed, and I update many of its dependencies. If I don't update Pandas, could I run into any problems? | 0 | 1 | 184
0 | 23,787,041 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2014-05-21T14:48:00.000 | 3 | 2 | 0 | Updating Pandas dependencies after installing pandas | 23,786,694 | 1.2 | python,pandas | If your version of pandas is old (i.e., not 0.13.1), you should definitely update it to take advantage of any new features/optimizations of the dependencies, and any new features/bug fixes of pandas itself. It is a very actively-maintained project, and there are issues with older versions being fixed all the time.
Of course, if you have legacy code that depends on an older version, you should test it in a virtualenv with the newer versions of pandas and the dependencies before updating your production libraries, but at least in my experience the newer versions are pretty backwards-compatible, as long as you're not relying on buggy behavior. | Pandas has a number of dependencies, e.g matplotlib, statsmodels, numexpr etc.
Say I have Pandas installed, and I update many of its dependencies. If I don't update Pandas, could I run into any problems? | 0 | 1 | 184
0 | 23,816,393 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-05-21T20:56:00.000 | 1 | 3 | 0 | How to extract meaning from sentences after running named entity recognition? | 23,793,628 | 0.066568 | python,nlp,nltk | I do not think your "algo" is even doing entity recognition... however, stretching the problem you presented quite a bit, what you want to do looks like coreference resolution in coordinated structures containing ellipsis. Not easy at all: start by googling for some relevant literature in linguistics and computational linguistics. I use the standard terminology from the field below.
In practical terms, you could start by assigning the nearest antecedent (the most frequently used approach in English). Using your examples:
first extract all the "entities" in a sentence
from the entity list, identify antecedent candidates ("litigation", etc.). This is a very difficult task, involving many different problems... you might avoid it if you know in advance the "entities" that will be interesting for you.
finally, you assign (resolve) each anaphora/cataphora to the nearest antecedent. | First: Any recs on how to modify the title?
I am using my own named entity recognition algorithm to parse data from plain text. Specifically, I am trying to extract lawyer practice areas. A common sentence structure that I see is:
1) Neil focuses his practice on employment, tax, and copyright litigation.
or
2) Neil focuses his practice on general corporate matters including securities, business organizations, contract preparation, and intellectual property protection.
My entity extraction is doing a good job of finding the key words, for example, my output from sentence one might look like this:
Neil focuses his practice on (employment), (tax), and (copyright litigation).
However, that doesn't really help me. What would be more helpful is if I got an output that looked more like this:
Neil focuses his practice on (employment - litigation), (tax - litigation), and (copyright litigation).
Is there a way to accomplish this goal using an existing Python framework such as NLTK? After my algorithm extracts the practice areas, can I use NLTK to extract the other words that my "practice areas" modify, in order to get a more complete picture? | 0 | 1 | 1,842
0 | 23,809,181 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-05-22T13:34:00.000 | 1 | 1 | 0 | Define a 2D Gaussian probability with five peaks | 23,808,446 | 0.197375 | python,numpy,statistics,scipy,probability | If I understand what you're asking, check out Gaussian Mixture Models and Expectation Maximization. I don't know of any pre-implemented versions of these in Python, although I haven't looked too hard. | I have a 2D data and it contains five peaks. Could I fit five 2D Gaussians function to obtain the peaks? In my problem, the peaks do not refer to the clustering problem. Which I think EM would be an appropriate answer for it.
In my case I measure a variable in x-y space and it shows maxima in more than one position. Is fitting a Fourier series, or using the Expectation-Maximization method, still an applicable solution to my problem?
In order to build my likelihood, do I need to just add up the five 2D Gaussian distributions, with x, y, and the height of each peak as variables? | 0 | 1 | 190
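If a ready-made Python implementation is wanted after all, newer versions of scikit-learn do ship a Gaussian mixture / EM implementation; a minimal sketch (the fake data and the availability of sklearn.mixture.GaussianMixture, added around scikit-learn 0.18, are assumptions):
import numpy as np
from sklearn.mixture import GaussianMixture
centers = np.array([[0, 0], [5, 5], [5, -5], [-5, 5], [-5, -5]])      # five made-up peak locations
samples = np.vstack([c + np.random.randn(200, 2) for c in centers])   # fake 2-D samples around them
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(samples)
print(gmm.means_)        # estimated peak positions
print(gmm.covariances_)  # estimated spread of each peak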
1 | 23,861,390 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-05-23T01:04:00.000 | 1 | 1 | 0 | Interoperability advice - Python, C, Matplotlib/OpenGL run-time efficency | 23,819,504 | 0.197375 | python,c,opengl,interop,python-cffi | It would help to know what the turnaround time for the simulation runs is and how fast you want to display and update graphs. More or less realtime, tens of milliseconds for each? Seconds? Minutes?
If you want to draw graphs, I'd recommend Matplotlib rather than OpenGL. Even hacking the Matplotlib code yourself to make it do exactly what you want will probably still be easier than doing stuff in OpenGL. And Matplotlib also has "XKCD" style graphs :-)
PyOpenGL works fine with wxPython. Most of the grunt work in modern 3D is done by the GPU so it probably won't be worth doing 3D graphics in C rather than Python if you decide to go that route.
Hope this helps. | Current conditions:
C code being rewritten to do almost the same type of simulation every time (learning behavior in mice)
Matlab code being written for every simulation to plot results (2D, potentially 3D graphs)
Here are my goals:
Design GUI (wxPython) that allows me to build a dynamic simulator
GUI also displays results of simulation via OpenGL (or perhaps Matplotlib)
Use a C wrapper (CFFI) to run the simulation and send the results (averages) to OpenGL or Matplotlib
Question:
In order to have this software run as efficiently as possible, it makes sense to me that CFFI should be used to run the simulation...what I'm not sure about is if it would be better to have that FFI instance (or a separate one?) use an OpenGL C binding to do all the graphics stuff and pass the resulting graphs up to the Python layer to display in the GUI, or have CFFI send the averages of the simulations (the data that gets plotted) to variables in the Python level and use PyOpenGL or Matplotlib to plot graphs. | 0 | 1 | 461 |
0 | 23,841,290 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-05-23T21:17:00.000 | 3 | 1 | 0 | Example order in machine learning algorithms (Scikit Learn) | 23,838,453 | 0.53705 | python,numpy,machine-learning,scipy,scikit-learn | No, the ordering of the patterns in the training set does not matter. While the ordering of samples can affect stochastic gradient descent learning algorithms (like, for example, the one for the NN), they are in most cases coded in a way that ensures internal randomness. SVM, on the other hand, is globally convergent and it will result in the exact same solution regardless of the ordering. | I'm doing some classification with Python and scikit-learn. I have a question which doesn't seem to be covered in the documentation: if I'm doing, for example, classification with SVM, does the order of the input examples matter? If I have binary labels, will the results be less accurate if I put all the examples with label 0 next to each other and all the examples with label 1 next to each other, or would it be better to mix them up? What about the other algorithms scikit provides? | 0 | 1 | 138
0 | 23,876,265 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-05-24T12:49:00.000 | 0 | 2 | 0 | Constrained optimization in SciPy | 23,845,235 | 0 | python,numpy,scipy | The returned value of scipy.optimize.minimize is of type Result:
Result contains, among other things, the inputs (x) which minimize f. | I need, for a simulation, to find the argument (parameters) that maximizes a multivariable function with constraints.
I've seen that scipy.optimize.minimize gives the minimum of a given function (and, via the negated function, the maximum), and I can use constraints and bounds. But, reading the docs, I found that it returns the minimum value but not the parameter that minimizes it (am I right?).
scipy.optimize.fmin does give the parameter that minimizes the function, but it doesn't accept bounds or constraints.
Looking in numpy, there is a function called argmin, but it takes a vector as argument and returns the "parameter" (index) that minimizes it.
Is there a function that, like minimize, accepts constraints and, like fmin, returns the parameter that minimizes the function?
Thanks in advance. | 0 | 1 | 4,417 |
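A small sketch confirming the answer above: the object returned by scipy.optimize.minimize carries the minimizing parameters as well as the minimum value (the objective and bounds below are made up for illustration):
import numpy as np
from scipy.optimize import minimize
def objective(p):
    x, y = p
    return (x - 1) ** 2 + (y - 2.5) ** 2      # unconstrained minimum at (1, 2.5)
res = minimize(objective, x0=[0.0, 0.0],
               bounds=[(0, 2), (0, 2)],        # box constraints clip y to 2
               method="L-BFGS-B")
print(res.x)    # the parameters that minimize the objective, roughly [1.0, 2.0]
print(res.fun)  # the minimum value found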
0 | 23,895,466 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-05-27T17:22:00.000 | 1 | 1 | 0 | Change the column name of dataframe at runtime | 23,895,408 | 1.2 | python,pandas | i would recommend just using pandas.io.sql to download your database data. it returns your data in a DataFrame.
but if, for some reason, you want to access the columns, you already have your answer:
assignment: df['column%d' % count] = data
retrieval: df['column%d' % count] | I am trying to initialize an empty dataframe with 5 column values. Say column1, column2, column3, column4, column5.
Now I want to read data from the database and insert specific column values from the database into this dataframe. Since there are 5 columns, it's easy to do it individually. But I have to extend the number of columns of the dataframe to 70. For that I am using a for loop.
To update the column value I was using
dataframe['column "+count+"'] = .... where count is an incremental variable ranging up to 70.
But the above code adds a new column to the dataframe. How can I use the count variable to access these column names? | 0 | 1 | 193 |
0 | 23,944,220 | 0 | 1 | 1 | 0 | 2 | false | 1 | 2014-05-28T10:02:00.000 | 1 | 2 | 0 | how improve speed of math.sqrt() with numba jit compiler in python 2.7 | 23,908,547 | 0.099668 | python,performance,jit,numba | Numba is mapping math.sqrt calls to sqrt/sqrtf in libc already. The slowdown probably comes from the overhead of Numba. This overhead comes from (un)boxing PyObjects and detecting if errors occurred in the compiled code. It affects calling small functions from Python but less when calling from another Numba compiled function because there is no (un)boxing
If you set the environment variable NUMBA_OPT=3, aggressive optimization will turn on, eliminating some of the overhead but increasing the code generation time. | I have a complex function that performs math operations that cannot be vectorized. I have found that using the NUMBA jit compiler actually slows performance. It is probably because, within this function, I make calls to Python's math.sqrt.
How can I force NUMBA to replace calls to python math.sqrt to faster C calls to sqrt?
--
regards
Kes | 0 | 1 | 1,612 |
0 | 23,943,709 | 0 | 1 | 1 | 0 | 2 | false | 1 | 2014-05-28T10:02:00.000 | 4 | 2 | 0 | how improve speed of math.sqrt() with numba jit compiler in python 2.7 | 23,908,547 | 0.379949 | python,performance,jit,numba | Numba already does replace calls to math.sqrt to calls to a machine-code library for sqrt. So, if you are getting slower performance it might be something else.
Can you post the code you are trying to speed up? Also, which version of Numba are you using? In the latest version of Numba, you can call the inspect_types method of the decorated function to print a listing of what is being interpreted as python objects (and therefore still being slow). | I have a complex function that performs math operations that cannot be vectorized. I have found that using the NUMBA jit compiler actually slows performance. It is probably because, within this function, I make calls to Python's math.sqrt.
How can I force NUMBA to replace calls to python math.sqrt to faster C calls to sqrt?
--
regards
Kes | 0 | 1 | 1,612 |
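For reference, a minimal example of the pattern being discussed in the answers above — a Numba-compiled scalar loop that calls math.sqrt (using nopython=True is an assumption about the Numba version in use):
import math
import numpy as np
from numba import jit
@jit(nopython=True)
def sum_hypot(xs, ys):
    # Sum of sqrt(x^2 + y^2) over paired array elements, compiled to machine code.
    total = 0.0
    for i in range(xs.shape[0]):
        total += math.sqrt(xs[i] * xs[i] + ys[i] * ys[i])
    return total
print(sum_hypot(np.arange(1000.0), np.arange(1000.0)))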
0 | 23,930,543 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-05-29T02:14:00.000 | 1 | 1 | 0 | Is it possible to mask outliers within a scikit learn pipeline? | 23,924,714 | 0.197375 | python,scikit-learn,outliers | There's no support for masking in scikit-learn; outlier detection is done ad hoc by some estimators (e.g. DBSCAN, or RANSAC, which will appear in the next release).
If you want to remove outliers yourself, just use NumPy indexing. | I have a pipeline where I transform some data and fit a curve to it. Is there a preferred/standard way for masking the outliers in the data? | 0 | 1 | 906 |
0 | 23,946,348 | 0 | 1 | 1 | 0 | 1 | true | 2 | 2014-05-29T22:46:00.000 | 7 | 1 | 0 | How do numpy and GMPY2 compare with GMP in terms of speed? | 23,944,242 | 1.2 | python,c,numpy,gmp,gmpy | numpy and GMPY2 have different purposes.
numpy has fast numerical libraries but to achieve high performance, numpy is effectively restricted to working with vectors or arrays of low-level types - 16, 32, or 64 bit integers, or 32 or 64 bit floating point values. For example, numpy access highly optimized routines written in C (or Fortran) for performing matrix multiplication.
GMPY2 uses the GMP, MPFR, and MPC libraries for multiple-precision calculations. It isn't targeted towards vector or matrix operations.
The Python interpreter adds overhead to each call to an external library. Whether or not the slowdown is significant depends on how much time is spent in the external library. If the running time of the external library is very short, say 10e-8 seconds, then Python's overhead is significant. If the running time of the external library is relatively long, several seconds or longer, then Python's overhead is probably insignificant.
Since you haven't said what you are trying to accomplish, I can't give a better answer.
Disclaimer: I maintain GMPY2. | I understand that GMPY2 supports the GMP library and numpy has fast numerical libraries. I want to know how the speed compares to actually writing C (or C++) code with GMP. Since Python is a scripting language, I don't think it will ever be as fast as a compiled language, however I have been wrong about these generalizations before.
I can't get GMP to work on my computer, so I can't run any tests. If I could, just general math like addition and maybe some trig functions. I'll figure out GMP later. | 0 | 1 | 2,014 |
0 | 27,366,408 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-06-02T18:06:00.000 | 0 | 1 | 0 | I get a PyBrain BackpropTrainer AssertionError on Windows 7, which requirement is missing? | 24,000,654 | 0 | python,neural-network,backpropagation,pybrain | The assert statement checks if a condition is true. In this case, whether the inner dimension (indim) of your network is the same as that of your dataset, ds. Check whether im3.flatten() really has 12288 elements.
assert ds.indim == network.indim # 12288 != im3.flatten(), error! | I initialized ds = SupervisedDataSet(12288,1)
and add data ds.appendLinked(im3.flatten(),10) where im3 is an openCV picture.
and this is my trainer -> trainer = BackpropTrainer(red, ds)
When the running process reaches BackpropTrainer, I get an AssertionError on backprop.py line 35, self.setData(dataset).
It's a pybrain error on Windows; I developed the code on Linux and it ran without problems. I don't know what else to do. I tried reinstalling everything but I still get the same error. Can anyone help me? | 0 | 1 | 424
0 | 29,903,784 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-06-02T18:55:00.000 | 1 | 1 | 0 | mmh3 not installed on Elastic MapReduce in AWS | 24,001,364 | 0.197375 | python,amazon-web-services,elastic-map-reduce | You can use pip install mmh3 to install it. | I need to use mmh3 for hashing. However, when I run
"python MultiwayJoin.py R.csv S.csv T.csv -r emr > output.txt" in terminal, it returned an error said that:
File "MultiwayJoin.py", line 5, in
import mmh3
ImportError: No module named mmh3 | 0 | 1 | 1,466 |
0 | 64,190,596 | 0 | 0 | 0 | 0 | 1 | false | 13 | 2014-06-02T20:13:00.000 | 0 | 4 | 0 | python: How to use POS (part of speech) features in scikit learn classfiers (SVM) etc | 24,002,485 | 0 | python,machine-learning,scikit-learn,nltk | I think a better method would be to :
Step-1: Create word/sentence embeddings for each text/sentence.
Step-2: Calculate the POS-tags. Feed the POS-tags to an embedder, as in Step-1.
Step-3: Elementwise multiply the two vectors. (This is to ensure that the word-embeddings in each sentence are weighted by the POS-tags associated with it.)
Thanks | I want to use the part-of-speech (POS) tags returned from nltk.pos_tag in a sklearn classifier. How can I convert them to a vector and use them?
e.g.
sent = "This is POS example"
tok=nltk.tokenize.word_tokenize(sent)
pos=nltk.pos_tag(tok)
print (pos)
This returns following
[('This', 'DT'), ('is', 'VBZ'), ('POS', 'NNP'), ('example', 'NN')]
Now I am unable to apply any of the vectorizers (DictVectorizer, FeatureHasher, or CountVectorizer) from scikit-learn to use in a classifier.
Please suggest something. | 0 | 1 | 11,721
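One possible way to turn POS tags into a feature matrix a scikit-learn classifier accepts — a sketch using DictVectorizer on per-sentence tag counts (the bag-of-tags representation is an illustrative choice, not the only option):
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
tagged_sents = [
    [('This', 'DT'), ('is', 'VBZ'), ('POS', 'NNP'), ('example', 'NN')],
    [('Dogs', 'NNS'), ('bark', 'VBP')],
]
tag_counts = [Counter(tag for _, tag in sent) for sent in tagged_sents]  # one dict of tag counts per sentence
vec = DictVectorizer()
X = vec.fit_transform(tag_counts)    # sparse numeric matrix usable by any sklearn classifier
print(vec.feature_names_)            # e.g. ['DT', 'NN', 'NNP', 'NNS', 'VBP', 'VBZ']
print(X.toarray())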
0 | 24,048,037 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-06-03T07:42:00.000 | 0 | 1 | 0 | Matplotlib animation + IPython: temporary disabling interactive mode? | 24,009,656 | 0 | python,animation,matplotlib,ipython | As @tcaswell pointed out, the problem was caused by the callback that was indirectly calling plt.show(). | I have a python a script that generates an animation using matplotlib's animation.FuncAnimation and animation.FFMpegWriter. It works well, but there's an issue when running the code in IPython: each frame of the animation is displayed on screen while being generated, which slows down the movie generation process.
I've tried issuing plt.ioff() before running the animation code, but the figure is still displayed on screen. Is there a way to disable this behavior in IPython?
On a related note, if I run the script from a shell (i.e. python myMovieGenScript.py), only one frame is shown, blocking execution. I can close it and the rest of the frames are rendered off screen (which is what I want). Is there a way to prevent that single frame from being displayed, so no user interaction is required? | 0 | 1 | 285
0 | 24,024,883 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-06-03T20:54:00.000 | 0 | 2 | 0 | How to generate random int around specific mean? | 24,024,736 | 0 | python,random | Yes, there is. It is random.triangular(min, max, av).
Your mean value will be close, but not equal to av.
Edit: see comments below, this has drawbacks. | I need to generate 100 age values between 23 and 72 and the mean value must be 42 years.
Do you think such a function already exists in standard python? If not, I think I know python just enough and should be able to code the algorithm but I am hoping something is already there for use. Any hints? | 0 | 1 | 110 |
0 | 24,030,090 | 0 | 0 | 0 | 0 | 1 | false | 29 | 2014-06-03T23:25:00.000 | 0 | 3 | 0 | Exporting figures from Bokeh as svg or pdf? | 24,026,618 | 0 | python,bokeh | It seems that since bokeh uses html5 canvas as a backend, it will be writing things to static html pages. You could always export the html to pdf later. | Is it possible to output individual figures from Bokeh as pdf or svg images? I feel like I'm missing something obvious, but I've checked the online help pages and gone through the bokeh.objects api and haven't found anything... | 1 | 1 | 17,265 |
0 | 24,771,340 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-06-06T22:15:00.000 | 1 | 1 | 0 | SciPy Quad Integration: Accuracy Warning | 24,091,411 | 1.2 | python-2.7,scipy,integrate | in scipy.integrate.quad there's a lower-level call to a difference function that is iterated over, and it's iterated over divmax times. In your case, the default for divmax=20. There are some functions that you can override this default -- for example scipy.integrate.quadrature.romberg allows you to set divmax (default here is 10) as a keyword. The warning is thrown when the tolerances aren't met for the difference function. Setting divmax to a higher value will run longer, and hopefully meet the tolerance requirements. | I am currently trying to compute out an integral with scipy.integrate.quad, and for certain values I get the following error:
/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/site-packages/scipy/integrate/quadrature.py:616: AccuracyWarning: divmax (20) exceeded. Latest difference = 2.005732e-02
AccuracyWarning)
Looking at the documentation, it isn't entirely clear to me why this warning is raised, what it means, and why it only occurs for certain values. | 0 | 1 | 2,346 |
0 | 24,121,929 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-06-09T13:59:00.000 | -3 | 2 | 0 | How to set bin content in a hist2d (matplotlib) | 24,121,883 | -0.291313 | python,matplotlib,histogram | Do you want to set the number of elements in each bin? I think that's a bar plot rather than a histogram. So simply use ax.bar instead.
EDIT Good point by tcaswell, for multidimensional images the equivalent is imshow rather than bar. | I am trying to plot values of z binned in (x,y). This would look like a hist2d in matplotlib but with the bin content being defined by another array instead of representing the number of counts. Is there any way to set the bin content in hist2d? | 0 | 1 | 1,859 |
0 | 27,997,033 | 0 | 0 | 0 | 0 | 1 | false | 16 | 2014-06-09T14:50:00.000 | 1 | 6 | 0 | pandas ValueError: numpy.dtype has the wrong size, try recompiling | 24,122,850 | 0.033321 | python,numpy,pandas | pip uninstall numpy uninstalls the old version of numpy
pip install numpy finds and installs the latest version of numpy | I took a new clean install of OSX 10.9.3 and installed pip, and then did
pip install pandas
pip install numpy
Both installs seemed to be perfectly happy, and ran without any errors (though there were a zillion warnings). When I tried to run a python script with import pandas, I got the following error:
numpy.dtype has the wrong size, try recompiling Traceback (most recent call last):
File "./moen.py", line 7, in import pandas File "/Library/Python/2.7/site-packages/pandas/__init__.py", line 6, in from . import hashtable, tslib, lib
File "numpy.pxd", line 157, in init pandas.hashtable (pandas/hashtable.c:22331)
ValueError: numpy.dtype has the wrong size, try recompiling
How do I fix this error and get pandas to load properly? | 0 | 1 | 27,508 |
0 | 35,639,956 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-06-12T13:11:00.000 | 0 | 1 | 0 | timeout pandas read_csv stringio timeout | 24,185,302 | 0 | django,python-2.7,pandas | This is over a year old, but this is the only SO thread I found on this issue so thought I'd comment on what we did to fix it. It turns out there are issues with pd.read_csv(FileObject, engine="C") on an embedded wsgi process. We ended up solving this issue by upgrading to pandas 0.17.0. Another working solution was to run mod_wsgi in daemon mode as this issue seems to relate to some conflict in how MPM is running read_csv with the C-engine while in embedded mode. We still aren't quite sure what the exact issue is however... | Pandas read_csv causes a timeout on my production server with python 2.7, django 1.6.5, apache and nginx. This happens only when using a string buffer like StringIO.StringIO or io.BytesIO. When supplying a filename as argument to read_csv everything works fine.
Debugging does not help because on my development server this problem does not occur.
Any ideas? | 0 | 1 | 1,792 |
0 | 24,209,906 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-06-13T16:20:00.000 | 0 | 1 | 0 | Finding the yearly mean temperatures from scrambled months python | 24,209,812 | 0 | python,mean | Use a dictionary with the year as the key and the temp and a counter as the value (as a list). If the year isn't found add an entry with the mean temp and the counter at 1. If the year is already there add the temperature to the existing temp and increment the counter.
The rest should be easy. Note this gives you the average mean temperature for the years. If you want the true mean it will be a bit more involved. | Year Month MeanTemp Max Temp Min Temp Total Rain(mm) Total Snow(cm)
2003 12 -0.1 9 -10.8 45 19.2
1974 1 -5.9 8.9 -20 34.3 35.6
2007 8 22.4 34.8 9.7 20.8 0
1993 7 21.7 32.5 11 87.7 0
1982 6 15.2 25.4 4 112.5 0
1940 10 7.4 22.8 -6.1 45.5 0
My data list is a tab-separated file, resembles this and goes on from 1938-2012. My issue is finding the yearly mean temperatures when all the months and years are out of order.
Any sort of beginning steps would be helpful, thanks | 0 | 1 | 554 |
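A sketch of the dictionary-accumulator idea from the answer above, applied to a tab-separated file shaped like the sample shown (the filename and the column positions are assumptions):
totals = {}  # year -> [sum of monthly mean temps, number of months seen]
with open('weather.tsv') as f:
    next(f)                                   # skip the header row
    for line in f:
        fields = line.split('\t')
        year, mean_temp = fields[0], float(fields[2])
        entry = totals.setdefault(year, [0.0, 0])
        entry[0] += mean_temp
        entry[1] += 1
for year in sorted(totals):
    temp_sum, months = totals[year]
    print(year, temp_sum / months)            # average of that year's monthly mean temperatures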
0 | 25,560,508 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2014-06-13T21:04:00.000 | 2 | 1 | 0 | Anaconda Spyder integrated IPython display for dataframes | 24,213,788 | 0.379949 | ipython,spyder | The development version of Spyder has a Pandas Dataframe editor (and an numpy array editor, up to 3d). You can run this from source or wait for the next release, 2.3.1.
This is probably more adequate to edit or visualize dataframe than using the embedded qtconsole. | The new anaconda spyder has an integrated iPython console. I really dislike the way dataframes are displayed in this console. There are some border graphics around the dataframe and resizing that occurs with window resizing that make it difficult to examine the contents. In addition, often for large dataframes if one just types the variable name, we can only see a XX rows x YY columns display, not a display of the top corner of the dataframe.
How can I reset the dataframe display so that it displays the way a standard iPython (or iPython QT) console would display a dataframe, i.e. with no graphics?
Thanks. | 0 | 1 | 1,871 |
0 | 24,214,946 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-06-13T22:00:00.000 | 0 | 1 | 0 | Mac OS Mavericks numpy Version Issue | 24,214,364 | 1.2 | python,macos,numpy | From our conversation in chat and adding path to .bashrc not working:
Putting:
/usr/local/bin first in /etc/paths will resolve the issue | When I run 'pip freeze' it shows that numpy==1.8.1; however, when I start Python and import numpy and then check the version number via numpy.version.version I get 1.6.
This is confusing to me and it's also creating a problem for scipy and matplotlib. For example, when I attempt to do the following import 'from matplotlib import pyplot' I get an error saying 'RuntimeError: module compiled against API version 9 but this version of numpy is 6'. This I'm guessing has something to do with the numpy versions being wrong.
Any suggestions? | 0 | 1 | 469 |
0 | 24,232,729 | 0 | 0 | 0 | 0 | 1 | true | 5 | 2014-06-15T18:23:00.000 | 6 | 1 | 0 | Pandas diff() functionality on two columns in a dataframe | 24,232,701 | 1.2 | python,python-2.7,pandas,offset | You can shift A column first:
df['A'].shift(-1) - df['B'] | I have a data frame in which column A is the start time of an activity and column B is the finish time of that activity, and each row represents an activity (rows are arranged chronologically). I want to compute the difference in time between the end of one activity and the start of the next activity, i.e. df[i+1][A] - df[i][B].
Is there a Pandas function to do this (the only thing I can find is diff(), but that only appears to work on a single column). | 0 | 1 | 5,331 |
0 | 24,289,392 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-06-18T11:30:00.000 | 3 | 2 | 0 | How can i implement spherical hankel function of the first kind by scipy/numpy or sympy? | 24,284,390 | 0.291313 | python,numpy,scipy,sympy | Although it would be nice if there were an existing routine for calculating the spherical Hankel functions (like there is for the ordinary Hankel functions), they are just a (complex) linear combination of the spherical Bessel functions of the first and second kind so can be easily calculated from existing routines. Since the Hankel functions are complex and depending on your application of them, it can be advantageous to rewrite your expression in terms of the Bessel functions of the first and second kind, ie entirely real quantities, particularly if your final result is real. | I knew that there is no builtin sph_hankel1 in scipy then i want to know that how to implement it in the right way?
Additional: just show me one correct implementation of sph_hankel1 using either SciPy or SymPy. | 0 | 1 | 997
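A minimal sketch of the linear combination the answer describes, h1_n(x) = j_n(x) + i*y_n(x); it assumes a SciPy recent enough to provide scipy.special.spherical_jn and spherical_yn (older releases expose sph_jn / sph_yn instead):
from scipy.special import spherical_jn, spherical_yn
def sph_hankel1(n, x):
    # Spherical Hankel function of the first kind, h1_n(x).
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)
print(sph_hankel1(0, 1.0))   # h1_0(1) = sin(1)/1 - 1j*cos(1)/1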
0 | 24,317,131 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-06-19T12:10:00.000 | 7 | 2 | 0 | How to speed up Python code for running on a powerful machine? | 24,306,285 | 1 | python,performance,numpy,cuda,gpu | The comments and Moj's answer give a lot of good advice. I have some experience on signal/image processing with python, and have banged my head against the performance wall repeatedly, and I just want to share a few thoughts about making things faster in general. Maybe these help figuring out possible solutions with slow algorithms.
Where is the time spent?
Let us assume that you have a great algorithm which is just too slow. The first step is to profile it to see where the time is spent. Sometimes the time is spent doing trivial things in a stupid way. It may be in your own code, or it may even be in the library code. For example, if you want to run a 2D Gaussian filter with a largish kernel, direct convolution is very slow, and even FFT may be slow. Approximating the filter with computationally cheap successive sliding averages may speed things up by a factor of 10 or 100 in some cases and give results which are close enough.
If a lot of time is spent in some module/library code, you should check if the algorithm is just a slow algorithm, or if there is something slow with the library. Python is a great programming language, but for pure number crunching operations it is not good, which means most great libraries have some binary libraries doing the heavy lifting. On the other hand, if you can find suitable libraries, the penalty for using python in signal/image processing is often negligible. Thus, rewriting the whole program in C does not usually help much.
Writing a good algorithm even in C is not always trivial, and sometimes the performance may vary a lot depending on things like CPU cache. If the data is in the CPU cache, it can be fetched very fast, if it is not, then the algorithm is much slower. This may introduce non-linear steps into the processing time depending on the data size. (Most people know this from the virtual memory swapping, where it is more visible.) Due to this it may be faster to solve 100 problems with 100 000 points than 1 problem with 10 000 000 points.
One thing to check is the precision used in the calculation. In some cases float32 is as good as float64 but much faster. In many cases there is no difference.
Multi-threading
Python - did I mention? - is a great programming language, but one of its shortcomings is that in its basic form it runs a single thread. So, no matter how many cores you have in your system, the wall clock time is always the same. The result is that one of the cores is at 100 %, and the others spend their time idling. Making things parallel and having multiple threads may improve your performance by a factor of, e.g., 3 in a 4-core machine.
It is usually a very good idea if you can split your problem into small independent parts. It helps with many performance bottlenecks.
And do not expect technology to come to the rescue. If the code is not written to be parallel, it is very difficult for a machine to make it parallel.
GPUs
Your machine may have a great GPU with maybe 1536 number-hungry cores ready to crunch everything you toss at them. The bad news is that making GPU code is a bit different from writing CPU code. There are some slightly generic APIs around (CUDA, OpenCL), but if you are not accustomed to writing parallel code for GPUs, prepare for a steepish learning curve. On the other hand, it is likely someone has already written the library you need, and then you only need to hook to that.
With GPUs the sheer number-crunching power is impressive, almost frightening. We may talk about 3 TFLOPS (3 x 10^12 single-precision floating-point ops per second). The problem there is how to get the data to the GPU cores, because the memory bandwidth will become the limiting factor. This means that even though using GPUs is a good idea in many cases, there are a lot of cases where there is no gain.
Typically, if you are performing a lot of local operations on the image, the operations are easy to make parallel, and they fit well a GPU. If you are doing global operations, the situation is a bit more complicated. A FFT requires information from all over the image, and thus the standard algorithm does not work well with GPUs. (There are GPU-based algorithms for FFTs, and they sometimes make things much faster.)
Also, beware that making your algorithms run on a GPU binds you to that GPU. The portability of your code across OSes or machines suffers.
Buy some performance
Also, one important thing to consider is if you need to run your algorithm once, once in a while, or in real time. Sometimes the solution is as easy as buying time from a larger computer. For a dollar or two an hour you can buy time from quite fast machines with a lot of resources. It is simpler and often cheaper than you would think. Also GPU capacity can be bought easily for a similar price.
One possibly slightly under-advertised property of some cloud services is that in some cases the IO speed of the virtual machines is extremely good compared to physical machines. The difference comes from the fact that there are no spinning platters with the average penalty of half-revolution per data seek. This may be important with data-intensive applications, especially if you work with a large number of files and access them in a non-linear way. | I've completed writing a multiclass classification algorithm that uses boosted classifiers. One of the main calculations consists of weighted least squares regression.
The main libraries I've used include:
statsmodels (for regression)
numpy (pretty much everywhere)
scikit-image (for extracting HoG features of images)
I've developed the algorithm in Python, using Anaconda's Spyder.
I now need to use the algorithm to start training classification models. So I'll be passing approximately 7000-10000 images to this algorithm, each about 50x100, all in gray scale.
Now I've been told that a powerful machine is available in order to speed up the training process. And they asked me "am I using GPU?" And a few other questions.
To be honest I have no experience in CUDA/GPU, etc. I've only ever heard of them. I didn't develop my code with any such thing in mind. In fact I had the (ignorant) impression that a good machine will automatically run my code faster than a mediocre one, without my having to do anything about it. (Apart from obviously writing regular code efficiently in terms of loops, O(n), etc).
Is it still possible for my code to get sped up simply by virtue of being on a high-performance computer? Or do I need to modify it to make use of a parallel-processing machine? | 0 | 1 | 6,194
0 | 24,306,811 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-06-19T12:10:00.000 | 4 | 2 | 0 | How to speed up Python code for running on a powerful machine? | 24,306,285 | 0.379949 | python,performance,numpy,cuda,gpu | I am afraid you cannot speed up your program by just running it on a powerful computer. I had this issue a while back. I first used Python (very slow), then moved to C (slow), and then had to use other tricks and techniques. For example, it is sometimes possible to apply some dimensionality reduction to speed things up while still getting a reasonably accurate result, or, as you mentioned, to use multiprocessing techniques.
Since you are dealing with an image processing problem, you do a lot of matrix operations, and a GPU would for sure be a great help. There are some nice and active CUDA wrappers in Python that you can easily use without knowing much CUDA. I tried Theano, PyCUDA and scikit-cuda (there are probably more by now). | I've completed writing a multiclass classification algorithm that uses boosted classifiers. One of the main calculations consists of weighted least squares regression.
The main libraries I've used include:
statsmodels (for regression)
numpy (pretty much everywhere)
scikit-image (for extracting HoG features of images)
I've developed the algorithm in Python, using Anaconda's Spyder.
I now need to use the algorithm to start training classification models. So I'll be passing approximately 7000-10000 images to this algorithm, each about 50x100, all in gray scale.
Now I've been told that a powerful machine is available in order to speed up the training process. And they asked me "am I using GPU?" And a few other questions.
To be honest I have no experience in CUDA/GPU, etc. I've only ever heard of them. I didn't develop my code with any such thing in mind. In fact I had the (ignorant) impression that a good machine will automatically run my code faster than a mediocre one, without my having to do anything about it. (Apart from obviously writing regular code efficiently in terms of loops, O(n), etc).
Is it still possible for my code to get sped up simply by virtue of being on a high-performance computer? Or do I need to modify it to make use of a parallel-processing machine? | 0 | 1 | 6,194
0 | 24,347,728 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-06-19T16:47:00.000 | 1 | 1 | 0 | Is there a way generate a chart on google spreadsheet automatically using Python? | 24,312,068 | 1.2 | python-2.7,charts,google-sheets,google-spreadsheet-api | AFAIK, no. There is no way to do this with python.
Google-apps-script can do this, but the spreadsheet-api (Gdata) can't.
You can make a call from Python to Google-apps-script and pass parameters. | Is there a way to generate a chart on a Google spreadsheet automatically using Python? I checked gspread. There seems to be no API for making charts.
Thanks~ | 0 | 1 | 1,378 |
0 | 24,524,206 | 0 | 0 | 0 | 0 | 2 | false | 15 | 2014-06-23T13:25:00.000 | 2 | 3 | 0 | Naive Bayes: Imbalanced Test Dataset | 24,367,141 | 0.132549 | python,machine-learning,classification,scikit-learn,text-classification | I think gustavodidomenico makes a good point. You can think of Naive Bayes as learning a probability distribution, in this case of words belonging to topics. So the balance of the training data matters. If you use decision trees, say a random forest model, you learn rules for making the assignment (yes there are probability distributions involved and I apologise for the hand waving explanation but sometimes intuition helps). In many cases trees are more robust than Naive Bayes, arguably for this reason. | I am using scikit-learn Multinomial Naive Bayes classifier for binary text classification (classifier tells me whether the document belongs to the category X or not). I use a balanced dataset to train my model and a balanced test set to test it and the results are very promising.
This classifier needs to run in real time and constantly analyze documents thrown at it randomly.
However, when I run my classifier in production, the number of false positives is very high and therefore I end up with a very low precision. The reason is simple: there are many more negative samples that the classifier encounters in the real-time scenario (around 90% of the time) and this does not correspond to the ideal balanced dataset I used for testing and training.
Is there a way I can simulate this real-time case during training or are there any tricks that I can use (including pre-processing on the documents to see if they are suitable for the classifier)?
I was planning to train my classifier using an imbalanced dataset with the same proportions as I have in the real-time case, but I am afraid that might bias Naive Bayes towards the negative class and lose the recall I have on the positive class.
Any advice is appreciated. | 0 | 1 | 9,405 |
0 | 24,528,969 | 0 | 0 | 0 | 0 | 2 | true | 15 | 2014-06-23T13:25:00.000 | 11 | 3 | 0 | Naive Bayes: Imbalanced Test Dataset | 24,367,141 | 1.2 | python,machine-learning,classification,scikit-learn,text-classification | You have encountered one of the problems with classification with a highly imbalanced class distribution. I have to disagree with those that state the problem is with the Naive Bayes method, and I'll provide an explanation which should hopefully illustrate what the problem is.
Imagine your false positive rate is 0.01, and your true positive rate is 0.9. This means your false negative rate is 0.1 and your true negative rate is 0.99.
Imagine an idealised test scenario where you have 100 test cases from each class. You'll get (in expectation) 1 false positive and 90 true positives. Great! Precision is 90 / (90+1) on your positive class!
Now imagine there are 10,000 times more negative examples than positive. Same 100 positive examples at test, but now there are 1,000,000 negative examples. You now get the same 90 true positives, but (0.01 * 1,000,000) = 10,000 false positives. Disaster! Your precision is now almost zero (90 / (90 + 10,000)).
The point here is that the performance of the classifier hasn't changed; false positive and true positive rates remained constant, but the balance changed and your precision figures dived as a result.
What to do about it is harder. If your scores are separable but the threshold is wrong, you should look at the ROC curve for thresholds based on the posterior probability and look to see if there's somewhere where you get the kind of performance you want. If your scores are not separable, try a bunch of different classifiers and see if you can get one where they are (logistic regression is pretty much a drop-in replacement for Naive Bayes; you might want to experiment with some non-linear classifiers, however, like a neural net or non-linear SVM, as you can often end up with non-linear boundaries delineating the space of a very small class).
To simulate this effect from a balanced test set, you can simply multiply instance counts by an appropriate multiplier in the contingency table (for instance, if your negative class is 10x the size of the positive, make every negative instance in testing add 10 counts to the contingency table instead of 1).
I hope that's of some help at least understanding the problem you're facing. | I am using scikit-learn Multinomial Naive Bayes classifier for binary text classification (classifier tells me whether the document belongs to the category X or not). I use a balanced dataset to train my model and a balanced test set to test it and the results are very promising.
This classifier needs to run in real time and constantly analyze documents thrown at it randomly.
However, when I run my classifier in production, the number of false positives is very high and therefore I end up with a very low precision. The reason is simple: there are many more negative samples that the classifier encounters in the real-time scenario (around 90% of the time) and this does not correspond to the ideal balanced dataset I used for testing and training.
Is there a way I can simulate this real-time case during training or are there any tricks that I can use (including pre-processing on the documents to see if they are suitable for the classifier)?
I was planning to train my classifier using an imbalanced dataset with the same proportions as I have in the real-time case, but I am afraid that might bias Naive Bayes towards the negative class and lose the recall I have on the positive class.
Any advice is appreciated. | 0 | 1 | 9,405 |
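The arithmetic in the accepted answer above can be reproduced in a few lines; the rates are the ones assumed there, and only the class balance changes.

```python
# Fixed classifier quality (TPR/FPR), varying class balance: precision collapses.
tpr, fpr = 0.9, 0.01   # rates assumed in the answer above
n_pos = 100

for n_neg in (100, 1_000_000):
    tp = tpr * n_pos
    fp = fpr * n_neg
    print("negatives = %7d  ->  precision = %.3f" % (n_neg, tp / (tp + fp)))
# negatives =     100  ->  precision = 0.989
# negatives = 1000000  ->  precision = 0.009
```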
0 | 24,424,870 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-06-25T04:05:00.000 | 0 | 1 | 0 | How to plot text documents in a scatter map? | 24,400,012 | 0 | python,numpy,matplotlib,scikit-learn | If X is a sparse matrix, you probably need X = X.todense() in order to get access to the data in the correct format. You probably want to check X.shape before doing this though, as if X is very large (but very sparse) it may consume a lot of memory when "densified". | I'm using scikit to perform text classification and I'm trying to understand where the points lie with respect to my hyperplane to decide how to proceed. But I can't seem to plot the data that comes from the CountVectorizer() function. I used the following function: pl.scatter(X[:, 0], X[:, 1]) and it gives me the error: ValueError: setting an array element with a sequence.
Any idea how to fix this? | 0 | 1 | 99
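A minimal sketch of the densify-and-check suggestion from the answer; the documents are placeholders, and scattering the first two raw count columns is only useful as a quick sanity check.

```python
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer

docs = ["spam spam ham", "ham ham eggs", "spam eggs"]  # placeholder corpus
X = CountVectorizer().fit_transform(docs)              # scipy sparse matrix

print(X.shape)             # check the size before densifying a very sparse matrix
X_dense = X.toarray()      # like todense(), but returns a plain ndarray
plt.scatter(X_dense[:, 0], X_dense[:, 1])
plt.show()
```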
0 | 24,466,862 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2014-06-27T11:01:00.000 | 1 | 1 | 0 | SPSS equivalent of Python Dictionary | 24,450,211 | 1.2 | python,dictionary,spss | A Python dictionary is an in-memory hash table where lookup of individual elements requires fixed time, and there is no deterministic order. SPSS data files are disk-based and sequential and are designed for fast, in-order access for arbitrarily large amounts of data.
So these are intended for quite different purposes, but there is nothing stopping you from using a Python dictionary within Statistics using the APIs in the Python Essentials to complement what Statistics does with the casewise data. | I was trying to Google the above, but knowing absolutely nothing about SPSS I wasn't sure what search phrase I should be using.
From my initial search (I tried the terms "Dictionary" and "Scripting Dictionary") it seems there is something called a Data Dictionary in SPSS, but the description suggests it is not the same as a Python dictionary.
Would someone be kind enough just to confirm that SPSS has similar functionality and if yes, can you please suggest key words to be used in Google?
Many thanks
dce | 0 | 1 | 176 |
0 | 24,494,507 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-06-30T15:36:00.000 | -1 | 1 | 0 | Solving system of linear inequalities in 3 or more variables - Python | 24,493,849 | -0.197375 | python,python-2.7 | Make a matrix object and use Cramer's rule. | I want to solve systems of linear inequalities in 3 or more variables. That is, to find all possible solutions.
I originally found GLPK and tried the Python binding, but the last few updates to GLPK changed the APIs and broke the bindings. I haven't been able to find a way to make it work.
I would like to have the symbolic answer, but numeric approximations will be fine too.
I would also be happy to use a library that solves maximization problems. I can always re-write the problems to be solved that way. | 0 | 1 | 1,233
0 | 24,500,552 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-06-30T23:43:00.000 | 0 | 1 | 0 | Histogram bin size(matplotlib) | 24,500,522 | 0 | python,matplotlib | Use the numpy function histogram, which returns both the count in each bin and the bin edges. | I'm creating a histogram (which is NOT normalized) using matplotlib.
I want to get the exact size of each bin. That is, not the width but the length.
In other words, the number of data points contained in each bin.
Any tips??? | 0 | 1 | 148 |
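A short illustration of the suggestion above; the data is made up. plt.hist also returns the counts as its first return value if the plot is needed anyway.

```python
import numpy as np

data = np.random.randn(1000)                  # placeholder data
counts, edges = np.histogram(data, bins=20)   # counts per bin and the bin edges
print(counts)                                 # number of data points in each bin
print(len(edges) == len(counts) + 1)          # edges bracket the bins
```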
0 | 24,505,486 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2014-07-01T05:58:00.000 | 3 | 2 | 0 | Quickest linear regression implementation in python | 24,503,344 | 1.2 | python,scipy,scikit-learn,statsmodels,pymc | The scikit-learn SGDRegressor class is (iirc) the fastest, but would probably be more difficult to tune than a simple LinearRegression.
I would give each of those a try, and see if they meet your needs. I also recommend subsampling your data - if you have many gigs but they are all samples from the same distribution, you can train/tune your model on a few thousand samples (depending on the number of features). This should lead to faster exploration of your model space, without wasting a bunch of time on "repeat/uninteresting" data.
Once you find a few candidate models, then you can try those on the whole dataset. | I'm performing a stepwise model selection, progressively dropping variables with a variance inflation factor over a certain threshold.
In order to do this, I'm running OLS many, many times on datasets ranging from a few hundred MB to 10 gigs.
What would be the quickest implementation of OLS for larger datasets? The statsmodels OLS implementation seems to use numpy to invert matrices. Would a gradient-descent-based method be quicker? Does scikit-learn have an especially quick implementation?
Or maybe an mcmc based approach using pymc is quickest...
Update 1: Seems that the scikit learn implementation of LinearRegression is a wrapper for the scipy implementation.
Update 2: Scipy OLS via scikit learn LinearRegression is twice as fast as statsmodels OLS in my very limited tests... | 0 | 1 | 3,098 |
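A hedged sketch of the two suggestions in the accepted answer — compare SGDRegressor against LinearRegression on a subsample before committing to the full data. The shapes and hyperparameters are placeholders, and the SGDRegressor arguments shown are those of recent scikit-learn versions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

rng = np.random.RandomState(0)
X = rng.randn(100000, 50)                            # placeholder design matrix
y = X.dot(rng.randn(50)) + 0.1 * rng.randn(100000)

idx = rng.choice(len(X), size=5000, replace=False)   # subsample for quick exploration
ols = LinearRegression().fit(X[idx], y[idx])
sgd = SGDRegressor(max_iter=1000, tol=1e-3).fit(X[idx], y[idx])
print(ols.score(X, y), sgd.score(X, y))
```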
0 | 24,524,359 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-07-01T17:06:00.000 | 2 | 1 | 0 | How do you plot the hyperplane of an sklearn svm with more than 2 features in matplotlib? | 24,515,783 | 1.2 | python,matplotlib,plot | If your linear SVM classifier works quite well, then that suggests there is a hyperplane which separates your data. So there will be a nice 2D geometric representation of the decision boundary.
To understand the "how" you need to look at the support vectors themselves, see which ones contribute to which side of the hyperplane, e.g., by feeding individual support vectors into the trained classifier. In general, visualising text algos is not straightforward. | I have a scikits-learn linear svm.SVC classifier designed to classify text into 2 classes (-1,1). The classifier uses 250 features from the training set to make its predictions, and it works fairly well.
However, I can't figure out how to plot the hyperplane or the support vectors in matplotlib. All the examples online use only 2 features to derive the decision boundary and the support vector points. I can't seem to find any that plot hyperplanes or support vectors that have more than 2 features or lack fixed features. I know that there is a fundamental mathematical step that I am missing here, and any help would be appreciated. | 0 | 1 | 1,876
0 | 24,516,539 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-07-01T17:47:00.000 | 2 | 3 | 0 | efficient, fast numpy histograms | 24,516,396 | 0.132549 | python,arrays,performance,numpy,histogram | First, fill in your 16 bins without considering date at all.
Then, sort the elements within each bin by date.
Now, you can use binary search to efficiently locate a given year/month/week within each bin. | I have a 2D numpy array consisting of ca. 15'000'000 datapoints. Each datapoint has a timestamp and an integer value (between 40 and 200). I must create histograms of the datapoint distribution (16 bins: 40-49, 50-59, etc.), sorted by year, by month within the current year, by week within the current year, and by day within the current month.
Now, I wonder what might be the most efficient way to accomplish this. Given the size of the array, performance is a conspicuous consideration. I am considering nested "for" loops, breaking down the arrays by year, by month, etc. But I was reading that numpy arrays are highly memory-efficient and have all kinds of tricks up their sleeve for fast processing. So I was wondering if there is a faster way to do that. As you may have realized, I am an amateur programmer (a molecular biologist in "real life") and my questions are probably rather naïve. | 0 | 1 | 4,163 |
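A hedged sketch of the accepted answer's recipe (bin by value, sort each bin by date, then binary-search the period boundaries), using synthetic data in place of the 15 million real points; all names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                                    # stand-in for the 15M real points
values = rng.integers(40, 200, size=n)           # integer values in [40, 200)
timestamps = (np.datetime64("2014-01-01")
              + rng.integers(0, 365, size=n).astype("timedelta64[D]"))

bin_idx = (values - 40) // 10                    # 16 bins: 40-49 -> 0, 50-59 -> 1, ...
order = np.lexsort((timestamps, bin_idx))        # primary key: bin, secondary: time
ts_sorted = timestamps[order]
bins_sorted = bin_idx[order]

starts = np.searchsorted(bins_sorted, np.arange(17))   # start of each bin's block
for b in range(16):
    block = ts_sorted[starts[b]:starts[b + 1]]
    lo = np.searchsorted(block, np.datetime64("2014-06-01"))   # binary search by date
    hi = np.searchsorted(block, np.datetime64("2014-07-01"))
    print("bin", b, "count in June 2014:", hi - lo)
```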
0 | 24,518,289 | 0 | 1 | 0 | 0 | 2 | false | 7 | 2014-07-01T19:18:00.000 | 4 | 3 | 0 | How to stop NLTK stemmer from removing the trailing "e"? | 24,517,722 | 0.26052 | python,nlp,nltk | The goal of a stemmer is to remove as much of the word as possible to allow it to cover as many cases as possible, yet retain the core of the word. One reason profile might go to profil is to cover the case of profiling. You would need a conditional or another stemmer in order to guard against this, although I would imagine the majority of them will remove the trailing 'e' (especially given the number of 'e' -> 'ing' cases). | I'm using NLTK stemmer to remove grammatical variations of a stem word.
However, the Porter or Snowball stemmers remove the trailing "e" of the original form of a noun or verb, e.g., Profile becomes Profil.
How can I prevent this from happening? I know I can use a conditional to guard against this. But obviously it will fail on different cases.
Is there an option or another API for what I want? | 0 | 1 | 3,819 |
0 | 24,521,458 | 0 | 1 | 0 | 0 | 2 | true | 7 | 2014-07-01T19:18:00.000 | 8 | 3 | 0 | How to stop NLTK stemmer from removing the trailing "e"? | 24,517,722 | 1.2 | python,nlp,nltk | I agree with Philip that the goal of a stemmer is to retain only the stem. For this particular case you can try a lemmatizer instead of a stemmer, which will supposedly retain more of the word and is specifically meant to collapse different forms of a word, like 'profiles' --> 'profile'. There is a class in NLTK for this - try WordNetLemmatizer() from nltk.stem.
Beware that it's still not perfect (as nothing is when working with text); for example, I used to get 'physic' from 'physics'. | I'm using NLTK stemmer to remove grammatical variations of a stem word.
However, the Porter or Snowball stemmers remove the trailing "e" of the original form of a noun or verb, e.g., Profile becomes Profil.
How can I prevent this from happening? I know I can use a conditional to guard against this. But obviously it will fail on different cases.
Is there an option or another API for what I want? | 0 | 1 | 3,819 |
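A small comparison of what the two answers suggest; it assumes the WordNet corpus has been fetched with nltk.download('wordnet'), and the expected output shown in the comments is approximate.

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer

stem = PorterStemmer().stem
lemmatize = WordNetLemmatizer().lemmatize

for word in ["profile", "profiles", "profiling", "physics"]:
    print(word, "->", stem(word), "/", lemmatize(word))
# Roughly: profile -> profil / profile, profiles -> profil / profile,
# profiling -> profil / profiling (noun POS by default),
# physics -> physic / physic (the imperfection mentioned above)
```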
0 | 24,519,951 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2014-07-01T19:22:00.000 | 3 | 2 | 0 | Online version of scikit-learn's TfidfVectorizer | 24,517,793 | 1.2 | python,machine-learning,nlp,scikit-learn,vectorization | Intrinsically you cannot use TF-IDF in an online fashion, as the IDF of all past features will change with every new document - which would mean re-visiting and re-training on all the previous documents, which would no longer be online.
There may be some approximations, but you would have to implement them yourself. | I'm looking to use scikit-learn's HashingVectorizer because it's a great fit for online learning problems (new tokens in text are guaranteed to map to a "bucket"). Unfortunately the implementation included in scikit-learn doesn't seem to include support for tf-idf features. Is passing the vectorizer output through a TfidfTransformer the only way to make online updates work with tf-idf features, or is there a more elegant solution out there? | 0 | 1 | 2,712 |
0 | 24,841,469 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-07-01T19:22:00.000 | 4 | 2 | 0 | Online version of scikit-learn's TfidfVectorizer | 24,517,793 | 0.379949 | python,machine-learning,nlp,scikit-learn,vectorization | You can do "online" TF-IDF, contrary to what was said in the accepted answer.
In fact, every search engine (e.g. Lucene) does.
What does not work is assuming you have TF-IDF vectors in memory.
Search engines such as Lucene naturally avoid keeping all data in memory. Instead they load one column at a time (which, due to sparsity, is not a lot). IDF arises trivially from the length of the inverted list.
The point is, you don't transform your data into TF-IDF, and then do standard cosine similarity.
Instead, you use the current IDF weights when computing similarities, using a weighted cosine similarity (often modified with additional weighting, boosting terms, penalizing terms, etc.)
This approach will work essentially with any algorithm that allows attribute weighting at evaluation time. Many algorithms will do, but unfortunately very few implementations are flexible enough; most expect you to multiply the weights into your data matrix before training. | I'm looking to use scikit-learn's HashingVectorizer because it's a great fit for online learning problems (new tokens in text are guaranteed to map to a "bucket"). Unfortunately the implementation included in scikit-learn doesn't seem to include support for tf-idf features. Is passing the vectorizer output through a TfidfTransformer the only way to make online updates work with tf-idf features, or is there a more elegant solution out there? | 0 | 1 | 2,712
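One possible reading of the second answer, sketched with scikit-learn's HashingVectorizer: keep a running document-frequency count per hash bucket and apply the current IDF only when a similarity is computed, instead of baking it into the stored vectors. This is an illustration under those assumptions, not an API that ships with scikit-learn.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer


class OnlineTfidf:
    """Store raw term frequencies; apply the *current* IDF at query time."""

    def __init__(self, n_features=2 ** 18):
        self.vec = HashingVectorizer(n_features=n_features,
                                     alternate_sign=False, norm=None)
        self.df = np.zeros(n_features)
        self.n_docs = 0

    def add(self, text):
        x = self.vec.transform([text])
        self.df[x.indices] += 1          # one count per bucket present in this doc
        self.n_docs += 1
        return x                         # caller keeps the raw TF row

    def similarity(self, text, tf_row):
        idf = np.log((1 + self.n_docs) / (1 + self.df)) + 1   # smoothed IDF, as of now
        q = self.vec.transform([text]).multiply(idf)
        d = tf_row.multiply(idf)
        num = q.dot(d.T).toarray()[0, 0]
        den = np.linalg.norm(q.toarray()) * np.linalg.norm(d.toarray()) + 1e-12
        return num / den


model = OnlineTfidf()
doc0 = model.add("the cat sat on the mat")
model.add("the dog chased the cat")
print(model.similarity("cat on the mat", doc0))
```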
0 | 24,519,425 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-07-01T20:54:00.000 | 0 | 2 | 0 | Matplotlib saves pdf with data outside set | 24,519,113 | 0 | python,pdf,matplotlib,plot | If you don't have a requirement to use PDF figures, you can save the matplotlib figures as .png; this format just contains the data on the screen. E.g., I tried saving a large scatter plot as PDF and its size was 198M; as PNG it came out as 270K; plus I've never had any problems using PNG inside LaTeX. | I have a problem with Matplotlib. I usually make big plots with many data points and then, after zooming or setting limits, I save only a specific subset of the original plot as PDF. The problem comes when I open this file: matplotlib saves all the data into the PDF, hiding the points outside of the range but still embedding them. This makes it almost impossible to open those plots afterwards or to import them into LaTeX.
Any idea of how I could solve this problem would be really welcome.
Thanks a lot | 0 | 1 | 247 |
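A small sketch of the PNG suggestion, plus — as an alternative the answer does not mention — filtering the data to the visible range before plotting so a vector format only embeds the visible points. Data and limits are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.random.randn(2, 2_000_000)     # placeholder data

# Option 1: raster output; file size no longer grows with the number of points.
plt.scatter(x, y, s=1)
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.savefig("zoomed.png", dpi=300)

# Option 2: keep the PDF, but only plot what falls inside the limits.
mask = (np.abs(x) < 1) & (np.abs(y) < 1)
plt.figure()
plt.scatter(x[mask], y[mask], s=1)
plt.savefig("zoomed.pdf")
```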
0 | 24,531,535 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-07-02T11:22:00.000 | 1 | 1 | 0 | I have generated a pdf file using matplotlib and I want to add a logo to this pdf file. How can I do it | 24,529,823 | 1.2 | image,python-2.7,matplotlib,pdf-generation | If you can do it the other way round, it is easier:
plot the image
load the logo from file with, e.g. Image module (PIL)
add the logo with plt.imshow, use the extent keyword to place it correctly
save the image into PDF
(You may even want to plot the logo first, so that it stays in the background.)
Unfortunately, this does not work with vector graphics, but as logos usually are not that large, you may use a .png or even a .jpg.
If you already have the PDFs then this is not a matplotlib or Python question. You need some PDF editing tools or libraries to add the logo. Possible, but an entirely different thing. | I am using matplotlib to draw a graph using some data and I have saved it in PDF format. Now I want to add a logo to this file. How can I do this?
Thanks in advance | 0 | 1 | 141 |
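A sketch of the steps listed in the accepted answer; the logo path, placement coordinates, and plotted data are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(np.arange(10), np.arange(10) ** 2)       # the actual plot

logo = plt.imread("logo.png")                    # PIL's Image.open + np.asarray works too
# extent = [left, right, bottom, top] in data coordinates; zorder=-1 keeps it behind
ax.imshow(logo, extent=[0.5, 2.5, 60, 80], aspect="auto", zorder=-1)

fig.savefig("with_logo.pdf")
```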
0 | 45,240,779 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2014-07-02T16:36:00.000 | 5 | 2 | 0 | How to Combine pyWavelet and openCV for image processing? | 24,536,552 | 0.462117 | python,opencv,image-processing,dwt | Navaneeth's answer is correct, but with two corrections:
1- OpenCV reads and saves images as BGR, not RGB, so you should use cv2.COLOR_BGR2GRAY to be exact.
2- The maximum level in _multilevel.py is 7, not 10, so you should call: w2d("test1.png", 'db1', 7) | I need to do image processing in Python. I want to use a wavelet transform as the filter bank. Can anyone suggest which library I should use?
I had pywavelet installed, but I don't know how to combine it with OpenCV. If I use the wavedec2 command, it raises ValueError("Expected 2D input data.")
Can anyone help me? | 0 | 1 | 11,865 |
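A minimal sketch of combining the two libraries; the file name is a placeholder. The ValueError in the question comes from passing a 3-channel (BGR) image, so converting to a single 2-D grayscale array first is the fix the answer alludes to.

```python
import cv2
import pywt

img = cv2.imread("test1.png")                    # OpenCV loads BGR, shape (H, W, 3)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # 2-D array, which is what pywt expects

coeffs = pywt.wavedec2(gray, "db1", level=2)     # multilevel 2-D DWT
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = coeffs
print(cA2.shape)
```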
0 | 61,270,549 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2014-07-04T18:21:00.000 | 0 | 3 | 0 | Sample a truncated integer power law in Python? | 24,579,269 | 0 | python,numpy,random,distribution | Use numpy.random.zipf and just reject any samples greater than or equal to m | What function can I use in Python if I want to sample a truncated integer power law?
That is, given two parameters a and m, generate a random integer x in the range [1,m) that follows a distribution proportional to 1/x^a.
I've been searching around numpy.random, but I haven't found this distribution. | 0 | 1 | 2,482 |
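The one-line answer above, spelled out as a hedged helper; note that numpy's Zipf sampler requires a > 1.

```python
import numpy as np


def truncated_zipf(a, m, size, rng=None):
    """Integers in [1, m) proportional to 1/x**a, by rejection from numpy's Zipf sampler."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.empty(size, dtype=np.int64)
    filled = 0
    while filled < size:
        draw = rng.zipf(a, size=size)      # unbounded Zipf samples, all >= 1
        draw = draw[draw < m]              # reject anything outside [1, m)
        take = min(size - filled, len(draw))
        out[filled:filled + take] = draw[:take]
        filled += take
    return out


samples = truncated_zipf(a=2.0, m=50, size=10_000)
print(samples.min(), samples.max())
```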