GUI and Desktop Applications (int64) | A_Id (int64) | Networking and APIs (int64) | Python Basics and Environment (int64) | Other (int64) | Database and SQL (int64) | Available Count (int64) | is_accepted (bool) | Q_Score (int64) | CreationDate (string) | Users Score (int64) | AnswerCount (int64) | System Administration and DevOps (int64) | Title (string) | Q_Id (int64) | Score (float64) | Tags (string) | Answer (string) | Question (string) | Web Development (int64) | Data Science and Machine Learning (int64) | ViewCount (int64)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 41,861,621 | 0 | 0 | 0 | 0 | 1 | false | 14 | 2014-10-28T00:40:00.000 | 2 | 5 | 0 | Merge CSVs in Python with different columns | 26,599,137 | 0.07983 | python,csv,merge | For those of us using 2.7, this adds an extra linefeed between records in "out.csv". To resolve this, just change the file mode from "w" to "wb". | I have hundreds of large CSV files that I would like to merge into one. However, not all CSV files contain all columns. Therefore, I need to merge files based on column name, not column position.
Just to be clear: in the merged CSV, values should be empty for a cell coming from a line which did not have the column of that cell.
I cannot use the pandas module, because it makes me run out of memory.
Is there a module that can do that, or some easy code? | 0 | 1 | 16,477 |
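The csv-module approach the answer above is patching (the extra-linefeed fix) can be sketched end-to-end with csv.DictReader/csv.DictWriter, which match fields by header name and leave missing cells empty. The two in-memory files and their columns below are made up for illustration; on Python 3 you would open the real output with open(path, "w", newline="") instead of the 2.x "wb" trick.

```python
import csv
import io

def merge_csvs(sources, out):
    """Merge CSV streams with differing columns, matching fields by header name."""
    readers = [list(csv.DictReader(s)) for s in sources]
    # Union of all column names, in first-seen order.
    fieldnames = []
    for rows in readers:
        for name in (rows[0].keys() if rows else []):
            if name not in fieldnames:
                fieldnames.append(name)
    writer = csv.DictWriter(out, fieldnames=fieldnames, restval="")
    writer.writeheader()
    for rows in readers:
        writer.writerows(rows)  # cells for columns a file lacks stay empty
    return fieldnames

# In-memory stand-ins for two of the hundreds of files.
a = io.StringIO("name,age\nalice,30\n")
b = io.StringIO("name,city\nbob,berlin\n")
out = io.StringIO()
cols = merge_csvs([a, b], out)
merged = out.getvalue()
```

For the asker's memory constraint, the same idea works streaming: read only the headers in a first pass to build the field-name union, then copy rows file by file instead of materializing them in lists.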
0 | 26,640,860 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-10-28T18:29:00.000 | -2 | 1 | 0 | Multiple networks in Theano | 26,615,835 | 1.2 | python,theano | In a rather simplified way I've managed to find a nice solution. The trick was to create one model, define its function and then create the other model and define the second function. Works like a charm | I'd like to have 2 separate networks running in Theano at the same time, where the first network trains on the results of the second. I could embed both networks in the same structure but that would be a real mess in the entire forward pass (and probably won't even work because of the shared variables etc.)
The problem is that when I define a theano function I don't specify the model it's applied on, meaning if I'm having a predict and a train function they'll both work on the first model I define.
Is there a way to overcome that issue? | 0 | 1 | 111 |
0 | 62,499,396 | 0 | 0 | 0 | 0 | 2 | false | 31 | 2014-10-30T15:39:00.000 | 0 | 14 | 0 | Installing NumPy and SciPy on 64-bit Windows (with Pip) | 26,657,334 | 0 | python,numpy,scipy,windows64 | Follow these steps:
Open CMD as administrator
Enter this command : cd..
cd..
cd Program Files\Python38\Scripts
Download the package you want and put it in Python38\Scripts folder.
pip install packagename.whl
Done
You can write your python version instead of "38" | I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit, that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything.
I tried to install everything via Pip and most things worked. But when I came to SciPy, it complained about missing a Fortran compiler. So I installed Fortran via MinGW/MSYS. But you can't install SciPy right away after that, you need to reinstall NumPy. So I tried that, but now it doesn't work anymore via Pip nor via easy_install. Both give these errors:
There are a lot of errors about LNK2019 and LNK1120.
I get a lot of compiler errors such as C2065, C2054, C2085, C2143, etc. They belong together, I believe.
There is no Fortran linker found, but I have no idea how to install that, can't find anything on it.
And many more errors which are already out of the visible part of my cmd-windows...
The fatal error is about LNK1120:
build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 7 unresolved externals
error: Setup script exited with error: Command "C:\Users\me\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\BLAS /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild\amd64 /LIBPATH:build\temp.win-amd64-2.7 lapack.lib blas.lib /EXPORT:initlapack_lite build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.pyd.manifest" failed with exit status 1120
What is the correct way to install the 64-bit versions NumPy and SciPy on a 64-bit Windows machine? Did I miss anything? Do I need to specify something somewhere? There is no information for Windows on these problems that I can find, only for Linux or Mac OS X, but they don't help me as I can't use their commands. | 0 | 1 | 131,918 |
0 | 44,685,941 | 0 | 0 | 0 | 0 | 2 | false | 31 | 2014-10-30T15:39:00.000 | 0 | 14 | 0 | Installing NumPy and SciPy on 64-bit Windows (with Pip) | 26,657,334 | 0 | python,numpy,scipy,windows64 | for python 3.6, the following worked for me
launch cmd.exe as administrator
pip install numpy-1.13.0+mkl-cp36-cp36m-win32
pip install scipy-0.19.1-cp36-cp36m-win32 | I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit, that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything.
I tried to install everything via Pip and most things worked. But when I came to SciPy, it complained about missing a Fortran compiler. So I installed Fortran via MinGW/MSYS. But you can't install SciPy right away after that, you need to reinstall NumPy. So I tried that, but now it doesn't work anymore via Pip nor via easy_install. Both give these errors:
There are a lot of errors about LNK2019 and LNK1120.
I get a lot of compiler errors such as C2065, C2054, C2085, C2143, etc. They belong together, I believe.
There is no Fortran linker found, but I have no idea how to install that, can't find anything on it.
And many more errors which are already out of the visible part of my cmd-windows...
The fatal error is about LNK1120:
build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 7 unresolved externals
error: Setup script exited with error: Command "C:\Users\me\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\BLAS /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild\amd64 /LIBPATH:build\temp.win-amd64-2.7 lapack.lib blas.lib /EXPORT:initlapack_lite build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.pyd.manifest" failed with exit status 1120
What is the correct way to install the 64-bit versions NumPy and SciPy on a 64-bit Windows machine? Did I miss anything? Do I need to specify something somewhere? There is no information for Windows on these problems that I can find, only for Linux or Mac OS X, but they don't help me as I can't use their commands. | 0 | 1 | 131,918 |
0 | 26,727,002 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-11-04T02:49:00.000 | -2 | 2 | 0 | Summation of every row, column and diagonal in a 3x3 matrix numpy | 26,726,950 | -0.197375 | python,numpy,matrix,indexing,pygame | set a bool to checks every turn if someone has won. if it returns true, then whosever turn it is has won
so, for instance, it is x turn, he plays the winning move, bool checks if someone has won,returns true, print out (player whose turn it is) has won! and end game. | My assignment is Tic-Tac_Toe using pygame and numpy. I Have almost all of the program done. I just need help understanding how to find if a winner is found. I winner is found if the summation of ANY row, column, or diagonal is equal to 3.
I have two 3x3 matrices filled with 0's. Let's call them xPlayer and oPlayer. The matrices get filled with 1 every time player x or player o chooses their choice at a certain location. So if player x selects [0,0], the matrix location at [0,0] gets a 1 value. This should continue until the summation of any row, column, or diagonal is 3. If all the places in both the matrices are 1, then there is no winner.
I need help finding the winner. I'm really new to python so I don't know much about indexing though a matrix. Any help would be greatly appreciated!
EDIT: Basically, how do you find out the summation of every row, column, and diagonal to check if ANY of them are equal to 3. | 0 | 1 | 1,887 |
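The edit's question, summing every row, column, and diagonal of a 3x3 matrix, maps directly onto numpy's axis sums and np.trace. A minimal sketch (the winning board below is a made-up example):

```python
import numpy as np

def has_won(board):
    """True if any row, column, or diagonal of a 3x3 0/1 board sums to 3."""
    rows = board.sum(axis=1)                            # three row sums
    cols = board.sum(axis=0)                            # three column sums
    diags = np.array([np.trace(board),                  # main diagonal
                      np.trace(np.fliplr(board))])      # anti-diagonal
    return bool((rows == 3).any() or (cols == 3).any() or (diags == 3).any())

x_player = np.zeros((3, 3), dtype=int)
x_player[0, 0] = x_player[1, 1] = x_player[2, 2] = 1    # x fills the main diagonal
```

Calling has_won(x_player) after every move on both the xPlayer and oPlayer matrices tells you whose move ended the game.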
0 | 29,713,740 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2014-11-04T10:57:00.000 | 2 | 1 | 0 | Processing musical genres using K-nn algorithm, how to deal with extracted feature? | 26,733,418 | 0.379949 | python,algorithm,classification,extraction | One approach would be to take the least RMS energy value of the signal as a parameter for classification.
You should use a music segment, rather than using the whole music file for classification.Theoretically, the part of the music of 30 sec, starting after the first 30 secs of the music, is best representative for genre classification.
So instead of taking the whole array, what you can do is consider the part which corresponds to this time window, 30 sec-59 sec. Calculate the RMS energy of the signal separately for every music file, averaged over the whole time. You may also take other features into account, e.g., MFCC.
In order to use MFCC, you may go for the averaged value of all signal windows for a particular music file. Make a feature vector out of it.
You may use the difference between the features as the distance between the data points for classification. | I'm developing a little tool which is able to classify musical genres. To do this, I would like to use a K-nn algorithm (or another one, but this one seems to be good enough) and I'm using python-yaafe for the feature extraction.
My problem is that, when I extract a feature from my song (example: mfcc), as my songs are 44100Hz-sampled, I retrieve a lot (number of sample windows) of 12-values-array, and I really don't know how to deal with that. Is there an approach to get just one representative value per feature and per song? | 0 | 1 | 449 |
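One common answer to the "one representative value per feature and per song" question is temporal pooling: average (and optionally take the standard deviation of) each MFCC coefficient over all analysis windows. The frame matrix below is random data standing in for one song's yaafe output; the shapes are illustrative.

```python
import numpy as np

# yaafe yields one 12-dim MFCC vector per analysis window; pool over time
# to get a single fixed-length vector per song for k-NN.
frames = np.random.default_rng(1).normal(size=(8000, 12))  # stand-in for one song
song_vector = np.concatenate([frames.mean(axis=0),         # per-coefficient mean
                              frames.std(axis=0)])         # per-coefficient spread
```

Every song then contributes one length-24 vector, so the k-NN distance computation no longer depends on song length or sample rate.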
0 | 26,763,840 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-11-05T17:30:00.000 | 0 | 1 | 0 | Can CryptGenRandom generate all possible permutations? | 26,763,448 | 0 | python,random | You are almost correct: you need a generator not with a period of 400!, but with an internal state of more than log2(400!) bits (which will also have a period larger than 400!, but the latter condition is not sufficient). So you need at least 361 bytes of internal state. CryptGenRandom doesn't qualify, but it ought to be sufficient to generate 361 or more bytes with which to seed a better generator.
I think Marsaglia has versions of MWC with 1024 and 4096 bytes of state. | I would like to shuffle a relatively long array (length ~400). While I am not a cryptography expert, I understand that using a random number generator with a period of less than 400! will limit the space of the possible permutations that can be generated.
I am trying to use Python's random.SystemRandom number generator class, which, in Windows, uses CryptGenRandom as its RNG.
Does anyone smarter than me know what the period of this number generator is? Will it be possible for this implementation to reach the entire space of possible permutations? | 0 | 1 | 431 |
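For reference, shuffling with random.SystemRandom looks like the sketch below. The generator draws every value from the OS entropy source (CryptGenRandom on Windows, /dev/urandom elsewhere), so there is no small deterministic internal state being stepped through; the period concern applies to seeded pseudo-random generators.

```python
import random

rng = random.SystemRandom()   # backed by os.urandom / CryptGenRandom
deck = list(range(400))       # the ~400-element array from the question
shuffled = deck[:]            # copy, so the original order survives for checking
rng.shuffle(shuffled)         # Fisher-Yates driven by OS entropy
```

The result is a permutation of the original list; which permutation you get is decided entirely by the OS random source.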
0 | 26,769,689 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-11-05T20:48:00.000 | 0 | 1 | 0 | Multiindex or dictionaries | 26,766,803 | 0 | python,pandas,hierarchy,multi-index | In general in my experience is more difficult to compare different Data Frames, so I would suggest to use one.
With some practical example I can try to give better advice.
However, personally I prefer to use an extra column instead of many Multiindex levels, but it's just my personal opinion. | I am trying to analyze results from several thermal building simulations. Each simulation produces hourly data for several variables and for each room of the analyzed building. Simulations can be repeated for different scenarios and each one of these scenarios will produce a different hourly set of data for each room and each variable.
At the moment I create a separate dataframe for each scenario (Multiindex with variables and rooms). My goal is to be able to compare different scenarios along different dimensions: same room, rooms average, time average, etc..
From what I have understood so far there are two options:
create a dictionary of dataframes where the keys represents the scenarios
add an additional level (3rd) to the multiindex in the same dataframe representing the scenario
Which of the above will give me the best results in terms of performance and flexibility.
Typical questions could be:
in which scenario the average room temperature is below a threshold for more hours
in which scenario the maximum room temperature is below a threshold
what's the average temperature in July for each room
As you can see I need to perform operations at different hierarchical levels: within a scenario and also comparison between different scenarios.
Is it better to keep everything in the same dataframe or distribute the data? | 0 | 1 | 173 |
0 | 26,826,242 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2014-11-07T03:30:00.000 | 1 | 3 | 0 | What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters? | 26,793,585 | 0.066568 | python,cluster-analysis,k-means | K-means is indeed sensitive to noise BUT investigate your data!
Have you pre-processed your "real-data" before applying the distance measure on it?
Are you sure your distance metric represents proximity as you expected?
There are a lot of possible "bugs" that may cause this scenario; it is not necessarily k-means' fault.
(parameters are - number of clusters=8, number of runs for different centroids =10)
The number of documents are 5800
Surprisingly the result for the clustering is
90% of documents belong to cluster - 7 (final cluster)
9% of documents belong to cluster - 0 (first cluster)
and the rest 6 clusters have only a single sample. What might be the reason for this? | 0 | 1 | 272 |
0 | 26,817,383 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2014-11-07T03:30:00.000 | 1 | 3 | 0 | What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters? | 26,793,585 | 0.066568 | python,cluster-analysis,k-means | K-means is highly sensitive to noise!
Noise, which is farther away from the data, becomes even more influential when you square its deviations. This makes k-means really sensitive to it.
Produce a data set with 50 points distributed N(0;0.1), 50 points distributed N(1;0.1) and 1 point at 100. Run k-means with k=2, and you are bound to get that one point as its own cluster, and the two real clusters merged.
It's just how k-means is supposed to work: find a least-squared quantization of the data; it does not care about "clumps" in your data set or not.
Now it may often be beneficial (with respect to the least-squares objective) to make one-element clusters if there are outliers (here, you apparently have at least 6 such outliers). In such cases, you may need to increase k by the number of such one-element clusters you get. Or use outlier detection methods, or a clustering algorithm such as DBSCAN which is tolerant wrt. noise. | I use the k-means algorithm to clustering set of documents.
(parameters are - number of clusters=8, number of runs for different centroids =10)
The number of documents are 5800
Surprisingly the result for the clustering is
90% of documents belong to cluster - 7 (final cluster)
9% of documents belong to cluster - 0 (first cluster)
and the rest 6 clusters have only a single sample. What might be the reason for this? | 0 | 1 | 272 |
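The thought experiment in the answer above (50 points near 0, 50 near 1, one outlier at 100, k=2) is easy to reproduce with a bare-bones 1-D Lloyd's algorithm; the outlier captures a centroid for itself and the two real clusters merge, which is exactly the "90% in one cluster" pattern the question describes.

```python
def kmeans_1d(points, centers, iters=20):
    """Plain Lloyd's algorithm on 1-D data with squared-distance assignment."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest centroid
            j = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[j].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# 50 points at 0, 50 points at 1, and a single far outlier.
data = [0.0] * 50 + [1.0] * 50 + [100.0]
centers, clusters = kmeans_1d(data, centers=[0.0, 1.0])
sizes = sorted(len(c) for c in clusters)   # -> one singleton, one merged cluster
```

The idealized points (exact 0s and 1s instead of Gaussian samples) are a simplification so the outcome is deterministic; the qualitative result matches the answer's claim.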
0 | 26,793,992 | 0 | 0 | 0 | 0 | 3 | true | 0 | 2014-11-07T03:30:00.000 | 1 | 3 | 0 | What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters? | 26,793,585 | 1.2 | python,cluster-analysis,k-means | K-means clustering attempts to minimize sum of distances between each point and a centroid of a cluster each point belongs to. Therefore, if 90% of your points are close together the sum of distances between those points and the cluster centroid is fairly small, Therefore, the k-means solving algorithm puts a centroid there. Single points are put in their own cluster because they are really far from other points, and a cluster of those points with other points would not be optimal. | I use the k-means algorithm to clustering set of documents.
(parameters are - number of clusters=8, number of runs for different centroids =10)
The number of documents are 5800
Surprisingly the result for the clustering is
90% of documents belong to cluster - 7 (final cluster)
9% of documents belong to cluster - 0 (first cluster)
and the rest 6 clusters have only a single sample. What might be the reason for this? | 0 | 1 | 272 |
0 | 36,050,176 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2014-11-07T06:39:00.000 | 11 | 3 | 0 | Output 50 samples closest to each cluster center using scikit-learn.k-means library | 26,795,535 | 1 | python,scikit-learn,k-means | One correction to the @snarly's answer.
after performing d = km.transform(X)[:, j],
d has elements of distances to centroid(j), not similarities.
so in order to give closest top 50 indices, you should remove '-1', i.e.,
ind = np.argsort(d)[::][:50]
(normally, np.argsort sorts the distances in ascending order.)
Also, perhaps the shorter way of doing
ind = np.argsort(d)[::-1][:50]
could be
ind = np.argsort(d)[:-51:-1]. | I have fitted a k-means algorithm on 5000+ samples using the python scikit-learn library. I want to have the 50 samples closest to a cluster center as an output. How do I perform this task? | 0 | 1 | 9,095 |
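The indexing point above can be sanity-checked on toy data: the distance column is sorted ascending by np.argsort, so the nearest samples are simply the first 50 indices. X and the centroid below are made-up stand-ins for the real data and km.cluster_centers_[j].

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))        # toy data standing in for the 5000+ samples
centroid = np.zeros(4)               # stand-in for one cluster center

# km.transform(X)[:, j] gives *distances* to centroid j, so the closest
# samples have the SMALLEST values -> take the head of an ascending argsort.
d = np.linalg.norm(X - centroid, axis=1)
closest50 = np.argsort(d)[:50]       # indices of the 50 nearest samples
```

Taking the tail of a reversed argsort (the `[:-51:-1]` form) would instead return the 50 *farthest* samples.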
0 | 26,883,907 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2014-11-12T09:49:00.000 | 1 | 1 | 0 | Import error when using scipy.io module | 26,883,835 | 0.197375 | python,scipy | I would take a guess and say your Python doesnt know where you isntalled scipy.io. add the scipy path to PYTHONPATH. | I'm involved in a raspberry pi project and I use python language. I installed scipy, numpy, matplotlib and other libraries correctly. But when I type
from scipy.io import wavfile
it gives error as "ImportError: No module named scipy.io"
I tried to re-install them, but when I type the sudo command, it says the new version of scipy is already installed. I'm stuck at this point, please help me... Thank you
0 | 26,904,535 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-11-12T23:08:00.000 | 4 | 1 | 0 | Fastest Count Vectorizer Implementation | 26,898,410 | 1.2 | python,machine-learning,nlp,scikit-learn,vectorization | Have you tried HashingVectorizer? It's slightly faster (up to 2X if I remember correctly). Next step is to profile the code, strip the features of CountVectorizer or HashingVectorizer that you don't use and rewrite the remaining part in optimized Cython code (after profiling again).
Vowpal Wabbit's bare-bone feature processing that uses the hashing trick by default might give you a hint of what is achievable. | I'm looking for an implementation of n-grams count vectorization that is more efficient than scikit-learn's CountVectorizer. I've identified the CountVectorizer.transform() call as a huge bottleneck in a bit of software, and can dramatically increase model throughput if we're able to make this part of the pipeline more efficient. Fit time is not important, we're only concerned with transform time. The end output must be a scipy.sparse vector. If anyone has any leads for potential alternatives it would be much appreciated. | 0 | 1 | 1,295 |
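The hashing trick that makes HashingVectorizer and Vowpal Wabbit fast (no vocabulary fit, fixed output width) can be sketched in a few lines of pure Python. The regex tokenizer and the feature count are arbitrary choices for illustration; note that Python's built-in hash() on strings is salted per process, so a real pipeline would use a stable hash.

```python
import re
from collections import Counter

def hashing_vectorize(text, n_features=2 ** 10):
    """Map token counts into a fixed-size vector via the hashing trick."""
    vec = [0] * n_features
    for token, count in Counter(re.findall(r"\w+", text.lower())).items():
        # collisions are possible but rare for large n_features
        vec[hash(token) % n_features] += count
    return vec

v = hashing_vectorize("the cat sat on the mat")
```

Because there is no fitted vocabulary, transform time is a single pass over the tokens, which is exactly why this family of vectorizers beats CountVectorizer on throughput.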
0 | 26,917,183 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-11-13T19:45:00.000 | 2 | 2 | 1 | What is the most efficient way to write 3GB of data to datastore? | 26,917,114 | 0.197375 | python-2.7,google-app-engine,google-cloud-datastore | If you need to store each row as a separate entity, it does not matter how you create these entities - you can improve the performance by batching your requests, but it won't affect the costs.
The costs depend on how many indexed properties you have in each entity. Make sure that you only index the properties that you need to be indexed. | I have a 3Gb csv file. I would like to write all of the data to GAE datastore. I have tried reading the file row by row and then posting the data to my app, but I can only create around 1000 new entities before I exceed the free tier and start to incur pretty hefty costs. What is the most efficient / cost effective way to upload this data to datastore? | 1 | 1 | 102 |
0 | 26,942,545 | 0 | 1 | 0 | 0 | 1 | false | 52 | 2014-11-15T04:16:00.000 | 5 | 10 | 0 | Reading csv zipped files in python | 26,942,476 | 0.099668 | python-2.7,csv,zip | Yes. You want the module 'zipfile'
You open the zip file itself with zipfile.ZipFile(filename[, mode]); zipfile.ZipInfo objects only describe individual members.
You can then use ZipFile.infolist() to enumerate each file within the zip, and extract it with ZipFile.open(name[, mode[, pwd]]) | I'm trying to get data from a zipped csv file. Is there a way to do this without unzipping the whole files? If not, how can I unzip the files and read them efficiently? | 0 | 1 | 70,401 |
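A self-contained sketch of reading a CSV member straight out of an archive without extracting anything to disk. The archive here is built in memory purely so the example runs; with a real file you would pass its path to zipfile.ZipFile.

```python
import csv
import io
import zipfile

# Build a tiny zip in memory so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data.csv", "a,b\n1,2\n3,4\n")

rows = []
with zipfile.ZipFile(buf) as zf:            # open the archive itself
    for info in zf.infolist():              # enumerate members
        if info.filename.endswith(".csv"):
            with zf.open(info) as raw:      # stream one member, no extraction
                text = io.TextIOWrapper(raw, encoding="utf-8")
                rows.extend(csv.reader(text))
```

zf.open returns a binary stream, so wrapping it in io.TextIOWrapper lets csv.reader consume it row by row, keeping memory use flat even for large members.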
0 | 55,484,853 | 0 | 0 | 0 | 0 | 1 | false | 11 | 2014-11-15T19:23:00.000 | 4 | 2 | 0 | Pass pandas dataframe into class | 26,949,755 | 0.379949 | python,class,pandas | I would think you could create the dataframe in the first instance with
a = MyClass(my_dataframe)
and then just make a copy
b = a.copy()
Then b is independent of a | I would like to create a class from a pandas dataframe that is created from csv. Is the best way to do it, by using a @staticmethod? so that I do not have to read in dataframe separately for each object | 0 | 1 | 37,394 |
0 | 26,963,180 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-11-16T09:45:00.000 | 0 | 1 | 0 | How to use sklearn's DBSCAN with a spherical metric? | 26,955,646 | 1.2 | python,scikit-learn,dbscan,metric | Have you tried metric="precomputed"?
Then pass the distance matrix to the DBSCAN.fit function instead of the data.
From the documentation:
X array [n_samples, n_samples] or [n_samples, n_features] :
Array of distances between samples, or a feature array. The array is treated as a feature array unless the metric is given as ‘precomputed’. | I have a set of data distributed on a sphere and I am trying to understand what metrics must be given to the function DBSCAN distributed by scikit-learn. It cannot be the Euclidean metrics, because the metric the points are distributed with is not Euclidean. Is there, in the sklearn packet, a metric implemented for such cases or is dividing the data in small subsets the easiest (if long and tedious) way to proceed?
P.S. I am a noob at python
P.P.S. In case I "precompute" the metric, in what form do I have to submit my precomputed data?
Like this?
| | event1 | event2 | ... |
|---|---|---|---|
| event1 | 0 | distance(event1,event2) | ... |
| event2 | distance(event1,event2) | 0 | ... |
Please, help? | 0 | 1 | 816 |
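For the "precomputed" route, the matrix sketched in the question is just a symmetric distance matrix. Below is a minimal, assumption-laden example that precomputes great-circle (haversine) distances in radians for (lat, lon) points given in degrees; the three points are made up, and the resulting matrix is what you would hand to DBSCAN(metric="precomputed").fit(...).

```python
from math import radians, sin, cos, asin, sqrt, pi

def great_circle(p, q):
    """Haversine distance (in radians) between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * asin(sqrt(h))

points = [(0.0, 0.0), (0.0, 90.0), (45.0, 45.0)]   # made-up sky/earth positions
n = len(points)
D = [[great_circle(points[i], points[j]) for j in range(n)] for i in range(n)]
# D is symmetric with a zero diagonal -- the shape DBSCAN expects for
# metric="precomputed"; eps must then be expressed in the same units (radians).
```

Multiply by the sphere's radius if you want eps in physical units instead of radians.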
0 | 26,958,901 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-11-16T15:45:00.000 | 2 | 2 | 0 | Using isinstance() versus duck typing | 26,958,759 | 0.197375 | python,matplotlib,duck-typing,isinstance | Why not write two separate functions, one that treats its input as a color map, and another that treats its input as a color? This would be the simplest way to deal with the problem, and would both avoid surprises, and leave you room to expand functionality in the future. | I'm writing an interface to matplotlib, which requires that lists of floats are treated as corresponding to a colour map, but other types of input are treated as specifying a particular colour.
To do this, I planned to use matplotlib.colors.colorConverter, which is an instance of a class that converts the other types of input to matplotlib RGBA colour tuples. However, it will also convert floats to a grayscale colour map. This conflicts with the existing functionality of the package I'm working on and I think that would be undesirable.
My question is: is it appropriate/Pythonic to use an isinstance() check prior to using colorConverter to make sure that I don't incorrectly handle lists of floats? Is there a better way that I haven't thought of?
I've read that I should generally code to an interface, but in this case, the interface has functionality that differs from what is required. | 0 | 1 | 888 |
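A minimal sketch of the isinstance() gate being discussed. The policy chosen here (only a non-empty list of floats means "colormap data", while tuples, strings, etc. fall through to the colour converter) is one possible convention for the wrapper, not matplotlib's own behaviour.

```python
def wants_colormap(obj):
    """Explicit type check: a list of floats means 'map through a colormap';
    everything else is handed to matplotlib's colour conversion machinery."""
    return isinstance(obj, list) and bool(obj) and all(isinstance(v, float) for v in obj)

a = wants_colormap([0.1, 0.5, 0.9])   # float list -> colormap data
b = wants_colormap("red")             # colour name -> single colour
c = wants_colormap((1.0, 0.0, 0.0))   # tuple -> treated as an RGB colour
```

Treating tuples as colours and lists as data is exactly the kind of disambiguation rule that duck typing alone cannot express here, which is why the explicit check is defensible.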
0 | 26,967,730 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-11-17T06:03:00.000 | 5 | 1 | 0 | Is there a way to alter the edge opacity in Python igraph? | 26,966,487 | 1.2 | python,igraph,opacity | Edge opacity can be altered with the color attribute of the edge or with the edge_color keyword argument of plot(). The colors that you specify there are passed through the color_name_to_rgba function so you can use anything that color_name_to_rgba understands there; the easiest is probably an (R, G, B, A) tuple or the standard HTML #rrggbbaa syntax, where A is the opacity. Unfortunately this is not documented well but I'll fix it in the next release. | I know that you can adjust a graphs overall opacity in the plot function (opacity = (0 to 1)), but I cannot find anything in the manual or online searches that speak of altering the edge opacity (or transparency)? | 0 | 1 | 1,146 |
0 | 27,003,691 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-18T20:44:00.000 | 0 | 3 | 0 | numpy arrays will not concatenate | 27,003,660 | 1.2 | python,numpy | try np.hstack((a.reshape(1496, 1), b.reshape(1496, 1), c)). To be more general, it is np.hstack((a.reshape(a.size, 1), b.reshape(b.size, 1), c)) | I have three arrays a, b, c.
The are the shapes (1496,) (1496,) (1496, 1852). I want to join them into a single array or dataframe.
The first two arrays are single column vector, where the other has several columns. All three have 1496 rows.
My logic is to join them into a single array by df=np.concatenate((a,b,c))
But the error says dimensions must be the same size.
I also tried np.hstack()
Thanks
MPG | 0 | 1 | 47 |
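The accepted answer's fix can be made runnable with toy shapes (10 rows standing in for the 1496 in the question); np.column_stack is an equivalent shorthand that does the column reshaping for you.

```python
import numpy as np

n = 10                                  # stands in for the 1496 rows
a = np.arange(n, dtype=float)           # shape (n,)
b = np.arange(n, dtype=float) * 2       # shape (n,)
c = np.ones((n, 5))                     # shape (n, 5), like the (1496, 1852) array

# np.concatenate((a, b, c)) fails: a and b are 1-D while c is 2-D.
# Either reshape the vectors into columns, or let column_stack do it:
joined = np.column_stack((a, b, c))     # shape (n, 7)
same = np.hstack((a.reshape(-1, 1), b.reshape(-1, 1), c))
```

Both forms produce the same (n, 7) array: the two vectors become the first two columns, followed by c's columns.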
0 | 27,011,549 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-11-19T07:39:00.000 | 2 | 2 | 0 | Shoud I use numpy for a image manipulation program? why | 27,011,456 | 1.2 | python,arrays,image-processing,numpy | Well I think you could do that, but maybe less convenient. The reasons could be:
numpy supports all the matrix manipulations, and since it is optimized it can be much faster (you can also switch to OpenBLAS to make it faster still). For image-processing problems, where images can get large, speed can be critical.
numpy has lots of useful functions, such as numpy.fft for Fourier transforms, or numpy.convolve to do convolution. These can be critical for image processing.
All the modules, or packages, are nearly all based on numpy, such as scipy, graphlab and matplotlib. For example, you should use 'import matplotlib.pyplot as plt; plt.imshow()' to show images; other array types may not be accepted as arguments.
0 | 27,019,410 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-11-19T14:24:00.000 | 1 | 1 | 0 | Convert String containing letters to Int efficiently - Apache Spark | 27,019,270 | 0.197375 | java,python,scala,apache-spark | If you just want any matchable String to an int - String.hashCode(). However you will have to deal with possible hash collisions. Alternatively you'd have to convert each character to its int value and append (not add) all of these together. | I am working with a dataset that has users as Strings (ie. B000GKXY4S). I would like to convert each of these users to int, so I can use Rating(user: Int, product: Int, rating: Double) class in Apache Spark ALS. What is the most efficient way to do this? Preferably using Spark Scala functions or python native functions. | 0 | 1 | 735 |
0 | 27,033,373 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2014-11-20T06:34:00.000 | 2 | 3 | 0 | Import csv into QGIS using Python | 27,033,261 | 0.132549 | python,csv,qgis | There is a parenthesis missing from the end of your --6 line of code. | I am attempting to import a file into QGIS using a python script. I'm having a problem getting it to accept the CRS. Code so far
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from qgis.core import *
from qgis.utils import iface
----1 Set file name here
InFlnm='Input.CSV'
---2 Set pathname here
InDrPth='G:/test'
---3 Build the file name and path for uri
InFlPth="file:///"+InDrPth+InFlnm
---4 Set import Sting here note only need to set x and y others come for free!
uri = InFlPth+"?delimiter=%s&xField=%s&yField=%s" % (",","x","y")
---5 Load the points into a layer
bh = QgsVectorLayer(uri, InFlnm, "delimitedtext")
---6 Set the CRS (Not sure if this is working seems to?)
bh.setCrs(QgsCoordinateReferenceSystem(32365, QgsCoordinateReferenceSystem.EpsgCrsId)
---7 Display the layer in QGIS (Here I get a syntax error?)
QgsMapLayerRegistry.instance().addMapLayer(bh)
Now all the above works OK and QGIS prompts me for a CRS before executing the last line of the script to display the layer - as long as I comment out step 6.
However, if I attempt to set the CRS by removing ### from step 6, I get a syntax error reported on the last line that displays the points (step 7). Not sure what the trick is here - I'm pretty new to Python but know my way around some other programming languages.
0 | 27,035,506 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2014-11-20T08:50:00.000 | 1 | 3 | 0 | X=sm.add_constant(X, prepend=True) is not working | 27,035,257 | 1.2 | python,regression,linear-regression | If sm is a defined object in statsmodels, you need to invoke it by statsmodels.sm, or using from statsmodel import sm, then you can invoke sm directly. | I am trying to get the beta and the error term from a linear regression(OLS) in python. I am stuck at the statement X=sm.add_constant(X, prepend=True), which is returning an
error:"AttributeError: 'module' object has no attribute 'add_constant'"
I already installed the statsmodels module. | 0 | 1 | 8,510 |
0 | 61,634,875 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2014-11-20T08:50:00.000 | 5 | 3 | 0 | X=sm.add_constant(X, prepend=True) is not working | 27,035,257 | 0.321513 | python,regression,linear-regression | Try importing statsmodels.api
import statsmodels.api as sm | I am trying to get the beta and the error term from a linear regression(OLS) in python. I am stuck at the statement X=sm.add_constant(X, prepend=True), which is returning an
error:"AttributeError: 'module' object has no attribute 'add_constant'"
I already installed the statsmodels module. | 0 | 1 | 8,510 |
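As context for the answers: add_constant only prepends a column of ones (the intercept) to X, so even while the import is broken, the same design matrix can be built by hand. A numpy sketch with a made-up X:

```python
import numpy as np

# Equivalent of statsmodels' sm.add_constant(X, prepend=True):
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
X_const = np.column_stack((np.ones(len(X)), X))   # ones column first
```

Once the import is fixed (the usual cure for the AttributeError is importing statsmodels.api rather than the bare package), sm.add_constant produces the same array.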
0 | 27,050,808 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-11-20T21:38:00.000 | 6 | 2 | 0 | Is there a reason that scikit-learn only allows access to clf.coef_ with linear svms? | 27,050,055 | 1.2 | python,machine-learning,scikit-learn,svm | They simply don't exist for kernels that are not linear: The kernel SVM is solved in the dual space, so in general you only have access to the dual coefficients.
In the linear case this can be translated to primal feature space coefficients. In the general case these coefficients would have to live in the feature space spanned by the chosen kernel, which can be infinite dimensional. | I would like to calculate the primal variables w with a polynomial kernel svm, but to do this i need to compute clf.coef_ * clf.support_vectors_. Access is restricted to .coef_ on all kernel types except for linear - is there a reason for this, and is there another way to derive w in that case? | 0 | 1 | 1,416 |
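For the linear case the answer describes, the translation to primal coefficients is the matrix product of the dual coefficients with the support vectors. The arrays below are made-up stand-ins for clf.dual_coef_ (which stores alpha_i * y_i, shape (1, n_SV) for binary classification) and clf.support_vectors_:

```python
import numpy as np

dual_coef = np.array([[0.5, -0.25, -0.25]])        # stand-in for clf.dual_coef_
support_vectors = np.array([[1.0, 0.0],
                            [0.0, 1.0],
                            [1.0, 1.0]])           # stand-in for clf.support_vectors_

# w = sum_i (alpha_i * y_i) * x_i -- only meaningful for a linear kernel,
# which is exactly why sklearn exposes coef_ only in that case.
w = dual_coef @ support_vectors                    # shape (1, n_features)
```

For a polynomial kernel the same sum lives in the lifted feature space phi(x); since a degree-d polynomial expansion is finite, one workaround is to expand the features explicitly and fit a linear SVM on them, at the cost of a (possibly very) wide feature matrix.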
0 | 27,070,113 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-21T20:36:00.000 | 0 | 1 | 0 | medium datasets under source control | 27,069,898 | 1.2 | python,git,svn,csv | If you're asking whether it would be efficient to put your datasets under version control, based on your description of the data, I believe the answer is yes. Both Mercurial and Git are very good at handling thousands of text files. Mercurial might be a better choice for you, since it is written in python and is easier to learn than Git. (As far as I know, there is no good reason to adopt Subversion for a new project now that better tools are available.)
If you're asking whether there's a way to speed up your application's writes by borrowing code from a version control system, I think it would be a lot easier to make your application modify existing files in place. (Maybe that's what you're doing already? It's not clear from what you wrote.) | This is more of a general question about how feasible is it to store data sets under source control.
I have 20,000 CSV files with numeric data that I update every day. The overall size of the directory is 100 MB or so, stored on a local disk on an ext4 partition.
Each day's changes should be diffs of about 1 KB.
I may have to issue corrections to the data, so I am considering versioning the whole directory: one top-level dir contains 10 level-1 dirs, each containing 10 level-2 dirs, each containing 200 CSV files.
The data is written to the files by Python processes (pandas frames).
The question is about the performance of writes where the deltas are this small compared to the entire data.
svn and git come to mind, and they have Python modules to use them.
What works best?
Other solutions are, I am sure, possible, but I would stick to keeping the data in files as is... | 0 | 1 | 50
0 | 27,140,986 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-11-23T07:50:00.000 | 2 | 2 | 0 | Finding the most similar documents (nearest neighbours) from a set of documents | 27,086,753 | 0.197375 | python,scikit-learn,nltk | You should learn about hashing mechanisms that can be used to calculate similarity between documents.
Typical hash functions are designed to minimize collisions, and they map near duplicates to very different hash keys. In cryptographic hash functions, if the data is changed by one bit, the hash key changes to a completely different one.
The goal of similarity hashing is the opposite of cryptographic hashing: create a hash function for which very similar documents map to very similar hash keys, or even to the same key. The bitwise Hamming distance between keys is then a measure of similarity.
After calculating the hash keys, the keys can be sorted to speed up near-duplicate detection from O(n²) to O(n log n). A threshold can be defined and tuned by analysing accuracy on training data.
Simhash, MinHash and locality-sensitive hashing are three implementations of hash-based methods. You can google these for more information; there are a lot of research papers related to this topic...
I'm currently using NLTK to process the contents of the document and get ngrams, but from there I'm not sure what approach I should take to calculate the similarity between documents.
I read about using tf-idf and cosine similarity, however because of the vast number of topics I'm expecting a very high number of unique tokens, so multiplying two very long vectors might be a bad way to go about it. Also 80,000 documents might call for a lot of multiplication between vectors. (Admittedly, it would only have to be done once though, so it's still an option).
Is there a better way to get the distance between documents without creating a huge vector of n-grams? Spearman correlation? Or would a more low-tech approach, like taking the top n-grams and finding other documents sharing the same top-k n-grams, be more appropriate? I just feel like I must be going about the problem in the most brute-force way possible if I need to multiply 10,000-element vectors together roughly 3.2 billion times (the sum of the arithmetic series 79,999 + 79,998 + ... + 1).
Any advice for approaches or what to read up on would be greatly appreciated. | 0 | 1 | 2,909 |
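As a concrete illustration of the similarity-hashing idea described above, here is a toy MinHash sketch (not a production implementation; the 64 salted md5 "hash functions" and 3-word shingles are assumptions for the example). The fraction of matching signature positions estimates the Jaccard similarity of the shingle sets:

```python
import hashlib

def shingles(text, k=3):
    """k-word shingles of a document."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def minhash_signature(shingle_set, num_hashes=64):
    """One min-hash per salted function; similar sets give similar signatures."""
    return [min(int(hashlib.md5(f"{i}:{s}".encode()).hexdigest(), 16)
                for s in shingle_set)
            for i in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    # Fraction of agreeing positions estimates Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Signatures are short and fixed-length, so comparing 80,000 documents becomes much cheaper than comparing full tf-idf vectors, and LSH banding can avoid the all-pairs comparison entirely.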
0 | 27,095,449 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2014-11-23T16:30:00.000 | 0 | 1 | 0 | Using cross-validation to find the right value of k for the k-nearest-neighbor classifier | 27,091,319 | 0 | ipython,classification,decision-tree,nearest-neighbor,cross-validation | I assume here that you mean the value of k that returns the lowest error in your wine quality model.
I find that a good k can depend on your data. Sparse data might prefer a lower k, whereas larger datasets might work well with a larger k. In most of my work, a k between 5 and 10 has been quite good for problems with a large number of cases.
Trial and error can at times be the best tool here, but it shouldn't take too long to see a trend in the modelling error.
Hope this helps! | I am working on a UCI data set about wine quality. I have applied multiple classifiers and k-nearest neighbors is one of them. I was wondering if there is a way to find the exact value of k for nearest neighbors using 5-fold cross-validation. And if yes, how do I apply that? And how can I get the depth of a decision tree using 5-fold CV?
Thanks! | 0 | 1 | 185 |
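One standard way to pick k with 5-fold CV is scikit-learn's GridSearchCV; a sketch on a stand-in dataset (swap in the wine-quality features and labels):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)   # stand-in for the wine-quality data

grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={"n_neighbors": range(1, 21)},
                    cv=5)           # 5-fold cross-validation
grid.fit(X, y)
best_k = grid.best_params_["n_neighbors"]
```

The decision-tree depth question works the same way: use DecisionTreeClassifier with param_grid={"max_depth": range(1, 11)}.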
0 | 27,152,696 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2014-11-23T17:15:00.000 | 3 | 1 | 0 | Reading time from analog clock using Hough Line Transform in Python (OpenCV) | 27,091,836 | 1.2 | python,opencv,hough-transform | I've managed to solve my problem.
I've been trying to use the standard Hough Line Transform where I was supposed to use the probabilistic Hough Transform. The moment I got it, I grouped lines drawn along similar functions, sorted them by length, and used arcsine as well as the locations of their ends to find the precise degrees at which the hands stood. | I've been trying to write a program that locates a clock's face in a picture and then proceeds to read the time from it. Locating works fairly well; reading the time, not so much.
The cv2.HoughLines function returns the angles at which lines lie (measured from the top of the image) and their distances from the upper-left corner of the image. After a bit of tweaking I've managed to convince my code to find a single line for each of the clock's hands, but as of now I remain unable to actually read the time from it.
Using appropriate formulas I could find intersection of those lines (middle of the clock) and then iterate along the hands in both directions at once. This could tell me the length of each hand (allowing me to tell them apart) as well as at which direction are they pointing. I'm fairly hesitant about implementing this solution though - not only does it seem somehow ugly but also creates certain risks. For example: problems with rounding could cause the program to check the wrong pixel and find the end of line prematurely.
So, would you kindly suggest an alternative solution? | 0 | 1 | 2,056 |
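A small sketch of the angle arithmetic behind the solution, independent of OpenCV (the center and tip coordinates below are hypothetical; in practice they would come from the intersection and endpoints of the cv2.HoughLinesP segments). Using arctan2 avoids the quadrant juggling of arcsine:

```python
import numpy as np

def hand_angle_degrees(center, tip):
    """Clockwise clock angle of a hand, in degrees from 12 o'clock."""
    dx = tip[0] - center[0]
    dy = center[1] - tip[1]          # image y grows downward, so flip it
    # arctan2(dx, dy) measures clockwise from the upward (12 o'clock) axis.
    return np.degrees(np.arctan2(dx, dy)) % 360

# Hypothetical clock center and hand-segment endpoint.
angle = hand_angle_degrees((100, 100), (150, 100))   # points toward 3 o'clock
```

The hour reading is then angle / 30 and the minute reading angle / 6, with the longer segment taken as the minute hand.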
0 | 27,368,753 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-25T23:31:00.000 | 1 | 1 | 0 | naive bayes feature vectors in pmml | 27,138,752 | 1.2 | python,machine-learning,scikit-learn,pmml | Since the PMML representation of the Naive Bayes model implements representing joint probabilities via the "PairCounts" element, one can simply replace that ratio with the probabilities output (not the log probability). Since the final probabilities are normalized, the difference doesn't matter. If the requirements involve a large number of proabilities which are mostly 0, the "threshold" attribute of the model can be used to set the default values for such probabilities. | I am trying to build my own pmml exporter for Naive Bayes model that I have built in scikit learn. In reading the PMML documentation it seems that for each feature vector you can either output the model in terms of count data if it is discrete or as a Gaussian/Poisson distribution if it is continous. But the coefficients of my scikit learn model are in terms of Empirical log probability of features i.e p(y|x_i). Is it possible to specify the Bayes input parameters in terms of these probability rather than counts? | 0 | 1 | 170 |
0 | 27,156,673 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-11-26T09:43:00.000 | 0 | 4 | 0 | How to find biggest sum of items not exceeding some value? | 27,145,789 | 0 | python,algorithm,mathematical-optimization,knapsack-problem,greedy | This problem can be phrased as a zero-one assignment problem, and solved with a linear programming package, such as GLPK, which can handle integer programming problems. The problem is to find binary variables x[i] such that the sum of x[i]*w[i] is as large as possible, and less than the prescribed limit, where w[i] are the values which are added up.
My advice is to use an existing package; combinatorial optimization algorithms are generally very complex. There is probably a Python interface for some package you can use; I don't know if GLPK has such an interface, but probably some package does. | How to find the biggest sum of items not exceeding some value? For example, I have 45 values like this: 1.0986122886681098, 1.6094379124341003, 3.970291913552122, 3.1354942159291497, 2.5649493574615367. I need to find the biggest possible combination not exceeding 30.7623.
I can't use brute force to find all combinations, as the number of combinations would be huge. So I need to use some greedy algorithm. | 0 | 1 | 1,217
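If an LP package is not an option, a reachable-sums dynamic program is a workable middle ground between brute force and a greedy heuristic (greedy is not optimal here). Rounding sums to a fixed precision, a tunable assumption, bounds the number of states, at the cost of making the result approximate:

```python
def best_subset_sum(values, limit, precision=6):
    # Map rounded sum -> (exact sum, chosen items); start from the empty set.
    states = {0.0: (0.0, [])}
    for v in values:
        # Snapshot so each item is used at most once per subset (0/1 choice).
        for exact, items in list(states.values()):
            s = exact + v
            if s > limit:
                continue
            key = round(s, precision)
            if key not in states or states[key][0] < s:
                states[key] = (s, items + [v])
    # Largest achievable sum not exceeding the limit.
    return max(states.values(), key=lambda t: t[0])

best_sum, subset = best_subset_sum(
    [1.0986, 1.6094, 3.9702, 3.1354, 2.5649], limit=7.0)
```

For 45 values and a limit near 30, the number of distinct rounded sums stays manageable, unlike the 2^45 raw subsets.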
0 | 27,156,595 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2014-11-26T15:14:00.000 | 1 | 1 | 0 | Plot two images side by side with skimage | 27,152,624 | 1.2 | python-2.7,image-processing,plot,scikit-image | See skimage.feature.plot_matches, pass empty list of keypoints and matches if you only want to plot the images without points. | Looking up at different feature matching tutorials I've noticed that it's tipical to illustrate how the matching works by plotting side by side the same image in two different version (one normal and the other one rotated or distorted). I want to work on feature matching by using two distinct images (same scene shot from slightly different angles). How do I plot them together side by side?
I'm willing to use skimage on python 2.7 | 0 | 1 | 722 |
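If the goal is simply to show the two photos next to each other (with match lines drawn afterwards or not at all), plain matplotlib subplots also work; plot_matches itself just draws onto a single axis you hand it. A minimal sketch with stand-in images (the Agg backend line is only there to keep this runnable headlessly):

```python
import matplotlib
matplotlib.use("Agg")        # headless backend; drop for interactive use
import matplotlib.pyplot as plt
import numpy as np

# Two stand-in "photos" of the same scene from different angles.
img_left = np.random.rand(120, 160)
img_right = np.random.rand(120, 160)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.imshow(img_left, cmap="gray")
ax2.imshow(img_right, cmap="gray")
for ax in (ax1, ax2):
    ax.axis("off")
fig.tight_layout()
```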
0 | 27,238,940 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-11-26T19:10:00.000 | 0 | 1 | 0 | Append data to end of human-readable file Python | 27,157,087 | 0 | python,numpy,save,append | Thanks for your thoughts. These two options came to my mind too but I need the mixture of both: My specific use case requires the file to be human readable - as far as I know pickling does not provide that and saving to a dictionary destroys the order. I need the data to be dropped as they need to be manipulated in other scripts before the next data is produced.
The not very elegant way I am doing it now: numpy.savetxt() to files labeled by the run, and bash "cat" applied in the end. | In one run my python script calculates and returns the results for the variables A, B, C.
I would like to append the results run by run, row by row to a human-readable file.
After the runs, I want to read the data back as numpy arrays of the columns.
i | A B C
1 | 3 4 6
2 | 4 6 7
And maybe even access the row where e.g. A equals 3 specifically. | 0 | 1 | 105 |
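The savetxt-then-cat workflow can be collapsed into a single append per run by passing an open file handle to numpy.savetxt; the file stays human-readable and loadtxt skips the header comment on the way back. A sketch (file path, header, and values are illustrative):

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "results.txt")

# Each run appends one row; the header comment line is written once.
for run, (a, b, c) in enumerate([(3, 4, 6), (4, 6, 7)], start=1):
    with open(path, "a") as fh:
        if run == 1:
            fh.write("# A B C\n")
        np.savetxt(fh, [[a, b, c]], fmt="%g")

data = np.loadtxt(path)                 # '#' lines are skipped automatically
rows_where_A_is_3 = data[data[:, 0] == 3]   # e.g. select rows where A == 3
```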
0 | 27,162,750 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2014-11-27T03:42:00.000 | 1 | 3 | 0 | Python: How to check that two CSV files with header rows contain same information disregarding row and column order? | 27,162,717 | 0.066568 | python,unit-testing,csv | Store the first CSV in a dictionary, using the header fields as keys and the row values as values;
then read the second file and check its rows against that dictionary. | For unit testing a method, I want to compare a CSV file generated by that method (the actual result) against a manually created CSV (the expected result).
The files are considered equal, if the fields of the first row are exactly the same (i.e. the headers), and if the remaining row contain the same information.
The following things must not matter: order of the columns, order of the rows (except for the header row), empty lines, end-of-line encoding, encoding of boolean values. | 0 | 1 | 1,472 |
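A stdlib sketch of the dictionary idea, extended so that row order, column order, and blank lines (which csv.DictReader skips) are all ignored; end-of-line and boolean-encoding differences would still need explicit normalization before the comparison:

```python
import csv
import io

def csv_rows(text):
    """Rows as order-independent (header, value) pair sets, sorted canonically."""
    reader = csv.DictReader(io.StringIO(text))
    rows = [frozenset(row.items()) for row in reader]
    return sorted(rows, key=sorted)   # canonical row order for comparison

def csv_equal(expected_text, actual_text):
    return csv_rows(expected_text) == csv_rows(actual_text)
```

Frozensets of (column, value) pairs make column order irrelevant; sorting the row list makes row order irrelevant.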
0 | 27,177,167 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2014-11-27T08:43:00.000 | 1 | 2 | 0 | Elastic Search query filtering | 27,166,357 | 0.099668 | python,search,curl,elasticsearch | The search example above looks correct. Try lowercasing "Data Analyst" to "data analyst".
If that doesn't help, post your mappings, the query you are firing, and the response you are getting. | I have uploaded some data into an Elasticsearch server as "job id, job place, job req, job desc". My index is my_index and doctype = job_list.
I need to write a query to find a particular term, say "Data Analyst", and it should give me back the matching results with only a specified field like "job place".
I.e., the term "Data Analyst" should match in the documents, and I need to get back the "job place" information only.
Any help? I tried curl, but it is not working. A Python solution would be good. | 0 | 1 | 5,426
0 | 27,378,733 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2014-11-27T19:37:00.000 | 1 | 1 | 0 | Training a LDA model with gensim from some external tf-idf matrix and term list | 27,177,721 | 0.197375 | python-3.x,tf-idf,lda,topic-modeling,gensim | id2word must map each id (integer) to term (string).
In other words, it must support id2word[123] == 'koala'.
A plain Python dict is the easiest option. | I have a tf-idf matrix already, with rows for terms and columns for documents. Now I want to train a LDA model with the given terms-documents matrix. The first step seems to be using gensim.matutils.Dense2Corpus to convert the matrix into the corpus format. But how to construct the id2word parameter? I have the list of the terms (#terms==#rows) but I don't know the format of the dictionary so I cannot construct the dictionary from functions like gensim.corpora.Dictionary.load_from_text. Any suggestions? Thank you. | 0 | 1 | 505 |
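Since id2word is just an id-to-term mapping, it can be built directly from the term list whose order matches the rows of the tf-idf matrix; a tiny sketch (toy terms):

```python
# Term list whose order matches the rows of the tf-idf matrix.
terms = ["koala", "wombat", "kangaroo"]

# id2word supports id2word[i] == term; a plain dict is enough, and it can
# then be passed as the id2word argument of gensim's LdaModel.
id2word = dict(enumerate(terms))
```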
1 | 27,192,613 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-28T16:11:00.000 | 3 | 1 | 0 | Generate a random point in space (x, y, z) with a boundary | 27,192,467 | 1.2 | python,random,spatial,coordinate | There's a lot that's unspecified in your question, such as what distribution you want to use. For the sake of this answer, I'll assume a uniform distribution.
The straightforward way to handle an arbitrary volume uniform distribution is to choose three uniformly random numbers as coordinates in the range of the bounding rectilinear solid enclosing your volume, then check to see if the chosen coordinate lies within the volume. If the coordinate is not within the volume, discard it and generate a new one.
If this is not sufficient, due to its non-constant performance or whatever other reason, you'll need to constrain your problem (say, to only tetrahedra) and do a bunch of calculus to compute the necessary random distributions and model the dependencies between the axes.
For example, you could start with the x axis and integrate the area of the intersecting shapes between the volume and the plane where x = t. This will give you a function p(x) which, when normalized, is the probability density function along the X axis. (If you want nonuniform distribution, you need to put that in the integrated function, too.)
Then you need to do another set of integrals to determine p(y|x0), the probability distribution function on the Y axis given the chosen x coordinate. Finally, you'll need to determine p(z|x0,y0), the probability distribution function on the z axis.
Once you have all this, you need to use whatever random number algorithm you have to choose random numbers in these distributions: first choose x0 from p(x), then use that to choose y0 from p(y|x0), then use those to choose z0 from p(z|x0,y0), and you'll have your result (x0, y0, z0).
There are various algorithms to determine if a point is outside a volume, but a simple one could be:
For each polygon face:
  - Compute its characteristic plane:
    - Use the cross product to compute the plane normal.
    - One vertex of the face and the plane normal are sufficient to define the plane.
    - Remember the right-hand rule and choose the points so that the plane normal consistently points into or out of the polyhedron.
  - Check that the random point lies in the "inside" half-space of that plane:
    - A half-space is the set of all points on one side of the plane.
    - Compute the vector from the plane vertex to the random point.
    - Compute the dot product between the plane normal and this vector.
    - If you defined the plane normals to point out of the polyhedron, then all dot products must be negative.
    - If you defined the plane normals to point into the polyhedron, then all dot products must be positive.
Note that you only have to recompute characteristic planes when the volume moves, not for each random point.
There are probably much better algorithms out there, and their discussion is outside the scope of this question and answer. This algorithm is what I could come up with with no research, and is probably as good as a bubble sort. | I would like to generate a uniformly random coordinate that is inside a convex bounding box defined by its (at least) 4 vertices (for the case of a tetrahedron).
Can someone suggest an algorithm that I can use?
Thanks!
If a point is generated in a bounding box, how do you detect whether or not it is outside the geometry but inside the box? | 0 | 1 | 1,463 |
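A numpy sketch of the rejection sampling and half-space test described above (orienting each normal away from the centroid relies on convexity, which the question guarantees; the unit tetrahedron at the end is a worked example):

```python
import numpy as np

def face_planes(vertices, faces):
    """(outward normal, face vertex) per face; valid for convex polyhedra."""
    centroid = vertices.mean(axis=0)
    planes = []
    for i, j, k in faces:
        v0, v1, v2 = vertices[i], vertices[j], vertices[k]
        n = np.cross(v1 - v0, v2 - v0)        # right-hand-rule normal
        if np.dot(n, centroid - v0) > 0:      # flip so it points outward
            n = -n
        planes.append((n, v0))
    return planes

def inside(point, planes):
    # Inside iff the point sits on the inner side of every face plane.
    return all(np.dot(n, point - v0) <= 0 for n, v0 in planes)

def random_point_inside(vertices, faces, seed=0):
    planes = face_planes(vertices, faces)
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    rng = np.random.default_rng(seed)
    while True:                               # rejection sampling in the box
        p = rng.uniform(lo, hi)
        if inside(p, planes):
            return p

# Worked example: the unit tetrahedron.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tet_faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
p = random_point_inside(verts, tet_faces)
```

Note that face_planes only needs recomputing when the volume moves, not per sample; the accepted points are uniform within the volume.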
0 | 27,195,171 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2014-11-28T16:35:00.000 | 1 | 1 | 1 | To run python script in apache spark/Storm | 27,192,852 | 0.197375 | python,hadoop,apache-spark | First and foremost what are you trying to achieve? What does running on Hadoop technology mean to you? If the goal is to work with a lot of data, this is one thing, if it's to parallelize the algorithm, it's another. My guess is you want both.
First thing is: is the algorithm parallelizable? Can it run on multiple pieces of data at the same time and gather them all in the end to make the final answer? Some algorithms are not, especially if they are recursive and require previously computed data to process the next.
In any case, running on Hadoop means running using Hadoop tools, whether it is Spark, Storm or other services that can run on Python, taking advantage of Hadoop means writing your algorithm for it.
If your algorithm is parallelizable, then you can likely take the piece that processes one unit of data and adapt it to run with Spark or Storm on huge datasets.
Option 1: Hadoop streaming. But then I need to convert this Python script into a mapper and reducer. Is there any other way?
Option 2: Run this Python script through Storm. But I am using Cloudera, which doesn't have Storm, so either I need to install Storm on Cloudera or use Spark. If I install Storm on Cloudera, is that the better option?
Option 3: Run this Python script through Spark (Cloudera). Is that possible?
This algorithm is not for real-time processing, but we want to process it with Hadoop technology.
Please help with other suitable solution. | 0 | 1 | 1,217 |
0 | 27,239,565 | 0 | 1 | 0 | 0 | 1 | true | 5 | 2014-11-30T16:04:00.000 | 4 | 1 | 0 | Integrating exisiting Python Library to Anaconda | 27,215,170 | 1.2 | python,anaconda | There is no need to remove your system Python. Anaconda sits alongside it. When it installs, it adds a line to your .bashrc that adds the Anaconda directory first in your PATH. This means that whenever you type python or ipython in the terminal, it will use the Anaconda Python (and the Anaconda Python will automatically use all the Anaconda Python libraries like numpy and scipy rather than the system ones). You should leave the system Python alone, as some system tools use it. The important points are:
Whichever Python is first on your PATH is what gets used when you use Python in the terminal. If you create a conda environment with conda and use source activate it will put that environment first on the PATH.
Each Python (Anaconda or the system) will use its own libraries and not look at the others (this is not true if you set the PYTHONPATH environment variable, but I recommend that you don't). | I've been installing few Library/Toolkit for Python like NLTK, SciPy and NumPy on my Ubuntu. I would like to try to use Anaconda distribution though. Should I remove my existing libraries before installing Anaconda? | 0 | 1 | 3,390 |
0 | 27,259,240 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-11-30T19:08:00.000 | 0 | 1 | 0 | network animation with static nodes in python or even webgl | 27,217,051 | 1.2 | python,opengl,webgl,ipython,vispy | This looks like a good use-case for Vispy indeed. You'd need to use a PointVisual for the nodes, and a LineVisual for the edges. Then you can update the edges in real time as the simulation is executed.
The animation would also work in the IPython notebook with WebGL.
Note that other graphics toolkits might also work for you (although you'd not necessarily have GPU acceleration through OpenGL) if you specify static positions for the nodes. I think you can fix the nodes positions with d3js or networkx instead of relying on an automatic layout algorithm. | So I have a particular task I need help with, but I was not sure how to do it. I have a model for the formation of ties between a fixed set of network nodes. So I want to set up a window or visualization that shows the set of all nodes on some sort of 2-dimensional or 3-dimensional grid. Then for each timestep, I want to update the visualization window with the latest set of ties between nodes. So I would start with a set of nodes positioned in space, and then with each timestep the visualization will gradually add the new edges.
The challenge here is that I know in something like networkx, redrawing the network at each timestep won't work. Many of the common network display algorithms randomly place nodes so as to maximize the distance between them and better show the edges. So if I were to redraw the network at each timestep, the nodes would end up in different locations each time, and it would be hard to identify the pattern of network growth. That is why I want a set of static nodes, so I can see how the edges get added at each timestep.
I am looking to visualize about 100 nodes at a time. So I will start with a small number of nodes like 20 or so, and gradually build up to 100 nodes. After the model is validated, then I would build up to 1000 or 2000 nodes. Of course it is hard to visualize 1000 or 2000 node network, that is why I just want to make sure I can visualize the network when I just have 100 nodes in the simulation.
I was not sure if I could do this in webgl or something, or if there is a good way to do this in python. I can use Vispy for communication between python and webgl if needed. | 0 | 1 | 308 |
0 | 27,256,151 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2014-12-02T17:36:00.000 | 1 | 1 | 0 | Training a Machine Learning predictor | 27,255,560 | 1.2 | python,machine-learning,language-features,feature-selection | You either need to under-sample the bigger class (take a small random sample to match the size of the smaller class), over-sample the smaller class (bootstrap sample), or use an algorithm that supports unbalanced data - and for that you'll need to read the documentation.
You need to turn your words into a word vector. Columns are all the unique words in your corpus. Rows are the documents. Cell values are one of: whether the word appears in the document, the number of times it appears, the relative frequency of its appearance, or its TFIDF score. You can then have these columns along with your other non-word columns.
Now you probably have more columns than rows, meaning you'll get a singularity with matrix-based algorithms, in which case you need something like SVM or Naive Bayes. | I have been trying to build a prediction model using a user’s data. Model’s input is documents’ metadata (date published, title etc) and document label is that user’s preference (like/dislike). I would like to ask some questions that I have come across hoping for some answers:
There are way more liked documents than disliked ones. I read somewhere that if somebody trains a model using far more inputs of one label than the other, it affects the performance in a bad way (the model tends to classify everything to the label/outcome that has the majority of inputs).
Is it possible to have the input to an ML algorithm, e.g. logistic regression, be hybrid in terms of numbers and words, and how could that be done? Something like:
input = [18,23,1,0,’cryptography’] with label = [‘Like’]
Also, can we use a vector (that represents a word, using tf-idf etc.) as an input feature (e.g. a 50-dimensional vector)?
In order to construct a prediction model using textual data, is the only way to do so by deriving a dictionary of every word mentioned in our documents and then constructing a binary input that dictates whether a term is mentioned or not? Using such a version, though, we lose the weight of the term in the collection, right?
Can we use something like a word2vec vector as a single input in a supervised learning model?
Thank you for your time. | 0 | 1 | 232 |
0 | 27,308,244 | 0 | 1 | 0 | 0 | 2 | true | 6 | 2014-12-05T03:25:00.000 | 8 | 2 | 1 | /usr/bin/python vs /opt/local/bin/python2.7 on OS X | 27,308,234 | 1.2 | python,macos,python-2.7,numpy,matplotlib | Points to keep in mind about Python
If a script foobar.py starts with #!/usr/bin/env python, then you will always get the OS X Python. That's the case even though MacPorts puts /opt/local/bin ahead of /usr/bin in your path. The reason is that MacPorts uses the name python2.7. If you want to use env and yet use MacPorts Python, you have to write #!/usr/bin/env python2.7.
If a script foobar.py starts explicitly with #!/usr/bin/python or with #!/opt/local/bin/python2.7, then the corresponding Python interpreter will be used.
What to keep in mind about pip
To install pip for /usr/bin/python, you need to run sudo /usr/bin/easy_install pip. You then call pip (which will not be installed by easy_install in /usr/bin/pip, but rather in /usr/local/bin/pip)
To install pip for /opt/local/bin/python2.7, you need to run sudo port install py27-pip. You would then call pip-2.7. You will get the pip in /opt/local/bin. Be careful, because if you type pip2.7 you will get /usr/local/bin/pip2.7 (the OS X pip).
Installing networkx and matplotlib
To install networkx for the OS X Python you would run sudo /usr/local/bin/pip install networkx. I don't know how to install matplotlib on OS X Lion. It may be that OS X has to stick to numpy 1.5.1 because it uses it internally.
To install networkx and matplotlib for MacPorts-Python, call sudo pip-2.7 install networkx and sudo pip-2.7 install matplotlib. matplotlib installs with a lot of warnings, but it passes. | Can you shed some light on the interaction between the Python interpreter distributed with OS X and the one that can be installed through MacPorts?
While installing networkx and matplotlib I am having difficulties with the interaction of /usr/bin/python and /opt/local/bin/python2.7. (The latter is itself a soft pointer to /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7)
How can I be certain which Python, pip, and Python libraries I am using at any one time?
More importantly, it appears that installing matplotlib is not possible on Lion. It fails with Requires numpy 1.6 or later to build. (Found 1.5.1). If I upgrade by running sudo pip install --upgrade numpy, it does not help. Subsequently attempting to install matplotlib (sudo /usr/local/bin/pip install matplotlib) still fails with the same (Requires numpy 1.6...) message. How can I install matplotlib? | 0 | 1 | 10,161 |
0 | 27,400,616 | 0 | 1 | 0 | 0 | 2 | false | 6 | 2014-12-05T03:25:00.000 | 0 | 2 | 1 | /usr/bin/python vs /opt/local/bin/python2.7 on OS X | 27,308,234 | 0 | python,macos,python-2.7,numpy,matplotlib | May I also suggest using the Continuum Analytics "Anaconda" distribution. One benefit of doing so is that you won't then need to modify the standard OS X Python environment. | Can you shed some light on the interaction between the Python interpreter distributed with OS X and the one that can be installed through MacPorts?
While installing networkx and matplotlib I am having difficulties with the interaction of /usr/bin/python and /opt/local/bin/python2.7. (The latter is itself a soft pointer to /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7)
How can I be certain which Python, pip, and Python libraries I am using at any one time?
More importantly, it appears that installing matplotlib is not possible on Lion. It fails with Requires numpy 1.6 or later to build. (Found 1.5.1). If I upgrade by running sudo pip install --upgrade numpy, it does not help. Subsequently attempting to install matplotlib (sudo /usr/local/bin/pip install matplotlib) still fails with the same (Requires numpy 1.6...) message. How can I install matplotlib? | 0 | 1 | 10,161 |
0 | 27,312,169 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2014-12-05T08:45:00.000 | 1 | 1 | 0 | How to convert numpy distribution to an array? | 27,311,941 | 1.2 | python,arrays,numpy | Just putting a list(...) call around your call to normal will turn it into a regular Python list. | I am using the function numpy.random.normal(0,0.1,20) to generate some numbers. Given below is the output I get from the function. The problem is I want these numbers to be in an array format.
[ 0.13500488 0.11023982 0.09908623 -0.01437589 0.00619559 -0.17200946
-0.00501746 0.07422642 0.1226481 -0.01422786 -0.02986386 -0.02507335
-0.12959589 -0.09346143 -0.01287027 0.02656667 -0.07538371 -0.10534301
-0.02208811 -0.14634084]
Can anyone help me? | 0 | 1 | 330 |
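Worth noting: numpy.random.normal already returns a numpy.ndarray; the bracket-less, comma-less display above is just how numpy prints arrays. Wrapping the result in list(...) is only needed when a plain Python list is genuinely required:

```python
import numpy as np

values = np.random.normal(0, 0.1, 20)   # this is already a numpy.ndarray
as_list = list(values)                  # plain Python list, if that's needed
```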
0 | 27,370,090 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2014-12-06T01:02:00.000 | 0 | 2 | 0 | OS X not using most recent NumPY version | 27,327,104 | 1.2 | python,macos,numpy | The new NumPY version would install (via pip) into the System path, where it wasn't being recognized by Python. To solve this I ran pip install --user numpy==1.7.1 to specify I want NumPY version 1.7.1 on my Python (user) path.
:) | Trying to update NumPY by running pip install -U numpy, which yields "Requirement already up-to-date: numpy in /Library/Python/2.7/site-packages". Then checking the version with import numpy and numpy.version.version yields '1.6.2' (old version). Python is importing numpy via the path '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy'. Please help me out here. | 0 | 1 | 708 |
0 | 27,328,371 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2014-12-06T01:02:00.000 | 0 | 2 | 0 | OS X not using most recent NumPY version | 27,327,104 | 0 | python,macos,numpy | You can remove the old version of numpy from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy.
Just delete the numpy package from there and then try to import numpy from the Python shell. | Trying to update NumPy by running pip install -U numpy, which yields "Requirement already up-to-date: numpy in /Library/Python/2.7/site-packages". Then checking the version with import numpy and numpy.version.version yields '1.6.2' (old version). Python is importing numpy via the path '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy'. Please help me out here.
0 | 27,384,466 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2014-12-09T16:26:00.000 | 1 | 2 | 0 | fuzzy matching lots of strings | 27,383,896 | 1.2 | python,sql,r,fuzzy-search,fuzzy-logic | That is exactly what I am facing at my new job daily (but line counts are a few million). My approach is to:
1) find a set of unique strings by using p = unique(a)
2) remove punctuation, split strings in p by whitespace, make a table of word frequencies, create a set of rules and use gsub to "recover" abbreviations, mistyped words, etc. E.g. in your case "AUTH" should be recovered back to "AUTHORITY", "UNIV" -> "UNIVERSITY" (or vice versa)
3) recover typos if I spot them by eye
4) advanced: reorder words in strings (the English is too often improper) to see if two or more strings are identical apart from word order (e.g. "10pack 10oz" and "10oz 10pack"). | I've got a database with property owners; I would like to count the number of properties owned by each person, but am running into standard mismatch problems:
REDEVELOPMENT AUTHORITY vs. REDEVELOPMENT AUTHORITY O vs. PHILADELPHIA REDEVELOPMEN vs. PHILA. REDEVELOPMENT AUTH
COMMONWEALTH OF PENNA vs. COMMONWEALTH OF PENNSYLVA vs. COMMONWEALTH OF PA
TRS UNIV OF PENN vs. TRUSTEES OF THE UNIVERSIT
From what I've seen, this is a pretty common problem, but my problem differs from those with solutions I've seen for two reasons:
1) I've got a large number of strings (~570,000), so computing the 570000 x 570000 matrix of edit distances (or other pairwise match metrics) seems like a daunting use of resources
2) I'm not focused on one-off comparisons--e.g., as is most common for what I've seen from big data fuzzy matching questions, matching user input to a database on file. I have one fixed data set that I want to condense once and for all.
Are there any well-established routines for such an exercise? I'm most familiar with Python and R, so an approach in either of those would be ideal, but since I only need to do this once, I'm open to branching out to other, less familiar languages (perhaps something in SQL?) for this particular task. | 0 | 1 | 963 |
0 | 27,385,088 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2014-12-09T16:26:00.000 | 1 | 2 | 0 | fuzzy matching lots of strings | 27,383,896 | 0.099668 | python,sql,r,fuzzy-search,fuzzy-logic | You can also use agrep() in R for fuzzy name matching, by giving a percentage of allowed mismatches. If you pass it a fixed dataset, then you can grep for matches out of your database. | I've got a database with property owners; I would like to count the number of properties owned by each person, but am running into standard mismatch problems:
REDEVELOPMENT AUTHORITY vs. REDEVELOPMENT AUTHORITY O vs. PHILADELPHIA REDEVELOPMEN vs. PHILA. REDEVELOPMENT AUTH
COMMONWEALTH OF PENNA vs. COMMONWEALTH OF PENNSYLVA vs. COMMONWEALTH OF PA
TRS UNIV OF PENN vs. TRUSTEES OF THE UNIVERSIT
From what I've seen, this is a pretty common problem, but my problem differs from those with solutions I've seen for two reasons:
1) I've got a large number of strings (~570,000), so computing the 570000 x 570000 matrix of edit distances (or other pairwise match metrics) seems like a daunting use of resources
2) I'm not focused on one-off comparisons--e.g., as is most common for what I've seen from big data fuzzy matching questions, matching user input to a database on file. I have one fixed data set that I want to condense once and for all.
Are there any well-established routines for such an exercise? I'm most familiar with Python and R, so an approach in either of those would be ideal, but since I only need to do this once, I'm open to branching out to other, less familiar languages (perhaps something in SQL?) for this particular task. | 0 | 1 | 963 |
0 | 27,387,097 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2014-12-09T16:53:00.000 | 3 | 1 | 0 | OpenCV python on raspberry | 27,384,395 | 1.2 | python,opencv,raspberry-pi | Check the API docs for 3.0. Some python functions return more parameters or in a different order.
example: cv2.cv.CV_HAAR_SCALE_IMAGE was replaced with cv2.CASCADE_SCALE_IMAGE
or: findContours, which in 3.0 also returns the modified image, so
(cnts, _) = cv2.findContours(...)
becomes
(modImage, cnts, _) = cv2.findContours(...) | I've installed the opencv python module on my raspberry and everything was working fine. Today I've compiled a C++ version of OpenCV and now when I want to run my python script I get this error:
Traceback (most recent call last):
File "wiz.py", line 2, in
import cv2.cv as cv
ImportError: No module named cv | 0 | 1 | 1,523 |
0 | 27,412,604 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2014-12-10T04:58:00.000 | 2 | 3 | 0 | binary document classification | 27,393,613 | 0.132549 | python,machine-learning,nlp,nltk | I generally recommend using Scikit as Slater suggested. It's more scalable than NLTK. For this task, using a Naive Bayes classifier or a Support Vector Machine is your best bet. You are dealing with binary classification, so you don't have multiple classes.
As for the features that you should extract, try unigrams, bigrams, trigrams, and TF-IDF features. Also, LDA might turn out to be useful, but start with the easier ones such as unigrams.
This also depends on the type and length of texts you are dealing with. Document classification has been around for more than a decade and there are so many good papers that you could find useful.
Let me know if you have any further questions. | I know this is a very vague question, but I'm trying to figure out the best way to do document classification. I have two sets: training and testing. The training set is a set of documents, each labeled 1 or 0. The documents are labeled 1 if it is an informative summary and 0 if it is not. I'm trying to create a supervised classifier. I can't decide which nlp toolkit to use. I'm thinking nltk. Do you have any suggestions? I have to write the classifier in python. Also, any specific types of classifiers? I've been doing research but can't seem to get a good answer. | 0 | 1 | 211
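A minimal sketch of the Naive Bayes suggestion from the answer above, using scikit-learn with unigram count features; the toy documents and labels below are made up purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# made-up training data: 1 = informative summary, 0 = not
train_docs = ["concise summary of key findings",
              "detailed overview of the main results",
              "random unrelated chatter",
              "off topic rambling text"]
train_labels = [1, 1, 0, 0]

# unigram count features (bigrams/TF-IDF would be the next step)
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_docs)

clf = MultinomialNB()
clf.fit(X_train, train_labels)

# classify a new document with the same vectorizer
X_test = vectorizer.transform(["summary of findings"])
prediction = clf.predict(X_test)
```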
0 | 38,023,538 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2014-12-10T16:43:00.000 | 5 | 3 | 0 | import sklearn not working in PyCharm | 27,406,345 | 0.321513 | python,scikit-learn,pycharm | This worked for me:
In my PyCharm Community Edition 5.0.4: Preferences -> Project Interpreter -> check whether the sklearn package is installed for the current project interpreter; if not, install it. | I installed numpy, scipy and scikit-learn using pip on Mac OS. However, in PyCharm, all imports work except when I try importing sklearn. I tried doing it in the Python shell and it worked fine. Any ideas as to what is causing this?
Also, not sure if it is relevant, but I installed scikit-learn last.
The error I receive is unresolved reference | 0 | 1 | 10,664 |
0 | 27,422,973 | 0 | 1 | 0 | 0 | 2 | false | 10 | 2014-12-10T16:43:00.000 | 6 | 3 | 0 | import sklearn not working in PyCharm | 27,406,345 | 1 | python,scikit-learn,pycharm | I managed to figure it out: I had to go to the project interpreter and change the Python distribution, as it had defaulted to the OS-installed Python rather than my own installed distribution. | I installed numpy, scipy and scikit-learn using pip on Mac OS. However, in PyCharm, all imports work except when I try importing sklearn. I tried doing it in the Python shell and it worked fine. Any ideas as to what is causing this?
Also, not sure if it is relevant, but I installed scikit-learn last.
The error I receive is unresolved reference | 0 | 1 | 10,664 |
0 | 27,453,175 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2014-12-11T02:35:00.000 | 1 | 1 | 0 | Stop Spyder from importing modules like `numpy`, `pandas`, etc | 27,414,466 | 1.2 | python,spyder | (Spyder dev here) This is not possible. If Pandas is installed on the same Python installation where Spyder is, then Spyder will import Pandas to: a) report to its users the minimal version needed to view DataFrames in the Variable Explorer and b) import csv files as DataFrames.
The only solution I can suggest you is this:
Create a new virtualenv or conda environment
Install there Spyder and its dependencies, but not Pandas. Spyder dependencies can be checked under the menu Help > Optional dependencies
Start your virtualenv/conda env Spyder
Go to
Tools > Preferences > Console > Advanced Settings > Python executable
select the option Use the following Python interpreter and write (or select) there the path to the interpreter where you have Pandas installed (e.g. /usr/bin/python)
Start a new Python/IPython console and import pandas there. | When I start Spyder, it automatically imports pandas and numpy. Is it possible to have Spyder ignore these modules?
I see these are imported in multiple Spyderlib files. For example, pandas gets imported in spyderlib/widgets/importwizard.py, spyderlib/baseconfig.py, etc.
(I'm trying to debug something in pandas and I'd like to import it for the first time in a debugging session in Spyder) | 0 | 1 | 651 |
0 | 30,337,118 | 0 | 0 | 0 | 0 | 1 | false | 44 | 2014-12-14T15:13:00.000 | 25 | 4 | 0 | How to use Gensim doc2vec with pre-trained word vectors? | 27,470,670 | 1 | python,nlp,gensim,word2vec,doc2vec | Note that the "DBOW" (dm=0) training mode doesn't require or even create word-vectors as part of the training. It merely learns document vectors that are good at predicting each word in turn (much like the word2vec skip-gram training mode).
(Before gensim 0.12.0, there was the parameter train_words mentioned in another comment, which some documentation suggested would co-train words. However, I don't believe this ever actually worked. Starting in gensim 0.12.0, there is the parameter dbow_words, which works to skip-gram train words simultaneously with DBOW doc-vectors. Note that this makes training take longer – by a factor related to window. So if you don't need word-vectors, you may still leave this off.)
In the "DM" training method (dm=1), word-vectors are inherently trained during the process along with doc-vectors, and are likely to also affect the quality of the doc-vectors. It's theoretically possible to pre-initialize the word-vectors from prior data. But I don't know any strong theoretical or experimental reason to be confident this would improve the doc-vectors.
One fragmentary experiment I ran along these lines suggested the doc-vector training got off to a faster start – better predictive qualities after the first few passes – but this advantage faded with more passes. Whether you hold the word vectors constant or let them continue to adjust with the new training is also likely an important consideration... but which choice is better may depend on your goals, data set, and the quality/relevance of the pre-existing word-vectors.
(You could repeat my experiment with the intersect_word2vec_format() method available in gensim 0.12.0, and try different levels of making pre-loaded vectors resistant-to-new-training via the syn0_lockf values. But remember this is experimental territory: the basic doc2vec results don't rely on, or even necessarily improve with, reused word vectors.) | I recently came across the doc2vec addition to Gensim. How can I use pre-trained word vectors (e.g. found in word2vec original website) with doc2vec?
Or is doc2vec getting the word vectors from the same sentences it uses for paragraph-vector training?
Thanks. | 0 | 1 | 40,470 |
0 | 27,522,080 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2014-12-17T09:09:00.000 | 1 | 1 | 0 | Which object in Numpy Python is good for matrix manipulation? numpy.array or numpy.matrix? | 27,521,836 | 0.197375 | python,numpy | Objects of type numpy.array are n-dimensional, meaning they can represent 2-dimensional matrices, as well as 3D, 4D, 5D, etc.
The numpy.matrix, however, is designed specifically for the purpose of 2-dimensional matrices. As part of this specialisation, some of the operators are modified, for example * refers to matrix multiplication.
Use whichever is most sensible for your work, but make sure you remain consistent. If you'll occasionally have to deal with higher-dimensional data then it makes sense to use numpy.array all the time (you can still do matrix multiplication with 2D numpy.array, but you have to use a method as opposed to the * operator). | It seems like we can have an n-dimensional array with numpy.array,
and numpy.matrix is exactly the matrix type I want.
Which one is generally used? | 0 | 1 | 83
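A small sketch contrasting the two types discussed in the answer above, assuming NumPy is installed: for numpy.array the * operator is element-wise and matrix multiplication needs a method, while for numpy.matrix the * operator is matrix multiplication:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])

elementwise = a * b        # element-wise product for ndarray
matmul = a.dot(b)          # matrix product for ndarray

m1 = np.matrix([[1, 2], [3, 4]])
m2 = np.matrix([[5, 6], [7, 8]])
matmul_matrix = m1 * m2    # * means matrix product for np.matrix
```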
0 | 27,580,194 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2014-12-19T15:18:00.000 | 1 | 2 | 0 | Install Python 2.7.9 over 2.7.6 | 27,568,886 | 0.099668 | python,python-2.7,opencv,numpy,upgrade | Upgrading to the new version can give you a more stable and more fully featured version. Usually this is the case - version 2.7 is mature and stable. I think you do not need to re-install/reconfigure the packages because of this stability (2.7.6 and 2.7.9 are both 2.7 anyway). Problems are unlikely, although they may occur in a very small number of cases. And the folder for version X.X will be overwritten, because there are no separate folders for minor versions X.X.X.
Unfortunately, I cannot give a more precise answer. | I'm using Python for my research. I have both versions of Python on my system: 3.3.2 and 2.7.6. However, due to compatibility with the required packages (openCV, Numpy, Scipy, etc.) and the legacy code, I work most of the time with Python 2.7.6.
It took me quite a lot of effort at the beginning to set up the environment ready for my work. E.g. I didn't follow the "easy" way of installing the all-in-one Anaconda or Enthought Canopy software; instead I installed packages one by one (using pip). Some packages (scipy, sympy, pandas, lxml) could not be installed by pip and I had to install them using an MSI file.
Now all of them are working fine. I see that Python released the newer version: 2.7.9. My questions are:
(1) is it worth upgrading from 2.7.6 to 2.7.9, any benefit in performance, security, stability,...?
(2) will it break/overwrite the current setup of my 2.7.6 environment? I notice there are 2 folders on my C:\, which are Python27 and Python33. As the logic, Python 2.7.9 will also be in the same folder Python27 (as 2.7.6). Do I need to re-install/reconfigure the packages again?
(If it will be a lot of hassle, then perhaps I'll follow the mantra: "if it is not broken, don't fix it"...)
EDIT:
Thanks for the comments, but as my understanding, this site is about Q&A: got question & get answered.
It's a simple and direct question, let me make it clearer: e.g. Python has Lib/site-packages folder with my packages inside. Will the new installation overwrite that folder, etc.
People may know or not know about this fact, based on their knowledge or experience. I don't want to experiment myself so I asked, just for my curiosity.
I know there's a trend to reform SO to get better question and answer quality, but I don't know since when the people can be so ridiculously sensitive :) If this one is asked in "Stack Exchange Programming" site, then I can understand that it's not well-suited for that site. Now I understand another effect of the trolls: they make a community become over-sensitive and drive the new/naive newbie away. | 0 | 1 | 11,082 |
0 | 27,592,508 | 0 | 1 | 0 | 0 | 1 | true | 92 | 2014-12-21T18:32:00.000 | 119 | 5 | 0 | Floor or ceiling of a pandas series in python? | 27,592,456 | 1.2 | python,pandas,series,floor,ceil | You can use NumPy's built in methods to do this: np.ceil(series) or np.floor(series).
Both return a Series object (not an array) so the index information is preserved. | I have a pandas series series. If I want to get the element-wise floor or ceiling, is there a built in method or do I have to write the function and use apply? I ask because the data is big so I appreciate efficiency. Also this question has not been asked with respect to the Pandas package. | 0 | 1 | 103,550 |
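A quick sketch of the accepted answer above, assuming pandas and NumPy are installed; note that the index survives the operation:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.2, 2.7, -0.5], index=["a", "b", "c"])

floored = np.floor(s)  # element-wise floor, still a Series
ceiled = np.ceil(s)    # element-wise ceiling, still a Series
```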
0 | 27,604,701 | 0 | 1 | 0 | 0 | 2 | false | 0 | 2014-12-22T14:16:00.000 | 0 | 3 | 0 | how to print finite number of digits USING the scientific notation | 27,604,441 | 0 | python | %f stands for Fixed Point and will force the number to show relative to the number 1 (1e-3 is shown as 0.001). %e stands for Exponential Notation and will give you what you want (1e-3 is shown as 1e-3). | I have some values that I need to print in scientific notation (values of the order of 10^-8, -9)
But I would like to not print the long number, only two digits after the decimal point.
something like:
9.84e-08
and not
9.84389879870496809597e-08
How can I do it? I tried to use
"%.2f" % a
where 'a' is the number containing the value but these numbers appear as 0.00 | 0 | 1 | 93 |
0 | 27,604,491 | 0 | 1 | 0 | 0 | 2 | true | 0 | 2014-12-22T14:16:00.000 | 2 | 3 | 0 | how to print finite number of digits USING the scientific notation | 27,604,441 | 1.2 | python | try with this :
print "%.2e"%9.84389879870496809597e-08 #'9.84e-08' | I have some values that I need to print in scientific notation (values of the order of 10^-8, -9)
But I would like to not print the long number, only two digits after the decimal point.
something like:
9.84e-08
and not
9.84389879870496809597e-08
How can I do it? I tried to use
"%.2f" % a
where 'a' is the number containing the value but these numbers appear as 0.00 | 0 | 1 | 93 |
0 | 27,689,079 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2014-12-22T18:02:00.000 | 1 | 1 | 0 | OpenCV: how to get image format if reading from buffer? | 27,608,053 | 1.2 | python,opencv,image-processing | There is a standard Python functionimghdr.what. It rulez!
^__^ | I read an image (of unknown format, most frequent are PNGs or JPGs) from a buffer.
I can decode it with cv2.imdecode, I can even check if it is valid (imdecode returns non-None).
But how can I reveal the image type (PNG, JPG, something else) of the buffer I've just read? | 0 | 1 | 2,355 |
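One dependency-free way to answer the question above is to check the buffer's magic bytes directly; sniff_format below is a hypothetical helper covering only a few common signatures:

```python
def sniff_format(buf):
    """Guess the image format of a raw byte buffer.

    Hypothetical helper covering only a few common signatures.
    """
    if buf.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if buf.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if buf[:6] in (b"GIF87a", b"GIF89a"):
        return "gif"
    if buf.startswith(b"BM"):
        return "bmp"
    return None
```

Alternatively, the standard-library imghdr.what accepts the raw bytes via its h argument, as in imghdr.what(None, h=buf) (note that imghdr was removed in Python 3.13).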
0 | 45,621,086 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2014-12-23T22:57:00.000 | -1 | 1 | 0 | Intel MKL Error with Gaussian Fitting in Python? | 27,629,227 | -0.197375 | python,scipy,least-squares,intel-mkl | You could try Intel's python distribution. It includes a pre-built scipy optimized with MKL. | I'm doing a Monte Carlo simulation in Python in which I obtain a set of intensities at certain 2D coordinates and then fit a 2D Gaussian to them. I'm using the scipy.optimize.leastsq function and it all seems to work well except for the following error:
Intel MKL ERROR: Parameter 6 was incorrect on entry to DGELSD.
The problem occurs multiple times in a simulation. I have looked around and understand it is something to do with a bug in Intel's MKL library. I can't seem to find a solution to the problem and so I was looking at an alternative fitting function In could use. If someone does know how to get rid of the problem that would be good also. | 0 | 1 | 860 |
0 | 27,637,837 | 0 | 0 | 0 | 0 | 1 | true | 69 | 2014-12-24T12:53:00.000 | 27 | 8 | 0 | What are Python pandas equivalents for R functions like str(), summary(), and head()? | 27,637,281 | 1.2 | python,r,pandas | summary() ~ describe()
head() ~ head()
I'm not sure about the str() equivalent. | I'm only aware of the describe() function. Are there any other functions similar to str(), summary(), and head()? | 0 | 1 | 53,740 |
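A quick illustration of the mapping in the answer above, assuming pandas; the df.info() line is my own suggestion as a rough counterpart to R's str(), not part of the original answer:

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [10.0, 20.0, 30.0, 40.0]})

print(df.describe())  # ~ summary(): count, mean, std, quartiles
print(df.head(2))     # ~ head(): first rows
df.info()             # roughly comparable to str(): dtypes and shape
```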
0 | 27,641,772 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2014-12-24T19:51:00.000 | 1 | 1 | 0 | Better way to store a set of files with arrays? | 27,641,616 | 1.2 | python,database,numpy,dataset,storage | Reading 500 files in python should not take much time, as the overall file size is around few MB. Your data-structure is plain and simple in your file chunks, it ll not even take much time to parse I guess.
If the actual slowness is because of opening and closing files, then there may be an OS-related issue (it may have very poor I/O).
Did you time it, i.e. how much time does it take to read all the files?
You can also try using small database structures like SQLite, where you can store your file data and access the required data on the fly. | I've accumulated a set of 500 or so files, each of which has an array and header that stores metadata. Something like:
2,.25,.9,26 #<-- header, which is actually cryptic metadata
1.7331,0
1.7163,0
1.7042,0
1.6951,0
1.6881,0
1.6825,0
1.678,0
1.6743,0
1.6713,0
I'd like to read these arrays into memory selectively. We've built a GUI that lets users select one or multiple files from disk, then each are read in to the program. If users want to read in all 500 files, the program is slow opening and closing each file. Therefore, my question is: will it speed up my program to store all of these in a single structure? Something like hdf5? Ideally, this would have faster access than the individual files. What is the best way to go about this? I haven't ever dealt with these types of considerations. What's the best way to speed up this bottleneck in Python? The total data is only a few MegaBytes, I'd even be amenable to storing it in the program somewhere, not just on disk (but don't know how to do this) | 0 | 1 | 66 |
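A sketch of the SQLite suggestion in the answer above, using the standard-library sqlite3 module; the table schema and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a filename to persist on disk
conn.execute("CREATE TABLE spectra (file_id INTEGER, header TEXT, "
             "x REAL, y REAL)")

# one row per (x, y) pair, tagged with its source file and header
rows = [(0, "2,.25,.9,26", 1.7331, 0.0),
        (0, "2,.25,.9,26", 1.7163, 0.0)]
conn.executemany("INSERT INTO spectra VALUES (?, ?, ?, ?)", rows)

# later: pull back only the files the user selected in the GUI
selected = conn.execute(
    "SELECT x, y FROM spectra WHERE file_id = ?", (0,)).fetchall()
```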
0 | 27,688,141 | 0 | 0 | 0 | 1 | 1 | false | 0 | 2014-12-28T17:47:00.000 | 0 | 1 | 0 | Python/Cassandra: insert vs. CSV import | 27,678,990 | 0 | python,cassandra,load-testing | For a few million, I'd say just use CSV (assuming rows aren't huge); and see if it works. If not, inserts it is :)
For more heavy duty stuff, you might want to create sstables and use sstable loader. | I am generating load test data in a Python script for Cassandra.
Is it better to insert directly into Cassandra from the script, or to write a CSV file and then load that via Cassandra?
This is for a couple million rows. | 0 | 1 | 364 |
0 | 27,717,883 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-12-31T00:22:00.000 | 3 | 2 | 0 | how to do the sum of pixels with Python and OpenCV | 27,714,535 | 0.291313 | python,opencv,pixel,integral | The sumElems function in OpenCV will help you find the sum of the pixels of the whole image in Python. If you want to find only the sum of a particular portion of an image, you will have to select the ROI of the image over which the sum is to be calculated.
As a side note, if you have computed the integral image, the very last pixel represents the sum of all the pixels of the image. | I have an image and want to find the sum of a part of it and then compare it to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestion? | 0 | 1 | 17,785 |
0 | 27,738,842 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2014-12-31T00:22:00.000 | 5 | 2 | 0 | how to do the sum of pixels with Python and OpenCV | 27,714,535 | 0.462117 | python,opencv,pixel,integral | np.sum(img[y1:y2, x1:x2, c1:c2]), where c1 and c2 are the channels. | I have an image and want to find the sum of a part of it and then compare it to a threshold.
I have a rectangle drawn on the image and this is the area I need to apply the sum.
I know the cv2.integral function, but this gives me a matrix as a result. Do you have any suggestion? | 0 | 1 | 17,785 |
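A small sketch of the np.sum suggestion above, using a plain NumPy array standing in for an image; the rectangle bounds and threshold are made up:

```python
import numpy as np

img = np.arange(16, dtype=np.float64).reshape(4, 4)  # fake 4x4 image

y1, y2, x1, x2 = 1, 3, 1, 3          # rectangle: rows 1-2, cols 1-2
roi_sum = img[y1:y2, x1:x2].sum()    # sum only inside the rectangle

threshold = 25.0
is_bright = roi_sum > threshold      # compare the sum to a threshold
```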
0 | 27,810,170 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-01-05T21:58:00.000 | 0 | 1 | 0 | opencv_traincascade.exe error, "Please empty the data folder"? | 27,788,609 | 1.2 | python,opencv,classification,cascade | I solved it!
I downloaded opencv and all the other required programs on another computer and tried running the classifier training on another set of pictures. After I verified that it worked on the other computer, I copied all the files back to my computer and used them.
Problem:
When I try to train a classifier using opencv_traincascade.exe I get the following message:
"Training parameters are loaded from the parameter file in data folder!
Please empty the data folder if you want to use your own set of parameters."
The trainer then stops midway in stage 0 with the following message:
===== TRAINING 0-stage =====
BEGIN
POS count : consumed 2 : 2
Train dataset for temp stage can not be filled. Branch training terminated.
Cascade classifier can't be trained. Check the used training parameters.
Here is how I got to the problem:
I had a parameters file inside the classifier folder that my trainer usually writes classifiers to. I forgot to delete this parameters file before running the traincascade.exe file. Even though I erased the parameter file, I still got the same error.
Thanks for helping. | 0 | 1 | 359 |
0 | 55,962,515 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2015-01-06T16:50:00.000 | 0 | 2 | 0 | Is there a way to get a numpy-style view to a slice of an array stored in a hdf5 file? | 27,803,331 | 0 | python,hdf5,pytables,h5py | It is unavoidable to not copy that section of the dataset to memory.
The reason for that is simply that you are requesting the entire section, not just a small part of it.
Therefore, it must be copied completely.
So, as h5py already allows you to use HDF5 datasets in the same way as NumPy arrays, you will have to change your code to only request the values in the dataset that you currently need. | I have to work on large 3D cubes of data. I want to store them in HDF5 files (using h5py or maybe pytables). I often want to perform analysis on just a section of these cubes. This section is too large to hold in memory. I would like to have a numpy style view to my slice of interest, without copying the data to memory (similar to what you could do with a numpy memmap). Is this possible? As far as I know, performing a slice using h5py, you get a numpy array in memory.
It has been asked why I would want to do this, since the data has to enter memory at some point anyway. My code, out of necessity, already runs piecemeal over data from these cubes, pulling small bits into memory at a time. These functions are simplest if they simply iterate over the entirety of the datasets passed to them. If I could have a view to the data on disk, I could simply pass this view to these functions unchanged. If I cannot have a view, I need to write all my functions to only iterate over the slice of interest. This will add complexity to the code, and make it more likely for human error during analysis.
Is there any way to get a view to the data on disk, without copying to memory? | 0 | 1 | 568 |
0 | 27,820,207 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-07T11:58:00.000 | 2 | 1 | 0 | Distinct 0 and 1 on histogram with logscale | 27,819,021 | 0.379949 | python,matplotlib,scale,histogram,logarithm | So I assume that you want to have a logscale on the y axis from what you have written.
Obviously, what you want to achieve won't be possible. log(0) is NaN because log(0) is not defined mathematically. You could, in theory, set ylim to a very small number close to 0, but that wouldn't help you either. Your y axis would become larger and larger as you approach 0, so you couldn't display whatever you want to show in a way that would make any sense.
plt.ylim( ymin = 0 ) doesn't work because log(0) is NaN and matplot lib removes is... :( | 0 | 1 | 45 |
0 | 27,829,029 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2015-01-07T14:24:00.000 | 5 | 2 | 0 | Working with rasters in file geodatabase (.gdb) with GDAL | 27,821,571 | 1.2 | python,gdal | Currently both FileGDB and OpenFileGDB drivers handle only vector datasets. Raster support is not part of Esri's FGDB API.
You will need to use Esri tools to export the rasters to another format, such as GeoTIFF. | I'm working on a tool that converts raster layers to arrays for processing with NumPy, and ideally I would like to be able to work with rasters that come packaged in a .gdb without exporting them all (especially if this requires engaging ArcGIS or ArcPy).
Is this possible with the OpenFileGDB driver? From what I can tell this driver seems to treat raster layers the same as vector layers, which gives you access to some data about the layer but doesn't give you the ReadAsArray functionality. | 0 | 1 | 2,796 |
0 | 27,961,586 | 0 | 1 | 0 | 0 | 1 | true | 4 | 2015-01-11T03:49:00.000 | 2 | 1 | 0 | Do scipy.sparse functions release the GIL? | 27,883,769 | 1.2 | python,numpy,scipy,sparse-matrix,gil | They do, for Scipy versions >= 0.14.0 | Question
Do scipy.sparse functions, like csr._mul_matvec, release the GIL?
Context
Python functions that wrap foreign code (like C) often release the GIL during execution, enabling parallelism with multi-threading. This is common in the numpy codebase. Is it also common in scipy.sparse? If so which operations release the GIL? If they don't release the GIL then is there a fundamental issue here why not or is it just lack of man-power? | 0 | 1 | 209 |
0 | 27,961,800 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-15T10:39:00.000 | 0 | 3 | 0 | Loading data file with too many commas in Python | 27,961,552 | 0 | python,numpy,comma,data-files | I don't know if this is an option but you could pre-process it using tr -s ',' file.txt. This is a shell command so you'd have to do it either before calling python or using system. The latter might not be the best way since dragon2fly solved the issue using a python function. | I am trying to collect some data from a .txt file into my python script. The problem is that when the data was collected, it could not collect data in one of the columns, which has given me more commas than normally. It looks like this:
0,0,,-2235
1,100,,-2209
2,200,,-2209
All I want is to load the data and remove the commas but when I try with numpy.loadtxt it gives me a value error. What do I do? | 0 | 1 | 192 |
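As an alternative to pre-processing for the question above: NumPy's genfromtxt tolerates empty fields and fills them with nan instead of raising an error (a sketch, assuming the file content shown):

```python
import io
import numpy as np

raw = "0,0,,-2235\n1,100,,-2209\n2,200,,-2209\n"

# genfromtxt turns the empty third column into nan instead of failing
data = np.genfromtxt(io.StringIO(raw), delimiter=",")
```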
1 | 27,987,246 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2015-01-16T05:07:00.000 | 0 | 1 | 0 | Paraview glyph uniform distribution does not work on my dataset | 27,977,626 | 0 | python,paraview | Uniform distribution works by picking a set of random locations in space and finding data points closet to those locations to glyph. Try playing with the Seed to see if that helps pick different random locations that yield better results.
If you could share the data, that'd make it easier to figure out what could be going on here, as well. | I'm running Paraview 4.2 on Linux. Here's what's happening:
I load my XDMF/hdf5 data into PV, which contains vector data.
I apply a glyph filter to the loaded data, and hit apply (thereby using the default mode of Uniform Spatial Distribution).
No glyphs appear on screen, and the information tab shows that the filter has no data (0 points, etc.).
If I switch to All Points, or Every Nth Point, it works fine and displays the glyphs oriented correctly.
Annoyingly, if I then make a cone source, and change the input of the glyph to the cone, Uniform Spatial Distribution works fine for the cone.
No errors come up anywhere when I do this, whether in the PV gui or through pvpython.
Any ideas? | 0 | 1 | 653 |
0 | 28,009,442 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-18T11:56:00.000 | 6 | 4 | 0 | randomly select 3 numbers whose sum is 356 and each of these 3 is more than 30 | 28,009,390 | 1 | python,python-2.7 | This question is rather subjective to the definition of random, and the distribution you wish to replicate.
The simplest solution:
Choose one random number, rand1 : [30, 296]
Choose a second random number, rand2 : [30, (326 - rand1)]
Then the third cannot be random due to the constraint, so calculate it via 356 - (rand1 + rand2) | Please, how can I randomly select 3 numbers whose sum is 356 and each of the 3 is more than 30?
So output should be for example [100, 34, 222]
(but not [1,5,350])
I would like to use the random module to do this. Thank you! | 0 | 1 | 131
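A sketch of the simple solution above using the standard-library random module; the lower bound is set to 31 so each number is strictly more than 30 (note this does not sample uniformly over all valid triples):

```python
import random

def three_parts(total=356, low=31):
    # first pick leaves room for two more numbers of at least `low`
    a = random.randint(low, total - 2 * low)
    # second pick leaves room for one more number of at least `low`
    b = random.randint(low, total - a - low)
    # third is fully determined by the sum constraint
    c = total - a - b
    return [a, b, c]
```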
0 | 28,024,414 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2015-01-19T12:00:00.000 | 0 | 1 | 0 | Using precomputed Gram matrix in sklearn linear models (Lasso, Lars, etc) | 28,024,191 | 0 | python,machine-learning,scikit-learn | (My answer is based on the usage of svm.SVC, Lasso may be different.)
I think that you are supposed to pass the Gram matrix instead of X to the fit method.
Also, the Gram matrix has shape (n_samples, n_samples) so it should also be too large for memory in your case, right? | I'm trying to train a linear model on a very large dataset.
The feature space is small but there are too many samples to hold in memory.
I'm calculating the Gram matrix on-the-fly and trying to pass it as an argument to sklearn Lasso (or other algorithms) but, when I call fit, it needs the actual X and y matrices.
Any idea how to use the 'precompute' feature without storing the original matrices? | 0 | 1 | 1,183 |
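For the svm.SVC case the answer mentions, a minimal sketch with toy data (assuming scikit-learn's precomputed-kernel interface; a linear kernel is used purely for illustration):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

gram = X @ X.T                      # (n_samples, n_samples) Gram matrix
clf = SVC(kernel="precomputed")
clf.fit(gram, y)                    # the Gram matrix is passed in place of X

# at predict time, the kernel between test and training samples is needed
X_test = np.array([[0.5], [2.5]])
pred = clf.predict(X_test @ X.T)
print(pred)
```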
0 | 28,057,921 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2015-01-20T23:59:00.000 | 0 | 2 | 0 | Store large dictionary to file in Python | 28,057,407 | 0 | python,dictionary,storage,store,pickle | With 60,000 dimensions do you mean 60,000 elements? If this is the case and the numbers are 1..10, then a reasonably compact but still efficient approach is to use a dictionary of Python array.array objects with 1 byte per element (type 'B').
The size in memory should be about 60,000 entries x 60,000 bytes, totaling 3.35 GB of data.
That data structure is pickled to about the same size to disk too. | I have a dictionary with many entries and a huge vector as values. These vectors can be 60.000 dimensions large and I have about 60.000 entries in the dictionary. To save time, I want to store this after computation. However, using a pickle led to a huge file. I have tried storing to JSON, but the file remains extremely large (like 10.5 MB on a sample of 50 entries with less dimensions). I have also read about sparse matrices. As most entries will be 0, this is a possibility. Will this reduce the filesize? Is there any other way to store this information? Or am I just unlucky?
Update:
Thank you all for the replies. I want to store this data as these are word counts. For example, when given sentences, I store the amount of times word 0 (at location 0 in the array) appears in the sentence. There are obviously more words in all sentences than appear in one sentence, hence the many zeros. Then, I want to use this array to train at least three, maybe six classifiers. It seemed easier to create the arrays with word counts and then run the classifiers overnight to train and test. I use sklearn for this. This format was chosen to be consistent with other feature vector formats, which is why I am approaching the problem this way. If this is not the way to go, in this case, please let me know. I am very much aware that I have much to learn in coding efficiently!
I also started implementing sparse matrices. The file is even bigger now (testing with a sample set of 300 sentences).
Update 2:
Thank you all for the tips. John Mee was right about not needing to store the data. Both he and Mike McKerns told me to use sparse matrices, which sped up calculation significantly! So thank you for your input. Now I have a new tool in my arsenal! | 0 | 1 | 6,975
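A tiny sketch of the array.array layout suggested in the answer above (sizes shrunk from 60,000 for illustration):

```python
from array import array

n = 5  # would be ~60,000 in the question
counts = {key: array('B', [0] * n) for key in range(3)}

counts[0][2] = 7            # elements behave like small unsigned ints (0..255)
print(counts[0].itemsize)   # 1 byte per element
print(list(counts[0]))
```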
0 | 57,910,696 | 0 | 0 | 0 | 0 | 2 | false | 189 | 2015-01-21T10:17:00.000 | 2 | 7 | 0 | Random state (Pseudo-random number) in Scikit learn | 28,064,634 | 0.057081 | python,random,scikit-learn | If there is no randomstate provided the system will use a randomstate that is generated internally. So, when you run the program multiple times you might see different train/test data points and the behavior will be unpredictable. In case, you have an issue with your model you will not be able to recreate it as you do not know the random number that was generated when you ran the program.
If you look at the Tree Classifiers - either DT or RF - they try to build a tree using an optimal plan. Though most of the time this plan might be the same, there could be instances where the tree might be different, and so the predictions. When you try to debug your model you may not be able to recreate the same instance for which a Tree was built. So, to avoid all this hassle we use a random_state while building a DecisionTreeClassifier or RandomForestClassifier.
PS: You can go a bit in depth on how the Tree is built in DecisionTree to understand this better.
random_state is basically used for reproducing your problem the same every time it is run. If you do not use a random_state in train_test_split, every time you make the split you might get a different set of train and test data points, which will not help you in debugging in case you get an issue.
From Doc:
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. | I want to implement a machine learning algorithm in scikit learn, but I don't understand what this parameter random_state does? Why should I use it?
I also could not understand what is a Pseudo-random number. | 0 | 1 | 236,863 |
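The reproducibility point can be illustrated with the stdlib random module; the seed below plays the same role that random_state plays in scikit-learn:

```python
import random

random.seed(42)                                     # fix the seed
first = [random.randint(0, 100) for _ in range(5)]

random.seed(42)                                     # reseed: the stream replays
second = [random.randint(0, 100) for _ in range(5)]

print(first == second)  # True: a pseudo-random sequence is fully determined by its seed
```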
0 | 50,672,222 | 0 | 0 | 0 | 0 | 2 | false | 189 | 2015-01-21T10:17:00.000 | 23 | 7 | 0 | Random state (Pseudo-random number) in Scikit learn | 28,064,634 | 1 | python,random,scikit-learn | If you don't specify the random_state in your code, then every time you run(execute) your code a new random value is generated and the train and test datasets would have different values each time.
However, if a fixed value is assigned like random_state = 42 then no matter how many times you execute your code the result would be the same, i.e., same values in train and test datasets. | I want to implement a machine learning algorithm in scikit learn, but I don't understand what this parameter random_state does? Why should I use it?
I also could not understand what is a Pseudo-random number. | 0 | 1 | 236,863 |
0 | 28,078,626 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2015-01-21T23:07:00.000 | 0 | 1 | 0 | Detecting similar objects using OpenCV | 28,078,555 | 0 | python,opencv,image-processing | Very open question, but OpenCV is where to be looking. Your best bet would probably be building Haar cascade classifiers. Plenty of reading material on the topic, somewhat overwhelming at first but that is what I would be looking into. | I've been looking into this for a while and was wondering the feasibility of using something like feature detection in OpenCV to do the following:
I'm working on a project that requires identifying items within a grocery store that do not have barcodes (i.e. produce). I want to build a local database of the various items to be identified and, using images of the items, compare them to the database and tell what the item is.
This doesn't need to be perfectly accurate (it doesn't need to tell between different types of apples, for example), but I want it to be able to tell between something like a peach and an orange, but still be able to tell a banana is a banana even though its color is slightly different.
My question is, is what I'm trying to do possible using OpenCV? From what I've been reading, identical objects can be tracked with relative ease, but I'm having trouble stumbling upon anything more like what I'm attempting to do.
Any nudges in the right direction would be immensely appreciated. | 0 | 1 | 243 |
0 | 28,083,896 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2015-01-22T07:19:00.000 | 0 | 1 | 0 | How to install matplotlib on windows | 28,083,203 | 0 | python,matplotlib | Yes the matplotlib site above will do the job!
You will have to follow the same procedure to install numpy, which I guess you will also need. | I just started using python and definitely need the matplotlib. I'm confused by the fact that there is not even a clear explanation for the basic ideas behind installing a lib/package in python generally. Anyway, I'm using windows and have installed Python 3.4.2 downloaded from the official website, how should I install the matplotlib?
Thanks in advance! | 0 | 1 | 986 |
0 | 28,105,601 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-01-22T15:27:00.000 | 0 | 1 | 0 | How to display a matplotlib figure object | 28,092,518 | 0 | python,matplotlib,figures | Figures need a canvas to draw on.
Try fig.canvas.draw() (a bare fig.draw() requires a renderer argument). | I am working with a matplotlib-based routine that returns a figure and, as separate objects, the axes that it contains. Is there any way that I can display these things and edit them (annotate, change some font sizes, things like that)? "fig.show()" doesn't work, just returns an error. Thanks. | 0 | 1 | 1,408
0 | 47,223,192 | 0 | 1 | 0 | 0 | 1 | false | 5 | 2015-01-23T11:49:00.000 | 0 | 4 | 0 | How do I install Numpy for Python 2.7 on Windows? | 28,109,268 | 0 | python,windows,numpy | Wasted a lot of time trying to install on Windows from various binaries and installers, which all seemed to install a broken version, until I found that this worked: navigate to the python install directory and do python .\site-packages\pip install numpy | I am trying to install numpy for python 2.7, I've downloaded the zip, unzipped it and was expecting a Windows download file (.exe), but there isn't one.
Which of these files do I use to install it?
I tried running the setup.py file but don't seem to be getting anywhere.
Thanks!!! | 0 | 1 | 21,104 |
0 | 28,249,829 | 0 | 0 | 0 | 0 | 2 | true | 6 | 2015-01-27T10:00:00.000 | 10 | 4 | 0 | seeking convergence with optimize.fmin on scipy | 28,167,648 | 1.2 | python,optimization,scipy | There is actually no need to see your code to explain what is happening. I will answer point by point quoting you.
My problem is, when I start the minimization, the value printed decreases
until it reaches a certain point (the value 46700222.800). There it
continues to decrease by very small bites, e.g.,
46700222.797,46700222.765,46700222.745,46700222.699,46700222.688,46700222.678
Notice that the difference between the last 2 values is -0.009999997913837433, i.e. about 1e-2. In the convention of minimization algorithm, what you call values is usually labelled x. The algorithm stops if these 2 conditions are respected AT THE SAME TIME at the n-th iteration:
convergence on x: the absolute value of the difference between x[n] and the next iteration x[n+1] is smaller than xtol
convergence on f(x): the absolute value of the difference between f[n] and f[n+1] is smaller than ftol.
Moreover, the algorithm stops also if the maximum number of iterations is reached.
Now notice that xtol defaults to a value of 1e-4, about 100 times smaller than the value 1e-2 that appears for your case. The algorithm then does not stop, because the first condition on xtol is not respected, until it reaches the maximum number of iterations.
I read that the option ftol could be used but it has absolutely no
effect on my code. In fact, I don't even know what value to put for
ftol. I tried everything from 0.00001 to 10000 and there is still no
convergence.
This helped you respect the second condition on ftol, but again the first condition was never reached.
To reach your aim, increase also xtol.
The following methods will also help you more in general when debugging the convergence of an optimization routine.
inside the function you want to minimize, print the value of x and the value of f(x) before returning it. Then run the optimization routine. From these prints you can decide sensible values for xtol and ftol.
consider nondimensionalizing the problem. There is a reason if ftol and xtol default both to 1e-4. They expect you to formulate the problem so that x and f(x) are of order O(1) or O(10), say numbers between -100 and +100. If you carry out the nondimensionalization you handle a simpler problem, in the way that you often know what values to expect and what tolerances you are after.
if you are interested just in a rough calculation and can't estimate typical values for xtol and ftol, and you know (or you hope) that your problem is well behaved, i.e. that it will converge, you can run fmin in a try block, pass to fmin only maxiter=20 (say), and catch the error regarding the Maximum number of function evaluations has been exceeded. | I have a function I want to minimize with scipy.optimize.fmin. Note that I force a print when my function is evaluated.
My problem is, when I start the minimization, the value printed decreases until it reaches a certain point (the value 46700222.800). There it continues to decrease by very small bites, e.g., 46700222.797,46700222.765,46700222.745,46700222.699,46700222.688,46700222.678
So intuitively, I feel I have reached the minimum, since the length of each step is less than 1. But the algorithm keeps running until I get a "Maximum number of function evaluations has been exceeded" error.
My question is: how can I force my algorithm to accept the value of the parameter when the function evaluation reaches a value from where it does not really evolve anymore (let's say, I don't gain more than 1 after an iteration). I read that the option ftol could be used but it has absolutely no effect on my code. In fact, I don't even know what value to put for ftol. I tried everything from 0.00001 to 10000 and there is still no convergence. | 0 | 1 | 13,104
0 | 28,219,470 | 0 | 0 | 0 | 0 | 2 | false | 6 | 2015-01-27T10:00:00.000 | 0 | 4 | 0 | seeking convergence with optimize.fmin on scipy | 28,167,648 | 0 | python,optimization,scipy | Your question is a bit ambiguous. Are you printing the value of your function, or the point where it is evaluated?
My understanding of xtol and ftol is as follows. The iteration stops
when the change in the value of the function between iterations is less than ftol
AND
when the change in x between successive iterations is less than xtol
When you say "...accept the value of the parameter...", this suggests you should change xtol. | I have a function I want to minimize with scipy.optimize.fmin. Note that I force a print when my function is evaluated.
My problem is, when I start the minimization, the value printed decreases until it reaches a certain point (the value 46700222.800). There it continues to decrease by very small bites, e.g., 46700222.797,46700222.765,46700222.745,46700222.699,46700222.688,46700222.678
So intuitively, I feel I have reached the minimum, since the length of each step is less than 1. But the algorithm keeps running until I get a "Maximum number of function evaluations has been exceeded" error.
My question is: how can I force my algorithm to accept the value of the parameter when the function evaluation reaches a value from where it does not really evolve anymore (let's say, I don't gain more than 1 after an iteration). I read that the option ftol could be used but it has absolutely no effect on my code. In fact, I don't even know what value to put for ftol. I tried everything from 0.00001 to 10000 and there is still no convergence. | 0 | 1 | 13,104
0 | 28,189,659 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-01-28T07:56:00.000 | 2 | 2 | 0 | Rotated Paraboloid Surface Fitting | 28,187,233 | 0.197375 | python,matlab,curve-fitting,least-squares,surface | Don't use any toolboxes, GUIs or special functions for this problem. Your problem is very common and the equation you provided may be solved in a very straight-forward manner. The solution to the linear least squares problem can be outlined as:
The basis of the vector space is x^2, y^2, z^2, xy, yz, zx, x, y, z, 1. Therefore your vector has 10 dimensions.
Your problem may be expressed as Ap=b, where p = [A B C D G H I J K L]^T is the vector containing your 10 parameters (the letters match your surface equation, which has no E or F term). The right hand side b should be all zeros, but will contain some residual due to model errors, uncertainty in the data or for numerical reasons. This residual has to be minimized.
The matrix A has a dimension of N by 10, where N denotes the number of known points on the surface of the parabola.
A = [x(1)^2 y(1)^2 ... y(1) z(1) 1
...
x(N)^2 y(N)^2 ... y(N) z(N) 1]
Solve the overdetermined system of linear equations by computing p = A\b. | I have a set of experimentally determined (x, y, z) points which correspond to a parabola. Unfortunately, the data is not aligned along any particular axis, and hence corresponds to a rotated parabola.
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a model that can represent the parabola accurately using (I'm assuming) least squares fitting. I cannot seem to figure out how this works. I have thought of rotating the parabola until its central axis lines up with the z-axis, but I do not know what this axis is. Matlab's cftool only seems to fit equations of the form z = f(x, y) and I am not aware of anything in python that can solve this.
I also tried solving for the parameters numerically. When I tried making this into a matrix equation and solving by least squares, the matrix turned out to be invertible and hence my parameters were just all zero. I also am stuck on this and any help would be appreciated. I don't really mind the method as I am familiar with matlab, python and linear algebra if need be.
Thanks | 0 | 1 | 1,508 |
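As the question notes, plain least squares on the homogeneous system A p = 0 returns the trivial p = 0. A common workaround (not stated in the answer above, so treat it as a swapped-in technique) is to constrain ||p|| = 1 and take the right singular vector of A with the smallest singular value:

```python
import numpy as np

# hypothetical noise-free points on the axis-aligned paraboloid z = x^2 + y^2
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 50)
y = rng.uniform(-1.0, 1.0, 50)
z = x**2 + y**2

# N-by-10 design matrix over the basis x^2, y^2, z^2, xy, yz, zx, x, y, z, 1
A = np.column_stack([x*x, y*y, z*z, x*y, y*z, z*x, x, y, z, np.ones_like(x)])

# minimise ||A p|| subject to ||p|| = 1: the smallest right singular vector
p = np.linalg.svd(A)[2][-1]
print(np.round(p, 3))  # proportional to (1, 1, 0, 0, 0, 0, 0, 0, -1, 0)
```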
0 | 28,188,683 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2015-01-28T07:56:00.000 | 0 | 2 | 0 | Rotated Paraboloid Surface Fitting | 28,187,233 | 0 | python,matlab,curve-fitting,least-squares,surface | Do you have enough data points to fit all 10 parameters - you will need at least 10?
I also suspect that 10 parameters are too many to describe a general paraboloid, meaning that some of the parameters are dependent. My feeling is that a translated and rotated paraboloid needs 7 parameters (although I'm not really sure) | I have a set of experimentally determined (x, y, z) points which correspond to a parabola. Unfortunately, the data is not aligned along any particular axis, and hence corresponds to a rotated parabola.
I have the following general surface:
Ax^2 + By^2 + Cz^2 + Dxy + Gyz + Hzx + Ix + Jy + Kz + L = 0
I need to produce a model that can represent the parabola accurately using (I'm assuming) least squares fitting. I cannot seem to figure out how this works. I have thought of rotating the parabola until its central axis lines up with the z-axis, but I do not know what this axis is. Matlab's cftool only seems to fit equations of the form z = f(x, y) and I am not aware of anything in python that can solve this.
I also tried solving for the parameters numerically. When I tried making this into a matrix equation and solving by least squares, the matrix turned out to be invertible and hence my parameters were just all zero. I also am stuck on this and any help would be appreciated. I don't really mind the method as I am familiar with matlab, python and linear algebra if need be.
Thanks | 0 | 1 | 1,508 |
0 | 28,198,700 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2015-01-28T16:31:00.000 | 3 | 2 | 0 | Python predict_proba class identification | 28,197,444 | 1.2 | python,machine-learning,scikit-learn | Column 0 corresponds to the class 0, column 1 corresponds to the class 1. | Suppose my labeled data has two classes 1 and 0. When I run predict_proba on the test set it returns an array with two columns. Which column corresponds to which class ? | 0 | 1 | 655 |
0 | 56,207,791 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2015-01-28T16:31:00.000 | 0 | 2 | 0 | Python predict_proba class identification | 28,197,444 | 0 | python,machine-learning,scikit-learn | You can check that by printing the classes with print(estimator.classes_). The array will have the same order as the output. | Suppose my labeled data has two classes 1 and 0. When I run predict_proba on the test set it returns an array with two columns. Which column corresponds to which class ? | 0 | 1 | 655
0 | 28,202,075 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2015-01-28T16:50:00.000 | 2 | 1 | 0 | statsmodels: Method used to generate condifence intervals for quantile regression coefficients? | 28,197,813 | 1.2 | python,statsmodels | Inference for parameters is the same across models and is mostly inherited from the base classes.
Quantile regression has a model specific covariance matrix of the parameters.
tvalues, pvalues, confidence intervals, t_test and wald_test are all based on the assumption of an asymptotic normal distribution of the estimated parameters with the given covariance, and are "generic".
Linear models like OLS and WLS, and optionally some other models can use the t and F distribution instead of normal and chisquare distribution for the Wald test based inference.
specifically conf_int is defined in statsmodels.base.models.LikelihoodModelResults
partial correction:
QuantReg uses t and F distributions for inference, since it is currently treated as a linear regression model, and not normal and chisquare distributions as the related M-estimators, RLM, in statsmodels.robust.
Most models have now a use_t option to choose the inference distributions, but it hasn't been added to QuantReg. | I am using the statsmodels.formulas.api.quantreg() for quantile regression in Python. I see that when fitting the quantile regression model, there is an option to specify the significance level for confidence intervals of the regression coefficients, and the confidence interval result appears in the summary of the fit.
What statistical method is being used to generate confidence intervals about the regression coefficients? It does not appear to be documented and I've dug through the source code for quantile_regression.py and summary.py to find this with no luck. Can anyone shed some light on this? | 0 | 1 | 843 |
0 | 28,225,707 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2015-01-29T22:10:00.000 | 3 | 1 | 0 | skimage.io.imsave "destroys" grayscale image? | 28,225,600 | 1.2 | python,image,image-processing,matplotlib,scipy | I think I've figured out why. By convention, floats in skimage are supposed to be in the range [0, 1]. | I have an array of graysale image read in from a color one. If I use matplotlib to imshow the grayscale image, it looks just fine. But when I io.imsave it, it's ruined (by an outrageous amount of noise). However, if I numpy.around it first before io.imsave-ing, then it's significantly better, but black and white are still all swapped (dark regions appear white, and bright regions appear dark)
scipy.misc.imsave, on the other hand, works perfectly.
Thank you. | 0 | 1 | 998 |
0 | 28,232,764 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-01-30T09:13:00.000 | 5 | 1 | 0 | RandomForestClassifier differ from BaggingClassifier | 28,232,551 | 0.761594 | python-3.x,scikit-learn,random-forest | The RandomForestClassifier introduces randomness externally (relative to the individual tree fitting) via bagging as BaggingClassifier does.
However it also injects randomness deep inside the tree construction procedure, by sub-sampling the list of features that are candidates for splitting: a new random set of features is considered at each new split. This randomness is controlled via the max_features parameter of RandomForestClassifier, which has no equivalent in BaggingClassifier(base_estimator=DecisionTreeClassifier()). | How does using a BaggingClassifier with base_estimator=RandomForestClassifier differ from a RandomForestClassifier in sklearn? | 0 | 1 | 668
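A side-by-side sketch of the two constructions from the answer above (toy data; all other settings left at their defaults):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, n_features=8, random_state=0)

# bagging alone: bootstrap samples, but every split considers all 8 features
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                        random_state=0).fit(X, y)

# random forest: bootstrap samples AND a fresh random feature subset per split
rf = RandomForestClassifier(n_estimators=10, max_features="sqrt",
                            random_state=0).fit(X, y)

print(bag.score(X, y), rf.score(X, y))
```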
0 | 28,238,935 | 0 | 0 | 0 | 1 | 1 | false | 1 | 2015-01-30T15:11:00.000 | 0 | 1 | 0 | Retain Excel Settings When Adding New CSV | 28,238,830 | 0 | python,excel,csv | Try importing it as a csv file, instead of opening it directly on excel. | I've written a python/webdriver script that scrapes a table online, dumps it into a list and then exports it to a CSV. It does this daily.
When I open the CSV in Excel, it is unformatted, and there are fifteen (comma-delimited) columns of data in each row of column A.
Of course, I then run 'Text to Columns' and get everything in order. It looks and works great.
But tomorrow, when I run the script and open the CSV, I've got to reformat it.
Here is my question: "How can I open this CSV file with the data already spread across the columns in Excel?" | 0 | 1 | 24 |
0 | 30,613,008 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2015-02-01T01:23:00.000 | 1 | 1 | 0 | Color Perceptual Image Hashing | 28,258,468 | 0.197375 | python,image-processing,hash | I found a couple of ways to do this.
I ended up using a Mean Squared Error function that I wrote myself:
def mse(reference, query):
return (((reference).astype("double")-(query).astype("double"))**2).mean()
Until, upon later tinkering I found a function that seemed to do something similar (compare image similarity, bit by bit), but a good amount faster:
def linalg_norm(reference, query):
return np.linalg.norm(reference-query)
I have no theoretical knowledge of what the second function does, however practically it doesn't matter. I am not averse to learning how it works. | I've been trying to write a fast (ish) image matching program which doesn't match rotated or scale-deformed images, in Python.
The goal is to be able to find small sections of an image that are similar to other images in color features, but dissimilar if rotated or warped.
I found out about perceptual image hashing, and I've had a look at the ImageHash module for Python and SSIM, however most of the things I've looked at do not have color as a major factor, i.e. they average the color and only work in one channel, and phash in particular doesn't care if images are rotated.
I would like to be able to have an algorithm which would match images which at a distance would appear the same (but which would not necessarily need to be the same image).
Can anyone suggest how I would structure and write such an algorithm in python? or suggest a function which would be able to compare images in this manner? | 0 | 1 | 852 |
0 | 28,270,527 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-02-02T02:42:00.000 | 1 | 1 | 0 | pyplot - Is there a way to explicitly specify the x and y axis numbering? | 28,270,435 | 0.197375 | python,matplotlib | Aha, one needs to use the "extent" argument, as in:
plt.imshow(H, cmap='gray', extent=[-5, 3, 6, 9]) | I'm displaying an image and want to specify the x and y axis numbering rather than having row and column numbers show up there. Any ideas? | 0 | 1 | 31
0 | 28,287,768 | 0 | 0 | 0 | 0 | 1 | true | 4 | 2015-02-02T04:06:00.000 | 6 | 2 | 0 | Scikitlearn - order of fit and predict inputs, does it matter? | 28,270,967 | 1.2 | python,scikit-learn | Yes, you need to reorder them. Imagine a simpler case, Linear Regression. The algorithm will calculate the weights for each of the features, so for example if feature 1 is unimportant, it will get assigned a close to 0 weight.
If at prediction time the order is different, an important feature will be multiplied by this almost null weight, and the prediction will be totally off. | Just getting started with this library... having some issues (i've read the docs but didn't get clarity) with RandomForestClassifiers
My question is pretty simple, say i have a train data set like
A B C
1 2 3
Where A is the independent variable (y) and B-C are the dependent variables (x). Let's say the test set looks the same, however the order is
B A C
1 2 3
When I call forest.fit(train_data[0:,1:],train_data[0:,0])
do I then need to reorder the test set to match this order before running? (Ignoring the fact that I need to remove the already predicted y value (a), so lets just say B and C are out of order... ) | 0 | 1 | 2,722 |
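The linear-regression argument from the answer above, in made-up numbers:

```python
import numpy as np

weights = np.array([0.1, 5.0])   # feature B barely matters, feature C does
row = np.array([2.0, 3.0])       # values for [B, C], the training-time order

print(weights @ row)             # 15.2 with the correct column order
print(weights @ row[::-1])       # 10.3 once B and C are swapped at predict time
```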
0 | 28,315,175 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2015-02-02T16:54:00.000 | 0 | 2 | 0 | Scikit-learn RandomForestClassifier output of predict_proba | 28,282,706 | 0 | python,scikit-learn,random-forest | classifier.predict_proba() returns the class probabilities. The n dimension of the array will vary depending on how many classes there are in the subset you train on | I have a dataset that I split in two for training and testing a random forest classifier with scikit learn.
I have 87 classes and 344 samples. The output of predict_proba is, most of the times, a 3-dimensional array (87, 344, 2) (it's actually a list of 87 numpy.ndarrays of (344, 2) elements).
Sometimes, when I pick a different subset of samples for training and testing, I only get a 2-dimensional array (87, 344) (though I can't work out in which cases).
My two questions are:
what do these dimensions represent? I worked out that to get a ROC AUC score, I have to take one half of the output (that is (87, 344, 2)[:,:,1], transpose it, and then compare it with my ground truth (roc_auc_score(ground_truth, output_of_predict_proba[:,:,1].T) essentially) . But I don't understand what it really means.
why does the output change with different subsets of the data? I can't understand in which cases it returns a 3D array and in which cases a 2D one. | 0 | 1 | 2,825 |
0 | 63,671,324 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2015-02-06T04:03:00.000 | 3 | 2 | 0 | Is it possible to mask an image in Python Imaging Library (PIL)? | 28,358,379 | 0.291313 | python,image,image-processing,python-imaging-library,mask | You can use the PIL library to mask the images. Add an alpha value to img2; you can't just paste this image over img1 as-is, otherwise you won't see what is underneath.
img2.putalpha(128) #0 would be completely transparent, 255 fully opaque
Then you can paste img2 over img1 using the mask:
img1.paste(im=img2, box=(0, 0), mask=img2) | I have some traffic camera images, and I want to extract only the pixels on the road. I have used remote sensing software before where one could specify an operation like
img1 * img2 = img3
where img1 is the original image and img2 is a straight black-and-white mask. Essentially, the white parts of the image would evaluate to
img1 * 1 = img3
and the black parts would evaluate to
img1 * 0 = img3
And so one could take a slice of the image and let all of the non-important areas go to black.
Is there a way to do this using PIL? I can't find anything similar to image algebra like I'm used to seeing. I have experimented with the blend function but that just fades them together. I've read up a bit on numpy and it seems like it might be capable of it but I'd like to know for sure that there is no straightforward way of doing it in PIL before I go diving in.
Thank you. | 0 | 1 | 10,471 |
0 | 53,429,718 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2015-02-07T16:26:00.000 | 1 | 6 | 0 | Create a numpy array (10x1) with zeros and fives | 28,384,481 | 0.033321 | python,arrays,numpy | Just do the following.
import numpy as np
arr = np.zeros(10)
arr[:3] = 5 | I'm having trouble figuring out how to create a 10x1 numpy array with the number 5 in the first 3 elements and the other 7 elements with the number 0. Any thoughts on how to do this efficiently? | 0 | 1 | 5,701 |