GUI and Desktop Applications (int64) | A_Id (int64) | Networking and APIs (int64) | Python Basics and Environment (int64) | Other (int64) | Database and SQL (int64) | Available Count (int64) | is_accepted (bool) | Q_Score (int64) | CreationDate (string) | Users Score (int64) | AnswerCount (int64) | System Administration and DevOps (int64) | Title (string) | Q_Id (int64) | Score (float64) | Tags (string) | Answer (string) | Question (string) | Web Development (int64) | Data Science and Machine Learning (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 53,956,744 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-07-05T19:04:00.000 | 0 | 1 | 0 | What is the output of Spark MLLIB LDA topicsmatrix? | 38,210,820 | 0 | python,apache-spark-mllib,bayesian,lda | I think the matrix is m*n, where m is the number of words and n is the number of topics. | The output of LDAModel.topicsMatrix() is unclear to me.
I think I understand the concept of LDA and that each topic is represented by a distribution over terms.
In the LDAModel.describeTopics() it is clear (I think):
The highest sum of likelihoods of words of a sentence per topic, indicates the evidence of this tweet belonging to a topic.
With n topics, the output of describeTopics() is a n times m matrix where m stands for the size of the vocabulary. The values in this matrix are smaller or equal to 1.
However in the LDAModel.topicsMatrix(), I have no idea what I am looking at. The same holds when reading the documentation.
The matrix is an m times n matrix, the dimensions have changed, and the values in this matrix are larger than zero (and can even take the value 2, which is not a probability value). What are these values? The occurrence count of this word in the topic, perhaps?
How do I use these values to calculate the distance of a sentence to a topic? | 0 | 1 | 391 |
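A minimal sketch of the idea in the answer above, assuming the matrix returned by topicsMatrix() holds non-negative per-topic word weights (e.g. expected counts) of shape vocabulary-size by number-of-topics; normalizing each column turns it into a word distribution per topic. The variable names and values are illustrative, not taken from the original post.

```python
import numpy as np

# Hypothetical stand-in for the dense array obtained from
# LDAModel.topicsMatrix() (rows = words, columns = topics).
topics = np.array([[2.0, 0.5],
                   [1.0, 1.5],
                   [1.0, 2.0]])

# Normalize each column so every topic becomes a probability distribution
# over the vocabulary (each column sums to 1).
topic_word_dist = topics / topics.sum(axis=0, keepdims=True)
print(topic_word_dist)
```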
0 | 43,283,828 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-06T12:04:00.000 | 1 | 2 | 0 | jep for using scikit model in java | 38,223,546 | 0.099668 | java,python-2.7,machine-learning,scikit-learn,jepp | The _PyThreadState_Current error implies that it's using the wrong Python. You should be able to fix it by setting PATH and LD_LIBRARY_PATH to the python/bin and python/lib directories you want to use (and built Jep and sklearn against) before launching the process. That will ensure that Python, Jep, and sklearn are all using the same libraries.
If that doesn't work, it's possible that Jep or sklearn were built with different versions of Python than you're running. | I am using Jep to run a Python script in Java; I basically need to run a script that uses the scikit-learn package. But it shows me an error when I try to run it, which I couldn't understand.
This is the piece of code in my program,
Jep jep = new Jep();
jep.eval("import sklearn");
It shows the below error,but sklearn works perfectly well in python.
Jul 06, 2016 5:31:50 PM JepEx main
SEVERE: null
jep.JepException: jep.JepException: : /usr/local/lib/python2.7/dist-packages/sklearn/__check_build/_check_build.so: undefined symbol: _PyThreadState_Current
Contents of /usr/local/lib/python2.7/dist-packages/sklearn/check_build:
setup.py __init.pyc _check_build.so
build init.py setup.pyc
It seems that scikit-learn has not been built correctly.
If you have installed scikit-learn from source, please do not forget
to build the package before using it: run python setup.py install or
make in the source directory.
If you have used an installer, please check that it is suited for your
Python version, your operating system and your platform.
at jep.Jep.eval(Jep.java:485)
at JepEx.executeCommand(JepEx.java:26)
at JepEx.main(JepEx.java:38)
Caused by: jep.JepException: : /usr/local/lib/python2.7/dist-packages/sklearn/__check_build/_check_build.so: undefined symbol: _PyThreadState_Current
Contents of /usr/local/lib/python2.7/dist-packages/sklearn/check_build:
setup.py __init.pyc _check_build.so
build init.py setup.pyc
It seems that scikit-learn has not been built correctly.
If you have installed scikit-learn from source, please do not forget
to build the package before using it: run python setup.py install or
make in the source directory.
If you have used an installer, please check that it is suited for your
Python version, your operating system and your platform.
at /usr/local/lib/python2.7/dist-packages/sklearn/check_build/__init.raise_build_error(init.py:41)
at /usr/local/lib/python2.7/dist-packages/sklearn/check_build/__init.(init.py:46)
at /usr/local/lib/python2.7/dist-packages/sklearn/init.(init.py:56) | 0 | 1 | 860 |
0 | 38,223,850 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-07-06T12:10:00.000 | 1 | 2 | 0 | Difference between Matlab spectrogram and matplotlib specgram? | 38,223,687 | 0.099668 | python,matlab,matplotlib,spectrogram | The value in Matlab is a scalar as it represents the size of the window, and Matlab uses a Hamming window by default. The Window argument also accepts a vector, so you can pass in any windowing function you want. | I am trying to incorporate a preexisting spectrogram from Matlab into my python code, using Matplotlib. However, when I enter the window value, there is an issue: in Matlab, the value is a scalar, but Matplotlib requires a vector. Why is this so? | 0 | 1 | 865 |
0 | 38,225,574 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2016-07-06T12:10:00.000 | 1 | 2 | 0 | Difference between Matlab spectrogram and matplotlib specgram? | 38,223,687 | 1.2 | python,matlab,matplotlib,spectrogram | The arguments are just organized differently.
In matplotlib, the window size is specified using the NFFT argument. The window argument, on the other hand, is only for specifying the window itself, rather than the size. So, like MATLAB, the window argument accepts a vector. However, unlike MATLAB, it also accepts a function that should take an arbitrary-length vector and return another vector of the same size. This allows you to use functions for windows instead of just vectors.
So to put it in MATLAB terms, the MATLAB window argument is split into the window and NFFT arguments in matplotlib, while the MATLAB NFFT argument is equivalent to the matplotlib pad_to argument.
As for the reason, specifying the window and window size independently allows you to use a function as the argument for window (which, in fact, is the default). This is impossible with the MATLAB arguments.
In Python, functions are first-class objects, which isn't the case in MATLAB. So it tends to be much more common to use functions as arguments to other functions in Python compared to MATLAB. Python also allows you to specify arguments by name, something MATLAB really doesn't. So in MATLAB it is much more common to have arguments that do different things depending on the inputs, while similar functions in Python tend to split those into multiple independent arguments. | I am trying to incorporate a preexisting spectrogram from Matlab into my python code, using Matplotlib. However, when I enter the window value, there is an issue: in Matlab, the value is a scalar, but Matplotlib requires a vector. Why is this so? | 0 | 1 | 865 |
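A hedged matplotlib sketch of the argument mapping described in the answer above: NFFT sets the window size (MATLAB's window length), window takes a vector or a callable, and pad_to plays the role of MATLAB's NFFT. The signal values are made up for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 100 * t)  # toy 100 Hz signal

# NFFT = window size; window = vector or callable; pad_to = FFT length
plt.specgram(x, NFFT=256, Fs=fs, noverlap=128,
             window=mlab.window_hanning, pad_to=512)
plt.show()
```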
0 | 38,233,222 | 0 | 0 | 0 | 0 | 1 | false | 10 | 2016-07-06T15:40:00.000 | 2 | 2 | 0 | Same Python code, same data, different results on different machines | 38,228,088 | 0.197375 | python,numpy,scipy,scikit-learn,anaconda | If your code uses linear algebra, check it. Generally, roundoff errors are not deterministic, and if you have badly conditioned matrices, that may be the cause. | I have a very strange problem: I get different results for the same code and the same data on different machines.
I have a python code based on numpy/scipy/sklearn and I use anaconda as my base python distribution. Even when I copy the entire project directory (which includes all the data and code) from my main machine to another machine and run it, the results I get are different. Specifically, I'm doing a classification task and I get 3 percent difference in accuracy. I am using the same version of python and anaconda on the two machines. My main machine is ubuntu 16.04 and the results on it are lower than several other machines with various OS on which I tried (OSX, ubuntu 14.04 and Centos). So, there should be something wrong with my current system configuration because all other machines show consistent results. Since the version of my anaconda is consistent among all machines, I have no idea what else could be the problem. Any ideas what else I should check or what could be the source of the problem?
I also removed and reinstalled anaconda from scratch but it didn't help. | 0 | 1 | 6,808 |
0 | 38,230,601 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-07-06T17:46:00.000 | 2 | 2 | 0 | Why use matplotlib instead of some existing software/grapher | 38,230,462 | 1.2 | python,matplotlib,graph,data-science | Matplotlib gives you a nice level of access:
you can change all details of the plots, modify ticks, labels, spacing, ...
it has many sensible defaults, so a one-liner plot(mydata) produces fairly nice plots
it plays well with numpy and other numerical tools, so you can pass your data science objects directly to the plotting tool without going through some intermediate I/O | Hi, I feel like this question might be completely stupid, but I am still going to ask it, as I have been thinking about it.
What are the advantages of using a plotting library like matplotlib instead of existing software or a grapher?
For now, my guess is that although it takes a lot more time to use such a library, you have more possibilities.
Please, let me know what your opinion is. I am just starting to learn about data science with Python, so I would like to make things clear. | 0 | 1 | 146 |
0 | 38,230,638 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-07-06T17:46:00.000 | 3 | 2 | 0 | Why use matplotlib instead of some existing software/grapher | 38,230,462 | 0.291313 | python,matplotlib,graph,data-science | Adding to Robin's answer, I think reproducibility is key.
When you make your graphs with matplotlib, since you are coding everything rather than using an interface, all of your work is reproducible; you can just run your script again. Using other software, specifically programs with user interfaces, means that each time you want to remake your graphs, you have to start from scratch, and if someone asks you the specifics of your graph (i.e. what scale an axis used, what units something is in that might not be labeled), it is difficult for you to go back and figure it out, since there isn't code to examine. | Hi, I feel like this question might be completely stupid, but I am still going to ask it, as I have been thinking about it.
What are the advantages of using a plotting library like matplotlib instead of existing software or a grapher?
For now, my guess is that although it takes a lot more time to use such a library, you have more possibilities.
Please, let me know what your opinion is. I am just starting to learn about data science with Python, so I would like to make things clear. | 0 | 1 | 146 |
0 | 39,263,619 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-07T18:35:00.000 | 0 | 1 | 0 | Dump Python sklearn model in Windows and read it in Linux | 38,252,931 | 0 | python,linux,windows,scikit-learn,pickle | Python pickle should run between windows/linux. There may be incompatibilities if:
Python versions on the two hosts are different (if so, try installing the same version of Python on both hosts); AND/OR
if one machine is 32-bit and another is 64-bit (I don't know of any fix so far for this problem) | I am trying to save a sklearn model on a Windows server using sklearn.joblib.dump and then joblib.load the same file on a linux server (centOS71). I get the error below:
ValueError: non-string names in Numpy dtype unpickling
This is what I have tried:
Tried both python27 and python35
Tried the built in open() with 'wb' and 'rb' arguments
I really don't care how the file is moved, I just need to be able to move and load it in a reasonable amount of time. | 0 | 1 | 610 |
0 | 65,132,518 | 0 | 0 | 0 | 0 | 2 | false | 141 | 2016-07-07T22:12:00.000 | 0 | 7 | 0 | Difference(s) between merge() and concat() in pandas | 38,256,104 | 0 | python,pandas,join,merge,concat | Only the concat function has an axis parameter. Merge is used to combine dataframes side-by-side based on values in shared columns, so there is no need for an axis parameter. | What's the essential difference(s) between pd.DataFrame.merge() and pd.concat()?
So far, this is what I found, please comment on how complete and accurate my understanding is:
.merge() can only use columns (plus row-indices) and it is semantically suitable for database-style operations. .concat() can be used with either axis, using only indices, and gives the option for adding a hierarchical index.
Incidentally, this allows for the following redundancy: both can combine two dataframes using the rows indices.
pd.DataFrame.join() merely offers a shorthand for a subset of the use cases of .merge()
(Pandas is great at addressing a very wide spectrum of use cases in data analysis. It can be a bit daunting exploring the documentation to figure out what is the best way to perform a particular task. ) | 0 | 1 | 114,669 |
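A small, hedged illustration of the distinction discussed in the answer above (the column names and values are invented for the example): merge aligns rows on values in a shared column, while concat stacks frames along an axis using their indices.

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b"], "x": [1, 2]})
right = pd.DataFrame({"key": ["a", "b"], "y": [3, 4]})

# merge: database-style join on values in the shared 'key' column
merged = pd.merge(left, right, on="key")

# concat: stack along an axis using the indices; axis=0 appends rows,
# axis=1 places the frames side by side, aligned on the index
side_by_side = pd.concat([left.set_index("key"), right.set_index("key")], axis=1)

print(merged)
print(side_by_side)
```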
0 | 49,564,930 | 0 | 0 | 0 | 0 | 2 | false | 141 | 2016-07-07T22:12:00.000 | 14 | 7 | 0 | Difference(s) between merge() and concat() in pandas | 38,256,104 | 1 | python,pandas,join,merge,concat | pd.concat takes an Iterable as its argument. Hence, it cannot take DataFrames directly as its argument. Also, the dimensions of the DataFrames should match along the axis while concatenating.
pd.merge can take DataFrames as its argument, and is used to combine two DataFrames with the same columns or index, which can't be done with pd.concat since it would show the repeated columns in the resulting DataFrame.
Whereas join can be used to join two DataFrames with different indices. | What's the essential difference(s) between pd.DataFrame.merge() and pd.concat()?
So far, this is what I found, please comment on how complete and accurate my understanding is:
.merge() can only use columns (plus row-indices) and it is semantically suitable for database-style operations. .concat() can be used with either axis, using only indices, and gives the option for adding a hierarchical index.
Incidentally, this allows for the following redundancy: both can combine two dataframes using the rows indices.
pd.DataFrame.join() merely offers a shorthand for a subset of the use cases of .merge()
(Pandas is great at addressing a very wide spectrum of use cases in data analysis. It can be a bit daunting exploring the documentation to figure out what is the best way to perform a particular task. ) | 0 | 1 | 114,669 |
0 | 38,291,737 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-07-10T12:07:00.000 | 1 | 2 | 0 | Using pandas over csv library for manipulating CSV files in Python3 | 38,291,701 | 0.099668 | python,csv | You should always try to reuse, as much as possible, the work that other people have already done for you (such as the pandas library). This saves you a lot of time. Pandas has a lot to offer when you want to process such files, so it seems to me to be the best way to deal with them. Since the question is very general, I can also only give a general answer... When you use pandas, you will, however, need to read more of the documentation. But I would not say that this is a downside. | Forgive me if my question is too general, or if it's been asked before. I've been tasked to manipulate (e.g. copy and paste several ranges of entries, perform calculations on them, and then save them all to a new csv file) several large datasets in Python3.
What are the pros/cons of using the aforementioned libraries?
Thanks in advance. | 0 | 1 | 114 |
0 | 38,297,891 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-07-11T00:52:00.000 | 2 | 1 | 0 | Grey Level Co-Occurrence Matrix // Python | 38,297,765 | 1.2 | python,image-processing,scikit-image,glcm | The simplest way of binning 8-bit images is to divide each value by 32. Then each pixel value is going to be in the interval [0, 8).
Btw, more than avoiding sparse matrices (which are not really an issue), binning makes the GLCM more robust to noise. | I am trying to find the GLCM of an image using greycomatrix from skimage library. I am having issues with the selection of levels. Since it's an 8-bit image, the obvious selection should be 256; however, if I select values such as 8 (for the purpose of binning and to prevent sparse matrices from forming), I am getting errors.
QUESTIONS:
Does anyone know why?
Can anyone suggest any ideas for binning these values into an 8x8 matrix instead of a 256x256 one? | 0 | 1 | 2,092 |
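A short sketch of the binning idea from the answer above, assuming an 8-bit grayscale image loaded as a NumPy array; the function name greycomatrix matches older scikit-image releases (newer releases spell it graycomatrix), so adjust to your installed version.

```python
import numpy as np
from skimage.feature import greycomatrix  # spelled graycomatrix in newer scikit-image

# Hypothetical 8-bit image; integer division by 32 maps 0..255 into 0..7
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
binned = image // 32

# With only 8 gray levels, levels=8 yields an 8x8 co-occurrence matrix
glcm = greycomatrix(binned, distances=[1], angles=[0], levels=8,
                    symmetric=True, normed=True)
print(glcm.shape)  # (8, 8, 1, 1)
```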
0 | 53,769,151 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-07-11T10:42:00.000 | 0 | 1 | 0 | Ensembling with dynamic weights | 38,304,942 | 0 | python,scikit-learn,classification,multilabel-classification,voting | I think the VotingClassifier only accepts static weights for each estimator. However, you may solve the problem by assigning class weights with the class_weight parameter of the random forest estimator, calculating the class weights on your training set. | I was wondering if it is possible to use dynamic weights in sklearn's VotingClassifier. Overall I have 3 labels: 0 = Other, 1 = Spam, 2 = Emotion. By dynamic weights I mean the following:
I have 2 classifiers. First one is a Random Forest which performs best on Spam detection. Other one is a CNN which is superior for topic detection (good distinction between Other and Emotion). What I would like is a VotingClassifier that gives a higher weight to RF when it assigns the label "Spam/1".
Is VotingClassifier the right way to go?
Best regards,
Stefan | 0 | 1 | 139 |
0 | 38,317,151 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-07-11T19:43:00.000 | 1 | 1 | 0 | Speeding up TensorFlow Cifar10 Example for Experimentation | 38,314,964 | 0.197375 | python,tensorflow | Note that this exercise only speeds up the first step time by skipping the prefetching of a larger fraction of the data. This exercise does not speed up the overall training.
That said, the tutorial text needs to be updated. It should read
Search for min_fraction_of_examples_in_queue in cifar10_input.py.
If you lower this number, the first step should be much quicker because the model will not attempt to prefetch the input. | The TensorFlow tutorial for using CNN for the cifar10 data set has the following advice:
EXERCISE: When experimenting, it is sometimes annoying that the first training step can take so long. Try decreasing the number of images that initially fill up the queue. Search for NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN in cifar10.py.
In order to play around with it, I tried decreasing this number by a lot but it doesn't seem to change the training time. Is there anything I can do? I tried even changing it to something as low as 5 and the training session still continued very slowly.
Any help would be appreciated! | 0 | 1 | 174 |
0 | 38,353,930 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-12T06:15:00.000 | 1 | 1 | 0 | how to use dot production on batch data? | 38,321,248 | 0.197375 | python,numpy,theano,deep-learning,keras | This expression should do the trick:
theano.tensor.tanh((x * y).sum(2))
The dot product is computed 'manually' by doing element-wise multiplication, then summing over the last dimension. | I am trying to apply tanh(dot(x,y));
x and y are batch data of my RNN.
x,y have shape (n_batch, n_length, n_dim) like (2,3,4) ; 2 samples with 3 sequences, each is 4 dimensions.
I want to take an inner or dot product over the last dimension. Then tanh(dot(x,y)) should have shape (n_batch, n_length) = (2, 3).
Which function should I use? | 0 | 1 | 291 |
0 | 43,412,660 | 0 | 1 | 0 | 0 | 1 | false | 4 | 2016-07-12T11:52:00.000 | 0 | 1 | 0 | n_jobs don't work in sklearn-classes | 38,328,159 | 0 | python,scikit-learn | Several scikit-learn tools such as GridSearchCV and cross_val_score rely internally on Python’s multiprocessing module to parallelize execution onto several Python processes by passing n_jobs > 1 as argument.
Taken from Sklearn documentation:
The problem is that Python multiprocessing does a fork system call
without following it with an exec system call for performance reasons.
Many libraries like (some versions of) Accelerate / vecLib under OSX,
(some versions of) MKL, the OpenMP runtime of GCC, nvidia’s Cuda (and
probably many others), manage their own internal thread pool. Upon a
call to fork, the thread pool state in the child process is corrupted:
the thread pool believes it has many threads while only the main
thread state has been forked. It is possible to change the libraries
to make them detect when a fork happens and reinitialize the thread
pool in that case: we did that for OpenBLAS (merged upstream in master
since 0.2.10) and we contributed a patch to GCC’s OpenMP runtime (not
yet reviewed). | Does anybody use "n_jobs" of sklearn classes? I work with sklearn in Anaconda 3.4 64 bit. Spyder version is 2.3.8. My script can't finish its execution after setting the "n_jobs" parameter of some sklearn class to a non-zero value. Why is this happening? | 0 | 1 | 1,402 |
0 | 38,437,943 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-07-13T07:03:00.000 | 0 | 1 | 0 | How to do semantic keyword search with nlp | 38,344,740 | 0 | java,python,search,nlp,semantics | Your question is somewhat vague, but I will try nonetheless...
If I understand you correctly then what you want to do (depending on the effort you want to spend) is the following:
Expand the keyword to a synonym list that you will use to search for in the topics (you can use WordNet for this).
Use collocations (n-gram model) to extend the keyword to the likely bi-, tri-grams and search for these in the texts.
Depending on the availability of the data you may also want to create a classifier (e.g. using good old SVM or CRF) that maps list of keywords into topics (where topic is a class).
Assuming that you have a number of documents per each topic, you may also want to create a list of most frequent words per topic (eliminating stop-words).
Most of the functionality is available via NLTK, Pandas, etc. for Python and OpenNLP, libsvm, LingPipe in Java. | I want to do SEMANTIC keyword search on a list of topics with NLP (Natural Language Processing). I would appreciate it if you could post any reference links or ideas. | 0 | 1 | 296 |
0 | 38,375,229 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-07-14T13:04:00.000 | 0 | 1 | 0 | Sample orientation in the class, clustered by k-means in Python | 38,375,062 | 1.2 | python,scikit-learn,k-means | The way you're defining the orientation to us seems like you've got the right idea. If you use the farthest distance from the center as the denominator, then you'll get 0 as your minimum (cluster center) and 1 as your maximum (the farthest distance) and a linear distance in-between. | I've got some clustered classes, and a sample with a prediction. Now, i want to know the "orientation" of the sample, which varies from 0 to 1, where 0 - right in the class center, 1 - right on the class border(radius). I guess, it's going to be
orientation=dist_from_center/class_radius
So, I'm struggling to find the class radius. The first idea is to take the distance from the center to the most distant sample, but I would like to use something more 'academic' and less custom. | 0 | 1 | 41 |
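A rough sketch of the orientation measure discussed above, assuming a fitted scikit-learn KMeans model; taking the 'radius' as the largest distance from the cluster center to any of its own training samples is an assumption for illustration, not something from the original post.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)
km = KMeans(n_clusters=3, random_state=0).fit(X)

def orientation(sample, model, X_train):
    label = model.predict(sample.reshape(1, -1))[0]
    center = model.cluster_centers_[label]
    members = X_train[model.labels_ == label]
    radius = np.linalg.norm(members - center, axis=1).max()  # farthest member
    return np.linalg.norm(sample - center) / radius          # 0 = center, ~1 = border

print(orientation(X[0], km, X))
```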
0 | 38,376,532 | 0 | 0 | 0 | 0 | 1 | false | 22 | 2016-07-14T14:08:00.000 | 5 | 6 | 0 | Changing the scale of a tensor in tensorflow | 38,376,478 | 0.16514 | python,tensorflow,conv-neural-network | sigmoid(tensor) * 255 should do it. | Sorry if I messed up the title, I didn't know how to phrase this. Anyways, I have a tensor of a set of values, but I want to make sure that every element in the tensor has a range from 0 - 255, (or 0 - 1 works too). However, I don't want to make all the values add up to 1 or 255 like softmax, I just want to down scale the values.
Is there any way to do this?
Thanks! | 0 | 1 | 25,931 |
0 | 38,389,853 | 0 | 0 | 0 | 1 | 1 | false | 5 | 2016-07-15T05:57:00.000 | 1 | 3 | 0 | Sort A list of Strings Based on certain field | 38,388,799 | 0.066568 | python,list,python-2.7,sorting | You can use string.split(): string.split(',')[1] extracts the first timestamp field, which you can use as the sort key. | Overview: I have data something like this (each row is a string):
81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,^M
3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,^M
B3:C0:6E:77:E5:31, 2016-07-14 08:26:45, 2016-07-14 08:26:47, -65, 33:33:33:33:33:32,null,^M
61:01:55:16:B5:52, 2016-07-14 06:25:32, 2016-07-14 06:25:34, -56, 33:33:33:33:33:33,null,^M
And I want to sort the rows based on the first timestamp that is present in each string, which for these four records is:
2016-07-14 01:28:59
2016-07-14 06:25:32
2016-07-14 08:26:45
2016-07-14 14:29:13
Now, I know the sort() method, but I don't understand how I can use it here to sort all the rows based on this (timestamp) quantity, and I do need to keep the final sorted data in the same format, as some other service is going to use it.
I also understand I can provide a key() function, but I am not clear how that can be made to sort on the timestamp field. | 0 | 1 | 358 |
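A hedged sketch of the key-function approach suggested in the answer above, assuming each row is a comma-separated string whose second field is the first timestamp; ISO-formatted timestamps sort correctly as plain strings, so no datetime parsing is strictly needed. The sample rows are shortened placeholders.

```python
rows = [
    "81:0A:D7:19:25:7B, 2016-07-14 14:29:13, 2016-07-14 14:29:15, -69, 22:22:22:22:22:23,null,",
    "3B:3F:B9:0A:83:E6, 2016-07-14 01:28:59, 2016-07-14 01:29:01, -36, 33:33:33:33:33:31,null,",
]

# The second comma-separated field is the first timestamp; strip() removes
# the leading space so the lexicographic comparison is clean.
rows.sort(key=lambda row: row.split(',')[1].strip())
print(rows)
```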
0 | 42,047,026 | 0 | 0 | 0 | 1 | 1 | false | 32 | 2016-07-15T18:33:00.000 | 5 | 7 | 0 | How to write data to Redshift that is a result of a dataframe created in Python? | 38,402,995 | 0.141893 | python,pandas,dataframe,amazon-redshift,psycopg2 | Assuming you have access to S3, this approach should work:
Step 1: Write the DataFrame as a csv to S3 (I use AWS SDK boto3 for this)
Step 2: You know the columns, datatypes, and key/index for your Redshift table from your DataFrame, so you should be able to generate a create table script and push it to Redshift to create an empty table
Step 3: Send a copy command from your Python environment to Redshift to copy data from S3 into the empty table created in step 2
Works like a charm everytime.
Step 4: Before your cloud storage folks start yelling at you, delete the csv from S3.
If you see yourself doing this several times, wrapping all four steps in a function keeps it tidy. | I have a dataframe in Python. Can I write this data to Redshift as a new table?
I have successfully created a db connection to Redshift and am able to execute simple sql queries.
Now I need to write a dataframe to it. | 0 | 1 | 57,271 |
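A rough sketch of the S3-plus-COPY workflow described in the answer above; every name here (bucket, table, host, credentials, IAM role) is a placeholder, the target table is assumed to already exist with matching columns, and boto3 and psycopg2 are assumed to be installed.

```python
import boto3
import psycopg2
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})  # example data

# Step 1: write the DataFrame to a csv and push it to S3 (placeholder bucket/key)
df.to_csv("my_table.csv", index=False)
boto3.client("s3").upload_file("my_table.csv", "my-bucket", "staging/my_table.csv")

# Step 3: ask Redshift to COPY from S3 into the (pre-created) table
conn = psycopg2.connect(host="my-cluster.example.com", port=5439,
                        dbname="mydb", user="me", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY my_table
        FROM 's3://my-bucket/staging/my_table.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
        CSV IGNOREHEADER 1;
    """)
```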
0 | 38,433,583 | 0 | 1 | 0 | 0 | 1 | false | 37 | 2016-07-16T21:25:00.000 | 86 | 2 | 0 | ipython : get access to current figure() | 38,415,774 | 1 | python,matplotlib,ipython,axis,figure | plt.gcf() to get current figure
plt.gca() to get current axis | I want to add more fine grained grid on a plotted graph. The problem is all of the examples require access to the axis object.
I want to add specific grid to already plotted graph (from inside ipython).
How do I gain access to the current figure and axis in ipython ? | 0 | 1 | 55,702 |
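A brief sketch building on the answer above: grab the current axes with plt.gca() and turn on a finer (minor-tick) grid. The plotted data is purely illustrative.

```python
import matplotlib.pyplot as plt

plt.plot(range(10), [x ** 2 for x in range(10)])

ax = plt.gca()                      # current axes of the current figure
ax.minorticks_on()                  # enable minor ticks
ax.grid(which='major', linestyle='-')
ax.grid(which='minor', linestyle=':', alpha=0.5)  # finer grid lines
plt.show()
```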
0 | 38,424,476 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-07-16T22:56:00.000 | 0 | 2 | 0 | Need to disable Sympy output of 'False' (0, False) in logical operator 'not' | 38,416,381 | 0 | python,sympy,logical-operators | If you use the operators &, |, and ~ for and, or, and not, respectively, you will get a symbolic boolean expression. I also recommend using sympy.true and sympy.false instead of 1 and 0. | I am using Sympy to process randomly generated expressions which may contain the boolean operators 'and', 'or', and 'not'.
'and' and 'or' work well:
a = 0
b = 1
a and b
0
a or b
1
But 'not' introduces a 2nd term 'False' in addition to the desired value:
a, not b
(0, False)
When processed by Sympy (where 'data' (below) provides realworld values to substitute for the variables a and b):
algo_raw = 'a, not b'
algo_sym = sympy.sympify(algo_raw)
algo_sym.subs(data)
It chokes on 'False'.
I need to disable the 2nd term 'False' such that I get only the desired output '0'. | 0 | 1 | 35 |
0 | 38,416,665 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-07-16T22:56:00.000 | 1 | 2 | 0 | Need to disable Sympy output of 'False' (0, False) in logical operator 'not' | 38,416,381 | 0.099668 | python,sympy,logical-operators | a, not b doesn't do what you think it does. You are actually asking for, and correctly receiving, a tuple of two items containing:
a
not b
As the result shows, a is 0 and not b is False, 1 being truthy and the not of truthy being False.
The fact that a happens to be the same value as the result you want doesn't mean it's giving you the result you want as the first item of the tuple and you just need to throw away the second item! That would be equivalent to just writing a.
What you want to do, I assume, is write your condition as a and not b. | I am using Sympy to process randomly generated expressions which may contain the boolean operators 'and', 'or', and 'not'.
'and' and 'or' work well:
a = 0
b = 1
a and b
0
a or b
1
But 'not' introduces a 2nd term 'False' in addition to the desired value:
a, not b
(0, False)
When processed by Sympy (where 'data' (below) provides realworld values to substitute for the variables a and b):
algo_raw = 'a, not b'
algo_sym = sympy.sympify(algo_raw)
algo_sym.subs(data)
It chokes on 'False'.
I need to disable the 2nd term 'False' such that I get only the desired output '0'. | 0 | 1 | 35 |
0 | 47,932,683 | 0 | 0 | 0 | 0 | 1 | false | 30 | 2016-07-17T22:01:00.000 | 4 | 7 | 0 | How to create random orthonormal matrix in python numpy | 38,426,349 | 0.113791 | python,numpy,linear-algebra,orthogonal | If you want a non-square matrix with orthonormal column vectors, you could create a square one with any of the mentioned methods and drop some columns. | Is there a method that I can call to create a random orthonormal matrix in python? Possibly using numpy? Or is there a way to create an orthonormal matrix using multiple numpy methods? Thanks.
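One common way to get a random orthonormal matrix, sketched here as an illustration of the "square, then drop columns" idea in the answer above: QR-decompose a random Gaussian matrix (the Q factor has orthonormal columns), then keep only the columns you need.

```python
import numpy as np

n, k = 5, 3  # want a 5x3 matrix with orthonormal columns
q, _ = np.linalg.qr(np.random.randn(n, n))  # q is a random orthogonal matrix
m = q[:, :k]                                # drop columns for a non-square result

print(np.allclose(m.T @ m, np.eye(k)))      # columns are orthonormal
```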
0 | 38,433,637 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-07-18T09:49:00.000 | 3 | 3 | 0 | Python - Plotting vertical line | 38,433,584 | 1.2 | python,matplotlib | Assuming you know where the curve begins, you can just use:
plt.plot((x1, x2), (y1, y2), 'r-') to draw the line from the point (x1, y1) to the point (x2, y2)
Here in your case, x1 and x2 will be same, only y1 and y2 should change, as it is a straight vertical line that you want. | I have a curve of some data that I am plotting using matplotlib. The small value x-range of the data consists entirely of NaN values, so that my curve starts abruptly at some value of x>>0 (which is not necessarily the same value for different data sets I have). I would like to place a vertical dashed line where the curve begins, extending from the curve, to the x axis. Can anyone advise how I could do this? Thanks | 0 | 1 | 4,256 |
0 | 38,469,723 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-18T16:53:00.000 | 2 | 2 | 0 | How to print top ten topics using Gensim? | 38,442,161 | 0.197375 | python,lda,gensim,topic-modeling | Like the documentation says, there is no natural ordering between topics in LDA. If you have your own criterion for ordering the topics, such as frequency of appearance, you can always retrieve the entire list of topics from your model and sort them yourself.
However, even the notion of "top ten most frequent topics" is ambiguous, and one could reasonably come up with several different definitions of frequency. Do you mean the topic that has been assigned to the largest number of word tokens? Do you mean the topic with the highest average proportions among all documents? This ambiguity is the reason gensim has no built-in way to sort topics. | In the official explanation, there is no natural ordering between the topics in LDA.
As for the method show_topics(), if it returns num_topics <= self.num_topics, the returned subset of all topics is arbitrary and may change between two LDA training runs.
But I want to find the top ten most frequent topics of the corpus. Is there any other way to achieve this?
Many thanks. | 0 | 1 | 836 |
0 | 38,447,138 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-07-18T22:05:00.000 | 2 | 1 | 0 | Print out summaries in console | 38,446,706 | 0.379949 | python,tensorflow,protocol-buffers,tensorboard | Overall, there isn't first class support for your use case in TensorFlow, so I would parse the merged summaries back into a tf.Summary() protocol buffer, and then filter / print data as you see fit.
If you come up with a nice pattern, you could then merge it back into TensorFlow itself. I could imagine making this an optional setting on the tf.train.SummaryWriter, but it is probably best to just have a separate class for console-printing out interesting summaries.
If you want to encode into the graph itself which items should be summarized and printed, and which items should only be summarized (or to setup a system of different verbosity levels) you could use the Collections argument to the summary op constructors to organize different summaries into different groups. E.g. the loss summary could be put in collections [GraphKeys.SUMMARIES, 'ALWAYS_PRINT'], but another summary could be in collection [GraphKeys.SUMMARIES, 'PRINT_IF_VERBOSE'], etc. Then you can have different merge_summary ops for the different types of printing, and control which ones are run via command line flags. | Tensorflow's scalar/histogram/image_summary functions are very useful for logging data for viewing with tensorboard. But I'd like that information printed to the console as well (e.g. if I'm a crazy person without a desktop environment).
Currently, I'm adding the information of interest to the fetch list before calling sess.run, but this seems redundant as I'm already fetching the merged summaries. Fetching the merged summaries returns a protobuf, so I imagine I could scrape it using some generic python protobuf library, but this seems like a common enough use case that there should be an easier way.
The main motivation here is encapsulation. Let's say I have my model and training script in different files. My model has a bunch of calls to tf.scalar_summary for the information that is useful to log. Ideally, I'd be able to specify whether or not to additionally print this information to the console by changing something in the training script without changing the model file. Currently, I either pass all of the useful information to the training script (so I can fetch it), or I pepper the model file with calls to tf.Print | 0 | 1 | 1,812 |
0 | 38,460,641 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-07-19T12:55:00.000 | 0 | 1 | 0 | Python Glueviz - is there a way to replace ie update the imported data | 38,459,234 | 0 | python | As it turns out, the data is not stored in the Glueviz session file, but rather loaded fresh each time the saved session is opened from the original data source file.
Hence the solution is simple: Replace the data source file with a new file (of the same type) in with the updated data.
The updated data file must have the exact same name, be in the exact same location, and, I assume, must only have values within the source data file changed, not the amount of data, column titles, or other aspects of the original file.
Having done that, reopen Glueviz, reload that session file, and the graphs in Glueviz should update with the updated data. | I am using Glueviz 0.7.2 as part of the Anaconda package, on OSX. Glueviz is a data visualization and exploration tool.
I am regularly regenerating an updated version of the same data set from an external model, then importing that data set into Glueviz.
Currently I can not find a way to have Glueviz refresh or update an existing imported data set.
I can add a new data set, ie a second more updated version of the data from the model as a new import data set, but this does not replace the original, and does not enable the new data to show in the graphs set up in Glueviz in a simple way.
It seems the only solution to plot the updated data, is to start a new session, and needing to take some time to set up all the plots again. Most tedious!
As a python running application, Glueviz must be storing the imported data set somewhere. Hence I thinking a work around would be to replace that existing data with the updated data. With a restart of Glueviz, and a reload of that saved session, I imagine it will not know the difference and simply graph the updated data set within the existing graphs. Problem solved.
I am not sure how Glueviz as a python package stores the data file, and what python application would be the best to use to update that data file. | 0 | 1 | 464 |
0 | 38,638,969 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-07-20T21:03:00.000 | 1 | 2 | 0 | Horizontally layering LSTM cells | 38,490,811 | 0.099668 | python,tensorflow,neural-network,recurrent-neural-network,lstm | However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another.
This is the definition of recurrence. All RNNs do this. | I am pretty new to the whole neural network scene, and I was just going through a couple of tutorials on LSTM cells, specifically, tensorflow.
In the tutorial, they have an object tf.nn.rnn_cell.MultiRNNCell, which from my understanding, is a vertical layering of LSTM cells, similar to layering convolutional networks. However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another.
I understand that because the cells are recurrent, they wouldn't need to do this, but I was just trying to see if this is straight out possible.
Cheers! | 0 | 1 | 301 |
0 | 71,838,557 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-07-20T21:03:00.000 | 0 | 2 | 0 | Horizontally layering LSTM cells | 38,490,811 | 0 | python,tensorflow,neural-network,recurrent-neural-network,lstm | Horizontally stacked is useless in any case I can think of. A common confusion is that there are multiple cells (with different parameters) due to the visualization of the process within an RNN.
RNNs loop over themselves so for every input they generate new input for the cell itself. So they use the same weights over and over. If you would like to separate these connected RNNs and train them on generated sequences (different time steps), I think the weights will descend towards approximately similar parameters. So it will be similar (or equal) to just using one RNN cell using its output as input.
You can use multiple cells in a kind of 'horizontal' arrangement when using them in an encoder-decoder model. | I am pretty new to the whole neural network scene, and I was just going through a couple of tutorials on LSTM cells, specifically, tensorflow.
In the tutorial, they have an object tf.nn.rnn_cell.MultiRNNCell, which from my understanding, is a vertical layering of LSTM cells, similar to layering convolutional networks. However, I couldn't find anything about horizontal LSTM cells, in which the output of one cell is the input of another.
I understand that because the cells are recurrent, they wouldn't need to do this, but I was just trying to see if this is straight out possible.
Cheers! | 0 | 1 | 301 |
0 | 38,503,517 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-07-21T01:54:00.000 | 0 | 1 | 0 | Python Anaconda - no module named numpy | 38,493,608 | 0 | python,python-2.7,numpy,anaconda | The anaconda package in the AUR is broken. If anyone encounters this, simply install anaconda from their website. The AUR attempts to do a system-wide install, which gets rather screwy with the path. | I recently installed Anaconda on Arch Linux from the Arch repositories. By default, it was set to Python3, whereas I would like to use Python2.7. I followed the Anaconda documentation to create a new Python2 environment. Upon running my Python script which uses Numpy, I got the error No module named NumPy. I found this rather strange, as one of the major points of using Anaconda is easy installation of the NumPy/SciPy stack...
Nevertheless, I ran conda install numpy and it installed. Now, I still cannot import numpy, but when I run conda install numpy it says it is already installed. What gives?
Output of which conda: /opt/anaconda/envs/python2/bin/conda
Output of which python: /opt/anaconda/envs/python2/bin/python | 0 | 1 | 1,790 |
0 | 38,572,975 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-21T22:03:00.000 | 0 | 1 | 1 | module error in multi-node spark job on google cloud cluster | 38,515,096 | 0 | python-3.x,numpy,pyspark,google-cloud-platform,gcp | Not sure if this qualifies as a solution. I submitted the same job using Dataproc on Google Cloud Platform and it worked without any problem. I believe the best way to run jobs on a Google cluster is via the utilities offered on the platform. The Dataproc utility seems to iron out any issues related to the environment. | This code runs perfectly when I set master to localhost. The problem occurs when I submit it on a cluster with two worker nodes.
All the machines have the same version of Python and packages. I have also set the path to point to the desired Python version, i.e. 3.5.1. When I submit my Spark job in the master SSH session, I get the following error -
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, .c..internal): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/serializers.py", line 419, in loads
return pickle.loads(obj, encoding=encoding)
File "/hadoop/yarn/nm-local-dir/usercache//appcache/application_1469113139977_0011/container_1469113139977_0011_01_000004/pyspark.zip/pyspark/mllib/init.py", line 25, in
import numpy
ImportError: No module named 'numpy'
I saw other posts where people did not have access to their worker nodes. I do. I get the same message for the other worker node. not sure if I am missing some environment setting. Any help will be much appreciated. | 0 | 1 | 206 |
0 | 38,517,371 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-07-22T02:43:00.000 | 2 | 1 | 0 | Global dataframes - good or bad | 38,517,334 | 1.2 | python,pandas,global | Yes. Instead of using globals, you should wrap your data into an object and pass that object around to your functions instead (see dependency injection).
Wrapping it in an object instead of using a global will:
Allow you to unit test your code. This is absolutely the most important reason. Using globals will make it painfully difficult to test your code, since it is impossible to test any of your code in isolation due to its global nature.
Perform operations on your code safely without the fear of random mutability bugs
Stop awful concurrency bugs that happen because everything is global. | I have a program in which I load millions of rows into dataframes, and I declare them as global so my functions (>50) can all use them, like I have used a database in the past. I read that using globals is bad, and that, due to the memory mapping involved, it is slower to use globals.
I'd like to ask: if globals are bad, what would good practice be? Passing more than 10 dataframes around functions and nested functions doesn't seem like very clean code either.
Recently the program has been getting unwieldy, as different functions also update different cells and insert and delete data from the dataframes, so I am thinking of wrapping the dataframes in a class to make it more manageable. Is that a good idea? | 0 | 1 | 151 |
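A hedged sketch of the "wrap the dataframes in an object and pass it around" advice from the answer above; the class name, column names, and data are invented for illustration.

```python
import pandas as pd

class DataStore:
    """Holds the application's dataframes so functions receive them explicitly."""
    def __init__(self, orders: pd.DataFrame, customers: pd.DataFrame):
        self.orders = orders
        self.customers = customers

def total_per_customer(store: DataStore) -> pd.Series:
    # The function depends only on what it is given, which keeps it testable.
    return store.orders.groupby("customer_id")["amount"].sum()

store = DataStore(
    orders=pd.DataFrame({"customer_id": [1, 1, 2], "amount": [10.0, 5.0, 7.5]}),
    customers=pd.DataFrame({"customer_id": [1, 2], "name": ["Ann", "Bob"]}),
)
print(total_per_customer(store))
```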
0 | 38,518,356 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-07-22T04:09:00.000 | 0 | 1 | 0 | Date field in SAS imported in Python pandas Dataframe | 38,518,000 | 0 | python,pandas,dataframe,import,sas | I don't know how python stores dates, but SAS stores dates as numbers, counting the number of days from Jan 1, 1960. Using that you should be able to convert it in python to a date variable somehow.
I'm fairly certain that when data is imported to python the formats aren't honoured so in this case it's easy to work around this, in others it may not be.
There's probably some sort of function in python to create a date of Jan 1, 1960 and then increment by the number of days you get from the imported dataset to get the correct date. | I have imported a SAS dataset in python dataframe using Pandas read_sas(path)
function. REPORT_MONTH is a column in the SAS dataset defined and saved in DATE9. format. This field is imported as a float64 column in the dataframe, holding numbers which are basically SAS's internal representation for storing a date in a SAS dataset. Now I'm wondering how I can convert this field, originally a date, into a date field in the dataframe? | 0 | 1 | 398 |
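A short sketch of the "days since 1960-01-01" conversion described in the answer above, assuming REPORT_MONTH came through read_sas as a float count of days (the column name is taken from the question; verify the unit for your particular file).

```python
import pandas as pd

df = pd.DataFrame({"REPORT_MONTH": [20270.0, 20301.0]})  # hypothetical SAS date numbers

sas_epoch = pd.Timestamp("1960-01-01")
df["REPORT_MONTH"] = sas_epoch + pd.to_timedelta(df["REPORT_MONTH"], unit="D")
print(df)
```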
0 | 38,537,431 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2016-07-23T00:58:00.000 | 2 | 2 | 0 | AttributeError: 'module' object has no attribute '__version__' | 38,537,125 | 1.2 | python,module,dataset,attributeerror,lda | Do you have a module named lda.py or lda.pyc in the current directory?
If so, then your import statement is finding that module instead of the "real" lda module. | I have installed the LDA library (using pip)
I have a very simple test code (the next two rows)
import lda
print lda.datasets.load_reuters()
But i keep getting the error
AttributeError: 'module' object has no attribute 'datasets'
in fact i get that each time i access any attribute/function under lda! | 0 | 1 | 4,833 |
0 | 38,548,024 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2016-07-24T01:44:00.000 | 0 | 3 | 0 | Represent sparse matrix in Python without library usage | 38,547,996 | 0 | python,data-structures | Dict with tuples as keys might work. | I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it. | 0 | 1 | 331 |
0 | 38,548,640 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2016-07-24T01:44:00.000 | 1 | 3 | 0 | Represent sparse matrix in Python without library usage | 38,547,996 | 0.066568 | python,data-structures | The scipy.sparse library uses different formats depending on the purpose. All implement a 2d matrix
dictionary of keys - the data structure is a dictionary, with a tuple of the coordinates as key. This is easiest to setup and use.
list of lists - has 2 lists of lists. One list has column coordinates, the other column data. One sublist per row of matrix.
coo - a classic design. 3 arrays, row coordinates, column coordinates and data values
compressed row (or column) - a more complex version of coo, optimized for mathematical operations; based on linear algebra mathematics decades old
diagonal - suitable for matrices where most values are on a few diagonals | I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it. | 0 | 1 | 331 |
0 | 38,548,006 | 0 | 1 | 0 | 0 | 3 | false | 1 | 2016-07-24T01:44:00.000 | 0 | 3 | 0 | Represent sparse matrix in Python without library usage | 38,547,996 | 0 | python,data-structures | Lots of ways to do it. For example you could keep a list where each list element is either one of your data objects, or an integer representing N blank items. | I want to represent sparse matrix in Python in a data structure that does not waste space but in the same time preserves constant access time. Is there any easy/trivial way of doing it? I know that libraries such as scipy have it. | 0 | 1 | 331 |
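A minimal dictionary-of-keys sketch of the idea in the answers above (no external libraries): a dict keyed by (row, col) tuples stores only the non-default entries and keeps average O(1) access. The class and method names are illustrative.

```python
class SparseMatrix:
    def __init__(self, n_rows, n_cols, default=0):
        self.shape = (n_rows, n_cols)
        self.default = default
        self._data = {}                 # {(row, col): value} for non-default entries only

    def __setitem__(self, key, value):
        if value == self.default:
            self._data.pop(key, None)   # don't waste space on default values
        else:
            self._data[key] = value

    def __getitem__(self, key):
        return self._data.get(key, self.default)

m = SparseMatrix(1000, 1000)
m[3, 7] = 42
print(m[3, 7], m[0, 0])  # 42 0
```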
0 | 38,555,266 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-07-24T18:03:00.000 | 4 | 3 | 0 | Hardware requirements to deal with a big matrix - python | 38,555,120 | 0.26052 | python,numpy,matrix | Well, the first question is: which type of value will you store in your matrix?
Supposing it will be integers (and supposing each one uses 4 bytes), you will have 4*10^12 bytes to store. That's a large amount of information (4 TB), so, in the first place, I don't know where you are getting all that information from, and I suggest you only load parts of it that you can manage easily.
On the other side, as you can parallelize it, I would recommend using CUDA, if you can afford an NVIDIA card, so you will have much better performance.
In summary, it's hard to keep all that information in RAM alone, and you should use parallel languages and tools.
PS: You are using the O() estimation of algorithmic time complexity incorrectly. You should have said that you have O(n), with n = size_of_the_matrix, or O(nmt), with n, m and t being the dimensions of the matrix.
Considering that:
The matrix will be dense, and should be stored in the RAM.
I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are parallelizable).
Are my requirements realistic? Which will be the hardware requirements I would need to work in such way in a decent time?
I am also open to switch language (for example, performing the linear algebra operations in C) if this could improve the performances. | 0 | 1 | 230 |
0 | 38,555,206 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-07-24T18:03:00.000 | 1 | 3 | 0 | Hardware requirements to deal with a big matrix - python | 38,555,120 | 0.066568 | python,numpy,matrix | Actually, memory would be a big issue here, depending on the type of the matrix elements. Each Python float takes 24 bytes, for example, as it is a boxed object. As your matrix has 10^12 elements, you can do the math.
Switching to C would probably make it more memory-efficient, but not faster, as numpy is essentially written in C with lots of optimizations. | I am working on a python project where I will need to work with a matrix whose size is around 10000X10000X10000.
Considering that:
The matrix will be dense, and should be stored in the RAM.
I will need to perform linear algebra (with numpy, I think) on that matrix, with around O(n^3) where n=10000 operations (that are parallelizable).
Are my requirements realistic? Which will be the hardware requirements I would need to work in such way in a decent time?
I am also open to switch language (for example, performing the linear algebra operations in C) if this could improve the performances. | 0 | 1 | 230 |
0 | 38,556,752 | 0 | 1 | 0 | 0 | 1 | true | 11 | 2016-07-24T19:47:00.000 | 15 | 1 | 0 | In Tensorflow, what is the difference between a Variable and a Tensor? | 38,556,078 | 1.2 | python,tensorflow | It's true that a Variable can be used any place a Tensor can, but the key differences between the two are that a Variable maintains its state across multiple calls to run() and a variable's value can be updated by backpropagation (it can also be saved, restored etc as per the documentation).
These differences mean that you should think of a variable as representing your model's trainable parameters (for example, the weights and biases of a neural network), while you can think of a Tensor as representing the data being fed into your model and the intermediate representations of that data as it passes through your model. | The Tensorflow documentation states that a Variable can be used any place a Tensor can be used, and they seem to be fairly interchangeable. For example, if v is a Variable, then x = 1.0 + v becomes a Tensor.
What is the difference between the two, and when would I use one over the other? | 0 | 1 | 3,747 |
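A small sketch, assuming the TensorFlow 1.x-style graph/session API of that era, illustrating the distinction drawn in the answer above: the variable keeps its state across run() calls, while the tensor 1.0 + v is simply recomputed from it.

```python
import tensorflow as tf  # assumes the 1.x-style graph/session API

v = tf.Variable(0.0)               # trainable state, persists across run() calls
x = 1.0 + v                        # a Tensor: recomputed from v each time
increment = tf.assign_add(v, 1.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(x))             # 1.0
    sess.run(increment)            # the variable's state changes...
    print(sess.run(x))             # 2.0 ...and the dependent tensor reflects it
```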
0 | 45,135,108 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-07-25T06:48:00.000 | 1 | 2 | 0 | Feed Mxnet Rec to Tensorflow | 38,561,304 | 0.099668 | python,tensorflow,mxnet | You can probably feed the data.
You will need to use MXNet iterators to get the data out of the records, and then you will need to cast each record to something that TensorFlow understands. | I have created MXNet Rec data through Im2rec. I would like to feed this into TensorFlow. Is it possible? And how would I do that? Any ideas? | 0 | 1 | 365 |
0 | 48,574,315 | 0 | 1 | 0 | 0 | 1 | false | 42 | 2016-07-25T20:45:00.000 | 16 | 3 | 0 | Convert Column Name from int to string in pandas | 38,577,126 | 1 | python,pandas | You can simply use df.columns = df.columns.map(str)
DSM's first answer df.columns = df.columns.astype(str) didn't work for my dataframe. (I got TypeError: Setting dtype to anything other than float64 or object is not supported) | I have a pandas dataframe with mixed column names:
1,2,3,4,5, 'Class'
When I save this dataframe to h5file, it says that the performance will be affected due to mixed types. How do I convert the integer to string in pandas? | 0 | 1 | 62,888 |
0 | 45,575,576 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2016-07-27T00:03:00.000 | 0 | 1 | 0 | Programming on PySpark (local) vs. Python on Jupyter Notebook | 38,601,730 | 0 | python,apache-spark,pyspark | I'm in a similar situation. We've done most of our development in Python (primarily Pandas) and now we're moving into Spark as our environment has matured to the point that we can use it.
The biggest disadvantage I see to PySpark is when we have to perform operations across an entire DataFrame but PySpark doesn't directly support the library or method. For example, when trying to use the Lifetimes library, this is not supported by PySpark so we either have to convert the PySpark Dataframe to a Pandas Dataframe (which takes a lot of time and loses the advantage of the cluster) or convert the code to something PySpark can consume and parallelize across the PySpark DataFrame. | Recently I've been working a lot with pySpark, so I've been getting used to it's syntax, the different APIs and the HiveContext functions. Many times when I start working on a project I'm not fully aware of what its scope will be, or the size of the input data, so sometimes I end up requiring the full power of distributed computing, while on others I end up with some scripts that will run just fine on my local machine.
My question is, is there a disadvantage to coding with pySpark as my main language as compared to regular Python/Pandas, even for just some exploratory analysis? I ask mainly because of the cognitive work of switching between languages, and the hassle of moving my code from Python to pySpark if I do en up requiring to distribute the work.
In term of libraries I know Python would have more capabilities, but on my current projects so far don't use any library not covered by Spark, so I'm mostly concerned about speed, memory and any other possible disadvantage; which would perform better on my local machine? | 0 | 1 | 1,310 |
0 | 38,612,762 | 0 | 0 | 0 | 0 | 1 | true | 3 | 2016-07-27T10:53:00.000 | 4 | 1 | 0 | Dealing with big data to perform random forest classification | 38,610,955 | 1.2 | python,pandas,scikit-learn,sparse-matrix,bigdata | I would suggest you give CloudxLab a try.
Though it is not free it is quite affordable ($25 for a month). It provides complete environment to experiment with various tools such as HDFS, Map-Reduce, Hive, Pig, Kafka, Spark, Scala, Sqoop, Oozie, Mahout, MLLib, Zookeeper, R, Scala etc. Many of the popular trainers are using CloudxLab. | I am currently working on my thesis, which involves dealing with quite a sizable dataset: ~4mln observations and ~260ths features. It is a dataset of chess games, where most of the features are player dummies (130k for each colour).
As for the hardware and the software, I have around 12GB of RAM on this computer. I am doing all my work in Python 3.5 and use mainly pandas and scikit-learn packages.
My problem is that obviously I can't load this amount of data to my RAM. What I would love to do is to generate the dummy variables, then slice the database into like a thousand or so chunks, apply the Random Forest and aggregate the results again.
However, to do that I would need to be able to first create the dummy variables, which I am not able to do due to memory error, even if I use sparse matrices. Theoretically, I could just slice up the database first, then create the dummy variables. However, the effect of that will be that I will have different features for different slices, so I'm not sure how to aggregate such results.
My questions:
1. How would you guys approach this problem? Is there a way to "merge" the results of my estimation despite having different features in different "chunks" of data?
2. Perhaps it is possible to avoid this problem altogether by renting a server. Are there any trial versions of such services? I'm not sure exactly how much CPU/RAM would I need to complete this task.
Thanks for your help, any kind of tips will be appreciated :) | 0 | 1 | 392 |
0 | 43,754,593 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-07-27T11:43:00.000 | 0 | 1 | 0 | Import Theano on Anaconda of platform windows10 | 38,611,999 | 0 | python,anaconda,theano | Just found a temporary solution: rename configparser.py to config_parser or any other name that does not conflict,
and change the import in each module that includes it to config_parser. | I downloaded Theano from GitHub and installed it.
But when I try to import theano in IPython, I get this problem:
In [1]: import theano
ImportError Traceback (most recent call last)
<ipython-input-1-3397704bd624> in <module>()
----> 1 import theano
C:\Anaconda3\lib\site-packages\theano\__init__.py in <module>()
40 from theano.version import version as version
41
---> 42 from theano.configdefaults import config
43
44 # This is the api version for ops that generate C code. External ops
C:\Anaconda3\lib\site-packages\theano\configdefaults.py in <module>()
14
15 import theano
---> 16 from theano.configparser import (AddConfigVar, BoolParam, ConfigParam, EnumStr,
17 FloatParam, IntParam, StrParam,
18 TheanoConfigParser, THEANO_FLAGS_DICT)
C:\Anaconda3\lib\site-packages\theano\configparser.py in <module>()
13
14 import theano
---> 15 from theano.compat import configparser as ConfigParser
16 from six import string_types
17
C:\Anaconda3\lib\site-packages\theano\compat\__init__.py in <module>()
4 # Python 3.x compatibility
5 from six import PY3, b, BytesIO, next
----> 6 from six.moves import configparser
7 from six.moves import reload_module as reload
8 import collections
C:\Anaconda3\lib\site-packages\six.py in __get__(self, obj, tp)
90
91 def __get__(self, obj, tp):
---> 92 result = self._resolve()
93 setattr(obj, self.name, result) # Invokes __set__.
94 try:
C:\Anaconda3\lib\site-packages\six.py in _resolve(self)
113
114 def _resolve(self):
--> 115 return _import_module(self.mod)
116
117 def __getattr__(self, attr):
C:\Anaconda3\lib\site-packages\six.py in _import_module(name)
80 def _import_module(name):
81 """Import module, returning the module after the last dot."""
---> 82 __import__(name)
83 return sys.modules[name]
84
C:\Anaconda3\Lib\site-packages\theano\configparser.py in <module>()
13
14 import theano
---> 15 from theano.compat import configparser as ConfigParser
16 from six import string_types
17
ImportError: cannot import name 'configparser'
When I get into the files, I indeed cannot find configparser.py in the directory, but the original files do not have it either. | 0 | 1 | 437
0 | 38,615,418 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-07-27T13:58:00.000 | 1 | 1 | 0 | Tf-Idf vectorizer analyze vectors from lines instead of words | 38,615,088 | 1.2 | python,scikit-learn,vectorization,tf-idf,text-analysis | You seem to be misunderstanding what the TF-IDF vectorization is doing. For each word (or N-gram), it assigns a weight to the word which is a function of both the frequency of the term in the document (TF) and of how rare the term is across the documents of the corpus (IDF). It makes sense to use it for words (e.g. knowing how often the word "pizza" comes up) or for N-grams (e.g. "Cheese pizza" for a 2-gram).
Now, if you do it on lines, what will happen? Unless you happen to have a corpus in which lines are repeated exactly (e.g. "I need help in Python"), your TF-IDF transformation will be garbage, as each sentence will appear exactly once in the document. And if your sentences are indeed always similar to the punctuation mark, then for all intents and purposes they are not sentences in your corpus, but words. This is why there is no option to do TF-IDF with sentences: it makes zero practical or theoretical sense. | I'm trying to analyze a text which is given by lines, and I wish to vectorize the lines using sckit-learn package's TF-IDF-vectorization in python.
The problem is that the vectorization can be done either by words or by n-grams, but I want it to be done for lines, and I already ruled out a workaround that just vectorizes each line as a single word (since in that way the words and their meaning won't be considered).
Looking through the documentation I didn't find how to do that, so is there any such option? | 0 | 1 | 791
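A minimal sketch (an editor's addition, not from the original answer) of the intended usage: treat each line as its own document and let TF-IDF weight the words / n-grams inside it.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

lines = ["I need help in Python", "Python help needed", "The weather is nice"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))  # words and 2-grams as terms
X = vectorizer.fit_transform(lines)               # one row per line

print(X.shape)   # (3, number_of_distinct_terms)
```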
0 | 54,919,826 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2016-07-28T21:52:00.000 | -3 | 3 | 0 | Tensorflow: Convert Tensor to numpy array WITHOUT .eval() or sess.run() | 38,647,353 | -0.197375 | python,numpy,tensorflow | .numpy() will convert the tensor to a NumPy array; note that this works when eager execution is enabled (TensorFlow 2.x), not inside a running graph session. | How can you convert a tensor into a Numpy ndarray, without using eval or sess.run()?
I need to pass a tensor into a feed dictionary and I already have a session running. | 0 | 1 | 13,763 |
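A minimal sketch (assumes TensorFlow 2.x, where eager execution is on by default; it does not apply to graph mode with a running session):

```python
import tensorflow as tf

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
a = t.numpy()                 # plain numpy.ndarray, no eval() or sess.run()
print(type(a), a.shape)
```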
0 | 41,493,134 | 0 | 0 | 0 | 0 | 1 | false | 6 | 2016-07-29T10:55:00.000 | 3 | 2 | 0 | how to save jupyter output into a pdf file | 38,657,054 | 0.291313 | python-2.7,pdf,jupyter-notebook | When I want to save a Jupyter Notebook I right click the mouse, select print, then change Destination to Save as PDF. This does not save the analysis outputs though. So if I want to save a regression output, for example, I highlight the output in Jupyter Notebook, right click, print, Save as PDF. This process creates fairly nice looking documents with code, interpretation and graphics all-in-one. There are programs that allow you to save more directly but I haven't been able to get them to work. | I am doing some data science analysis on jupyter and I wonder how to get all the output of my cell saved into a pdf file ?
thanks | 1 | 1 | 19,046 |
0 | 38,720,955 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-08-01T18:50:00.000 | 1 | 1 | 0 | fastest format to load saved graph structure into python-igraph | 38,706,050 | 1.2 | python,profiling,igraph | If you don't have vertex or edge attributes, your best bet is a simple edge list, i.e. Graph.Read_Edgelist(). The disadvantage is that it assumes that vertex IDs are in the range [0; |V|-1], so you'll need to have an additional file next to it where line i contains the name of the vertex with ID=i. | I have a very large network structure which I am working with in igraph. There are many different file formats which igraph Graph objects can write to and then be loaded from. I ran into memory problems when using g.write_picklez, and Graph.Read_Lgl() takes about 5 minutes to finish. I was wondering if anyone had already profiled the numerous file format choices for write and load speed as well as memory footprint. FYI this network has ~5.7m nodes and ~130m edges. | 0 | 1 | 484 |
0 | 38,718,933 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-08-01T23:11:00.000 | 3 | 1 | 0 | Upgraded Seaborn 0.7.0 to 0.7.1, getting AttribueError for missing axlabel | 38,709,439 | 1.2 | python,seaborn | Changes were made in 0.7.1 to clean up the top-level namespace a bit. axlabel was not used anywhere in the documentation, so it was moved to make the main functions more discoverable. You can still access it with sns.utils.axlabel. Sorry for the inconvenience.
Note that it's usually just as easy to do ax.set(xlabel="...", ylabel="..."), though it won't get you exactly what you want here because you can't set the size to something different than the default in that line. | Having trouble with my upgrade to Seaborn 0.7.1. Conda only has 0.7.0 so I removed it and installed 0.7.1 with pip.
I am now getting this error:
AttributeError: module 'seaborn' has no attribute 'axlabel'
from this line of code
sns.axlabel(xlabel="SAMPLE GROUP", ylabel=y_label, fontsize=16)
I removed and reinstalled 0.7.0 and it fixed the issue. However, in 0.7.1, axlabel appears to still be there and I didn't see anything about changes to it in the release notes. What am I missing? | 0 | 1 | 853 |
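A minimal sketch (hypothetical data) of the two workarounds mentioned in the answer above:

```python
import matplotlib.pyplot as plt
import seaborn as sns

ax = sns.boxplot(data=[[1, 2, 3], [2, 3, 4]])

# 1) the function still exists, it just moved out of the top-level namespace
sns.utils.axlabel(xlabel="SAMPLE GROUP", ylabel="value", fontsize=16)

# 2) plain matplotlib, which does let you set the size explicitly
ax.set_xlabel("SAMPLE GROUP", fontsize=16)
ax.set_ylabel("value", fontsize=16)
plt.show()
```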
0 | 38,721,746 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-08-02T04:48:00.000 | 0 | 2 | 0 | Function that depends on the row number | 38,711,966 | 0 | python,pandas,numbers,row | Sorry I couldnt add a code sample but Im on my phone. piRSquared confirmed my fears when he said the info is lost. I guess ill have to do a loop everytime or add a column with numbers ( that will get scrambled if i sort them : / ).
Thanks everyone. | In pandas, is it possible to reference the row number for a function. I am not talking about .iloc.
iloc takes a location i.e. a row number and returns a dataframe value.
I want to access the location number in the dataframe.
For instance, if the function is in the cell that is 3 rows down and 2 columns across, I want a way to return the integer 3. Not the entry that is in that location.
Thanks. | 0 | 1 | 118 |
0 | 38,724,313 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-08-02T15:10:00.000 | 0 | 2 | 0 | Sending pandas dataframe to java application | 38,724,255 | 0 | java,python,pandas,numpy,jython | Have you tried using xml to transfer the data between the two applications ?
My next suggestion would be to output the data in JSON format to a txt file and then call the Java application, which will read the JSON from the text file (a sketch follows this record). | I have created a Python script for predictive analytics using pandas, numpy etc. I want to send my result set to a Java application. Is there a simple way to do it? I found we can use Jython for Java-Python integration but it doesn't use many data analysis libraries. Any help will be great. Thank you. | 1 | 1 | 2,769
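A minimal sketch (hypothetical file name and columns) of the JSON hand-off described above:

```python
import pandas as pd

df = pd.DataFrame({"score": [0.91, 0.27], "label": ["spam", "ham"]})
df.to_json("result_set.json", orient="records")  # [{"score": 0.91, ...}, ...]
# The Java application can then read result_set.json and parse it with
# a JSON library such as Jackson or Gson.
```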
0 | 57,166,461 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2016-08-02T15:10:00.000 | 0 | 2 | 0 | Sending pandas dataframe to java application | 38,724,255 | 0 | java,python,pandas,numpy,jython | Better approach here is to use java pipe input like python pythonApp.py | java read. Output of python application can be used as an input for java application till the format of data is consitent and known. Above soultions of creating a file and then reading also works but is prone to more errors. | I have created a python script for predictive analytics using pandas,numpy etc. I want to send my result set to java application . Is their simple way to do it. I found we can use Jython for java python integration but it doesn't use many data analysis libraries. Any help will be great . Thank you . | 1 | 1 | 2,769 |
0 | 38,729,045 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-08-02T17:35:00.000 | 0 | 1 | 0 | How to turn off matplotlib inline function and install pygtk? | 38,727,035 | 0 | python,matplotlib,ipython | You need to install pyGTK. How to do so depends on what you're using to run Python. You could also not use '%matplotlib inline' and then it'll default to whatever is installed on your system. | I got two questions when I was plotting graph in ipython.
Once I implement %matplotlib inline, I don't know how to switch back to using floating windows.
When I search for the method to switch back, people tell me to implement
%matplotlib osx or %matplotlib, however, I finally get an error, which is
Gtk* backend requires pygtk to be installed.
Can anyone help me by giving me some ideas?
P.S. I am using Windows 10 and Python 2.7 | 0 | 1 | 336
0 | 38,740,100 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-08-02T19:00:00.000 | 4 | 1 | 0 | Inputs not a sequence with RNNs and TensorFlow | 38,728,501 | 0.664037 | python,neural-network,tensorflow,recurrent-neural-network | I think when you use the tf.nn.rnn function it is expecting a list of tensors and not just a single tensor. You should unpack the input in the time direction so that it is a list of tensors of shape [?, 22501]. You could also use tf.nn.dynamic_rnn, which I think can handle this unpacking for you (a sketch follows this record). | I have some very basic LSTM code with TensorFlow and Python, where my code is
output = tf.nn.rnn(tf.nn.rnn_cell.BasicLSTMCell(10), input_flattened, initial_state=tf.placeholder("float", [None, 20]))
where my input flattened is shape [?, 5, 22501]
I'm getting the error TypeError: inputs must be a sequence on the state parameter of the lstm, and I'm ripping my hair out trying to find out why it is giving me this error. Any help would be greatly appreciated. | 0 | 1 | 3,118 |
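A minimal sketch (TensorFlow 1.x API, matching the era of the question) of the tf.nn.dynamic_rnn suggestion: it accepts the 3-D tensor [batch, time, features] directly, so no manual unpacking into a per-time-step list is needed.

```python
import tensorflow as tf

input_flattened = tf.placeholder(tf.float32, [None, 5, 22501])
cell = tf.nn.rnn_cell.BasicLSTMCell(10)

outputs, state = tf.nn.dynamic_rnn(cell, input_flattened, dtype=tf.float32)
# outputs: [batch, 5, 10] -- one LSTM output per time step
# state:   the final LSTMStateTuple(c, h) of the sequence
```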
0 | 38,733,854 | 0 | 0 | 0 | 0 | 1 | true | 57 | 2016-08-03T02:07:00.000 | 59 | 2 | 0 | Difference between scikit-learn and sklearn | 38,733,220 | 1.2 | python,python-2.7,scikit-learn | You might need to reinstall numpy. It doesn't seem to have installed correctly.
sklearn is simply the name you use to import scikit-learn in Python.
Also, try running the standard tests in scikit-learn and check the output. You will have detailed error information there.
Do you have nosetests installed? Try: nosetests -v sklearn. You type this in bash, not in the python interpreter. | On OS X 10.11.6 and python 2.7.10 I need to import from sklearn manifold.
I have numpy 1.8 Orc1, scipy .13 Ob1 and scikit-learn 0.17.1 installed.
I used pip to install sklearn(0.0), but when I try to import from sklearn manifold I get the following:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/sklearn/__init__.py", line 57, in <module>
    from .base import clone
  File "/Library/Python/2.7/site-packages/sklearn/base.py", line 11, in <module>
    from .utils.fixes import signature
  File "/Library/Python/2.7/site-packages/sklearn/utils/__init__.py", line 10, in <module>
    from .murmurhash import murmurhash3_32
  File "numpy.pxd", line 155, in init sklearn.utils.murmurhash (sklearn/utils/murmurhash.c:5029)
ValueError: numpy.dtype has the wrong size, try recompiling.
What is the difference between scikit-learn and sklearn? Also,
I can't import scikit-learn because of a syntax error. | 0 | 1 | 81,337
0 | 38,751,473 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-08-03T18:42:00.000 | 1 | 2 | 0 | MiniBatchKMeans gives different centroids after subsequent iterations | 38,751,364 | 0.099668 | python,statistics,scikit-learn,cluster-analysis,k-means | The behavior you are experiencing probably has to do with the under the hood implementation of k-means clustering that you are using. k-means clustering is an NP-hard problem, so all the implementations out there are heuristic methods. What this means practically is that for a given seed, it will converge toward a local optima that isn't necessarily consistent across multiple seeds. | I am using the MiniBatchKMeans model from the sklearn.cluster module in anaconda. I am clustering a data-set that contains approximately 75,000 points. It looks something like this:
data = np.array([8,3,1,17,5,21,1,7,1,26,323,16,2334,4,2,67,30,2936,2,16,12,28,1,4,190...])
I fit the data using the process below.
from sklearn.cluster import MiniBatchKMeans
kmeans = MiniBatchKMeans(batch_size=100)
kmeans.fit(data.reshape(-1,1))
This is all well and okay, and I proceed to find the centroids of the data:
centroids = kmeans.cluster_centers_
print centroids
Which gives me the following output:
array([[ 13.09716569],
[ 2908.30379747],
[ 46.05089228],
[ 725.83453237],
[ 95.39868475],
[ 1508.38356164],
[ 175.48099948],
[ 350.76287263]])
But, when I run the process again, using the same data, I get different values for the centroids, such as this:
array([[ 29.63143489],
[ 1766.7244898 ],
[ 171.04417206],
[ 2873.70454545],
[ 70.05295277],
[ 1074.50387597],
[ 501.36134454],
[ 8.30600975]])
Can anyone explain why this is? | 0 | 1 | 1,020 |
0 | 38,754,035 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2016-08-03T18:42:00.000 | 1 | 2 | 0 | MiniBatchKMeans gives different centroids after subsequent iterations | 38,751,364 | 0.099668 | python,statistics,scikit-learn,cluster-analysis,k-means | Read up on what mini-batch k-means is.
It will never even converge. Do one more iteration, the result will change again.
It is designed for data sets so huge you cannot load them into memory at once. So you load a batch, pretend this were the full data set, and do one iteration. Repeat with the next batch. If your batches are large enough and random, then the result will be "close enough" to be usable, while it is never optimal.
Thus:
the minibatch results are even more random than regular k-means results. They change every iteration.
if you can load your data into memory, don't use minibatch. Instead use a fast k-means implementation. (most are surprisingly slow).
P.S. on one-dimensional data, sort your data set and then use an algorithm that benefits from the sorting instead of k-means. | I am using the MiniBatchKMeans model from the sklearn.cluster module in anaconda. I am clustering a data-set that contains approximately 75,000 points. It looks something like this:
data = np.array([8,3,1,17,5,21,1,7,1,26,323,16,2334,4,2,67,30,2936,2,16,12,28,1,4,190...])
I fit the data using the process below.
from sklearn.cluster import MiniBatchKMeans
kmeans = MiniBatchKMeans(batch_size=100)
kmeans.fit(data.reshape(-1,1))
This is all well and okay, and I proceed to find the centroids of the data:
centroids = kmeans.cluster_centers_
print centroids
Which gives me the following output:
array([[ 13.09716569],
[ 2908.30379747],
[ 46.05089228],
[ 725.83453237],
[ 95.39868475],
[ 1508.38356164],
[ 175.48099948],
[ 350.76287263]])
But, when I run the process again, using the same data, I get different values for the centroids, such as this:
array([[ 29.63143489],
[ 1766.7244898 ],
[ 171.04417206],
[ 2873.70454545],
[ 70.05295277],
[ 1074.50387597],
[ 501.36134454],
[ 8.30600975]])
Can anyone explain why this is? | 0 | 1 | 1,020 |
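Both answers above come down to the stochastic nature of (mini-batch) k-means. A minimal sketch (hypothetical data) showing how fixing the seed at least makes the runs repeatable:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

np.random.seed(0)
data = np.random.exponential(scale=300, size=75000)

kmeans = MiniBatchKMeans(n_clusters=8, batch_size=100, random_state=0)
kmeans.fit(data.reshape(-1, 1))
print(sorted(kmeans.cluster_centers_.ravel()))   # identical on every run
```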
0 | 38,763,173 | 0 | 0 | 0 | 0 | 2 | true | 0 | 2016-08-04T06:08:00.000 | 2 | 3 | 0 | How should we set the number of the neurons in the hidden layer in neural network? | 38,759,647 | 1.2 | python-2.7,machine-learning,neural-network | Yes - this is a really important issue. Basically there are two ways to do that:
Try different topologies and choose the best: because the number of neurons and layers are discrete parameters, you cannot differentiate your loss function with respect to these parameters in order to use gradient descent methods. So the easiest way is to simply set up different topologies and compare them using either cross-validation or a division of your training set into training / testing / validation parts. You can also use grid / random search schemes to do that. Libraries like scikit-learn have appropriate modules for that (a sketch follows this answer).
Dropout: the training technique called dropout could also help. In this case you set up a relatively big number of nodes in your layers and try to adjust a dropout parameter for each layer. In this scenario - e.g. assuming that you have a two-layer network with 100 nodes in your hidden layer and dropout_parameter = 0.6 - you are learning a mixture of models, where every model is a neural network of size ~40 (approximately 60 nodes are turned off). This might also be considered as figuring out the best topology for your task. | In neural network theory - setting up the size of hidden layers seems to be a really important issue. Are there any criteria for how to choose the number of neurons in a hidden layer? | 0 | 1 | 638
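A minimal sketch of the first strategy above (hypothetical data; scikit-learn's MLPClassifier stands in for whatever network library is actually used):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_grid = {"hidden_layer_sizes": [(10,), (50,), (100,), (50, 50)]}
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)   # topology with the best cross-validated score
```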
0 | 38,776,068 | 0 | 0 | 0 | 0 | 2 | false | 0 | 2016-08-04T06:08:00.000 | 1 | 3 | 0 | How should we set the number of the neurons in the hidden layer in neural network? | 38,759,647 | 0.066568 | python-2.7,machine-learning,neural-network | You have to set the number of neurons in hidden layer in such a way that it shouldn't be more than # of your training example. There are no thumb rule for number of neurons.
Ex: If you are using the MNIST dataset then you might have ~78K training examples. So make sure that the number of weights in the network (784-30-10) = 784*30 + 30*10 is less than the number of training examples. But if you use something like (784-100-10) then it exceeds the number of training examples and is highly likely to over-fit.
In short, make sure you are not over-fitting, and hence you have a good chance of getting a good result. | In neural network theory - setting up the size of hidden layers seems to be a really important issue. Are there any criteria for how to choose the number of neurons in a hidden layer? | 0 | 1 | 638
0 | 38,788,040 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2016-08-04T12:34:00.000 | 1 | 1 | 0 | Scikit-Learn- How to add an 'unclassified' category? | 38,767,481 | 0.197375 | python,scikit-learn,text-classification | In the supervised learning approach as it is, you cannot add extra category.
Therefore I would use some heuristics. Try to predict probability for each category. Then, if all 4 or at least 3 probabilities are approximately equal, you can say that the sample is "unknown".
For this approach LinearSVC or another type of Support Vector Classifier is badly
suited, because it does not naturally give you probabilities. Another classifier (Logistic Regression, Naive Bayes, Trees, Forests) would be better; a sketch follows this record. | I am using Scikit-Learn to classify texts (in my case tweets) using LinearSVC. Is there a way to classify texts as unclassified when they are a poor fit with any of the categories defined in the training set? For example if I have categories for sport, politics and cinema and attempt to predict the classification on a tweet about computing it should remain unclassified. | 0 | 1 | 198
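A minimal sketch (hypothetical training data and threshold) of the heuristic above, using LogisticRegression because it exposes predict_proba:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["the match ended 2-0", "parliament passed the bill",
               "the film premieres friday", "the striker scored twice"]
train_labels = ["sport", "politics", "cinema", "sport"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

new_tweet = ["quantum computing breakthrough announced"]
probs = clf.predict_proba(new_tweet)[0]
if probs.max() < 0.5:                    # threshold is a tuning choice
    print("unclassified")
else:
    print(clf.predict(new_tweet)[0])
```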
1 | 38,774,932 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-08-04T18:23:00.000 | 0 | 1 | 0 | Difference between Kivy camera and opencv camera | 38,774,748 | 0 | python,opencv,camera,kivy,motion-detection | opencv is a computer vision framework (hence the c-v) which can interact with device cameras. Kivy is a cross-platform development tool which can interact with device cameras. It makes sense that there are good motion detection tutorials for opencv but not kivy camera, since this isnt really what kivy is for. | What is the difference between Kivy Camera and opencv ? I am asking this because in Kivy Camera the image gets adjusted according to frame size but in opencv this does not happen. Also I am not able to do motion detection in kivy camera whereas I found a great tutorial for motion detection on opencv. If someone can clarify the difference it would be appreciated ! Thanks :) | 0 | 1 | 386 |
0 | 38,794,707 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-08-05T17:14:00.000 | 0 | 3 | 0 | How to get constant function to keep shape in NumPy | 38,794,622 | 0 | python,numpy,array-broadcasting | Use x.fill(1). Make sure to return it properly as fill doesn't return a new variable, it modifies x | I have a NumPy array A with shape (m,n) and want to run all the elements through some function f. For a non-constant function such as for example f(x) = x or f(x) = x**2 broadcasting works perfectly fine and returns the expected result. For f(x) = 1, applying the function to my array A however just returns the scalar 1.
Is there a way to force broadcasting to keep the shape, i.e. in this case to return an array of 1s? | 0 | 1 | 1,182 |
0 | 38,803,263 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-08-05T18:43:00.000 | 2 | 1 | 0 | Is it possible to increase the number of centroids in KMeans during fitting? | 38,795,912 | 0.379949 | python,scikit-learn,cluster-analysis,k-means | It is not a good idea to do this during optimization, because it changes the optimization procedure substantially. It will essentially reset the whole optimization. There are strategies such as bisecting k-means that try to learn the value of k during clustering, but they are a bit more tricky than increasing k by one - they decide upon one particular cluster to split, and try to choose good initial centroids for this cluster to keep things somewhat stable.
Furthermore, increasing k will not necessarily improve Silhouette. It will trivially improve SSQ, so you cannot use SSQ as a heuristic for choosing k, either.
Last but not least, computing the Silhouette is O(n^2). It is too expensive to run often. If you have large enough amount of data to require MiniBatchKMeans (which really is only for massive data), then you clearly cannot afford to compute Silhouette at all. | I am attempting to use MiniBatchKMeans to stream NLP data in and cluster it, but have no way of determining how many clusters I need. What I would like to do is periodically take the silhouette score and if it drops below a certain threshold, increase the number of centroids. But as far as I can tell, n_clusters is set when you initialize the clusterer and can't be changed without restarting. Am I wrong here? Is there another way to approach this problem that would avoid this issue? | 0 | 1 | 77 |
0 | 38,819,049 | 1 | 0 | 1 | 0 | 1 | false | 3 | 2016-08-07T22:03:00.000 | 0 | 1 | 0 | Most efficient way to check twitter friendship? (over 5000 check) | 38,818,981 | 0 | python,twitter,tweepy | I don't know much about the limits with Tweepy, but you can always write a basic web scraper with urllib and BeautifulSoup to do so.
You could take a website such as www.doesfollow.com which accomplishes what you are trying to do. (not sure about request limits with this page, but there are dozens of other websites that do the same thing) This website is interesting because the url is super simple.
For example, in order to check if Google and Twitter are "friends" on Twitter, the link is simply www.doesfollow.com/google/twitter.
This would make it very easy for you to run through the users as you can just append the users to the url such as 'www.doesfollow.com/'+ user1 + '/' + user2
The results page of doesfollow has this tag if the users are friends on Twitter:
<div class="yup">yup</div>,
and this tag if the users are not friends on Twitter:
<div class="nope">nope</div>
So you could parse the page source code and search to find which of those tags exist to determine if the users are friends on Twitter.
This might not be the way that you wanted to approach the problem, but it's a possibility. I'm not entirely sure how to approach the graphing part of your question though. I'd have to look into that. (A sketch follows this record.) | I'm facing a problem like this: I used tweepy to collect 10,000+ tweets, then used an NLTK naive-bayes classifier and filtered the tweets down to 5,000+.
I want to generate a graph of user friendship from those 5,000 classified tweets. The problem is that I am able to check it with tweepy.api.show_friendship(), but it takes a great deal of time and sometimes ends up with an endless rate-limit error.
Is there any way I can check the friendship more efficiently? | 0 | 1 | 592
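A minimal sketch of the scraping idea above (hypothetical: it assumes doesfollow.com still serves the "yup"/"nope" divs described in the answer, and it ignores that site's own limits and terms of service; requests stands in for urllib):

```python
import requests
from bs4 import BeautifulSoup

def does_follow(user1, user2):
    url = "https://www.doesfollow.com/{}/{}".format(user1, user2)
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    return soup.find("div", class_="yup") is not None

print(does_follow("google", "twitter"))
```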
0 | 39,021,770 | 0 | 1 | 0 | 0 | 1 | true | 2 | 2016-08-08T12:08:00.000 | 1 | 3 | 0 | How to use Tensorflow and Sci-Kit Learn together in one environment in PyCharm? | 38,828,829 | 1.2 | python,pycharm,tensorflow,anaconda,ubuntu-16.04 | Anaconda defaults doesn't provide tensorflow yet, but conda-forge do, conda install -c conda-forge tensorflow should see you right, though (for others reading!) the installed tensorflow will not work on CentOS < 7 (or other Linux Distros of a similar vintage). | I am using Ubuntu 16.04 . I tried to install Tensorflow using Anaconda 2 . But it installed a Environment inside ubuntu . So i had to create a virtual environment and then use Tensorflow . Now how can i use both Tensorflow and Sci-kit learn together in a single environment . | 0 | 1 | 2,492 |
0 | 39,119,230 | 0 | 0 | 0 | 1 | 1 | true | 0 | 2016-08-09T10:03:00.000 | 1 | 1 | 0 | Exporting R data.frame/tbl to Google BigQuery table | 38,847,743 | 1.2 | python,r,dataframe,google-bigquery | It looks like bigrquery package does the job with insert_upload_job(). In the package documentation, it says this function
> is only suitable for relatively small datasets
but it doesn't specify any size limits. For me, it's been working for tens of thousands of rows. | I know it's possible to import Google BigQuery tables to R through bigrquery library. But is it possible to export tables/data frames created in R to Google BigQuery as new tables?
Basically, is there an R equivalent of Python's temptable.insert_data(df) or df.to_sql() ?
thanks for your help,
Kasia | 0 | 1 | 635 |
0 | 38,916,691 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2016-08-11T17:53:00.000 | 1 | 1 | 0 | Find the Number of Distinct Topics After LDA in Python/ R | 38,903,061 | 1.2 | python,r,lda,topic-modeling,text-analysis | First, your question kind of assumes that topics identified by LDA correspond to real semantic topics - I'd be very careful about that assumption and take a look at the documents and words assigned to topics you want to interpret that way, as LDA often have random extra words assigned, can merge two or more actual topics into one (especially with few topics overall) and may not be meaningful at all ("junk" topics).
In answer to your question then: the idea of a "distinct number of topics" isn't clear at all. Most of the work I've seen uses a simple threshold to decide if a document's topic proportion is "significant".
A more principled way is to look at the proportion of words assigned to that topic that appear in the document - if it's "significantly" higher than average, the topic is significant in the document, but again, this is involves a somewhat arbitrary threshold. I don't think anything can beat close reading of some examples to make meaningful choices here.
I should note that, depending on how you set the document-topic prior (usually beta), you may not have each document focussed on just a few topics (as seems to be your case), but a much more even mix. In this case "distinct number of topics" starts to be less meaningful.
P.S. Using word lists that are meaningful in your application is not a bad way to identify candidate topics of interest. Especially useful if you have many topics in your model (:
P.P.S.: I hope you have a reasonable number of documents (at least some thousands), as LDA tends to be less meaningful with fewer documents, capturing chance word co-occurrences rather than meaningful ones.
P.P.P.S.: I'd go for a larger number of topics with parameter optimisation (as provided by the Mallet LDA implementation) - this effectively chooses a reasonable number of topics for your model, with very few words assigned to the "extra" topics. | As far as I know, I need to fix the number of topics for LDA modeling in Python/ R. However, say I set topic=10 while the results show that, for a document, nine topics are all about 'health' and the distinct number of topics for this document is 2 indeed. How can I spot it without examining the key words of each topic and manually count the real distinct topics?
P.S. I googled and learned that there are Vocabulary Word Lists (Word Banks) by Theme, and I could pair each topic with a theme according to the word lists. If several topics fall into the same theme, then I can combine them into one distinct topic. I guess it's an approach worth trying and I'm looking for smarter ideas, thanks. | 0 | 1 | 618 |
0 | 38,924,162 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2016-08-12T15:51:00.000 | 1 | 2 | 0 | How can I implement a dictionary with a NumPy array? | 38,921,975 | 0.099668 | python,arrays,numpy,dictionary,red-black-tree | The most basic form of a dictionary is a structure called a HashMap. Implementing a hashmap relies on turning your key into a value that can be quickly looked up. A pathological example would be using ints as keys: The value for key 1 would go in array[1], the value for key 2 would go in array[2], the Hash Function is simply the identity function. You can easily implement that using a numpy array.
If you want to use other types, it's just a case of writing a good hash function to turn those keys into unique indexes into your array. For example, if you know you've got a (int, int) tuple, and the first value will never be more than 100, you can do 100*key[1] + key[0].
The implementation of your hash function is what will make or break your dictionary replacement. | I need to write a huge amount number-number pairs into a NumPy array. Since a lot of these pairs have a second value of 0, I thought of making something akin to a dictionary. The problem is that I've read through the NumPy documentation on structured arrays and it seems like dictionaries built like those on the page can only use strings as keys.
Other than that, I need insertion and searching to have log(N) complexity. I thought of making my own Red-black tree structure using a regular NumPy array as storage, but I'm fairly certain there's an easier way to go about this.
Language is Python 2.7.12. | 0 | 1 | 4,918 |
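A minimal sketch of the "identity hash" idea in the answer above (the capacity and the x < 100 assumption are illustrative): integer (x, y) keys are mapped straight to slots of a flat NumPy array, giving O(1) get/set and an implicit 0 for every pair never written.

```python
import numpy as np

table = np.zeros(100 * 1000, dtype=np.int64)   # supports x < 100, y < 1000

def put(key, value):
    x, y = key
    table[100 * y + x] = value

def get(key):
    x, y = key
    return table[100 * y + x]

put((3, 7), 42)
print(get((3, 7)), get((5, 5)))   # 42 0
```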
0 | 38,945,813 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-08-14T19:11:00.000 | 0 | 2 | 0 | How does cv2.fitEllipse handle width/height with regards to rotation? | 38,945,695 | 0 | python,opencv,ellipse,data-fitting | Empirically, I ran code matching thousands of ellipses, and I never got one return value where the returned width was greater than the returned height. So it seems OpenCV normalizes the ellipse such that height >= width. | An ellipse of width 50, height 100, and angle 0, would be identical to an ellipse of width 100, height 50, and angle 90 - i.e. one is the rotation of the other.
How does cv2.fitEllipse handle this? Does it return ellipses in some normalized form (i.e. angle is picked such that width is always < height), or can it provide any output?
I ask as I'm trying to determine whether two fit ellipses are similar, and am unsure whether I have to account for these things. The documentation doesn't address this at all. | 0 | 1 | 1,827 |
0 | 52,064,081 | 0 | 0 | 0 | 0 | 6 | false | 57 | 2016-08-15T14:07:00.000 | 24 | 10 | 0 | Dataframe not showing in Pycharm | 38,956,660 | 1 | python,pandas,pycharm | I have faced the same problem with PyCharm 2018.2.2. The reason was having a special character in a column's name as mentioned by Yunzhao .
If you have a column name like 'R&D', changing it to 'RnD' will fix the problem. It's really strange JetBrains hasn't solved this problem for over 2 years. | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | 0 | 1 | 27,005 |
0 | 51,483,568 | 0 | 0 | 0 | 0 | 6 | false | 57 | 2016-08-15T14:07:00.000 | 9 | 10 | 0 | Dataframe not showing in Pycharm | 38,956,660 | 1 | python,pandas,pycharm | I have met the same problems.
I figured it was because of the special characters in column names (in my case)
In my case, I have "%" in the column name, then it doesn't show the data in View as DataFrame function. After I remove it, everything was correctly shown.
Please double check if you also have some special characters in the column names. | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | 0 | 1 | 27,005 |
0 | 57,003,355 | 0 | 0 | 0 | 0 | 6 | false | 57 | 2016-08-15T14:07:00.000 | 2 | 10 | 0 | Dataframe not showing in Pycharm | 38,956,660 | 0.039979 | python,pandas,pycharm | In my situation, the problem was caused by two identical column names in my dataframe.
Check it by:df.columns.shape[0] == len(set(df.columns)) | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | 0 | 1 | 27,005 |
0 | 55,593,342 | 0 | 0 | 0 | 0 | 6 | false | 57 | 2016-08-15T14:07:00.000 | 2 | 10 | 0 | Dataframe not showing in Pycharm | 38,956,660 | 0.039979 | python,pandas,pycharm | I use PyCharm 2019.1.1 (Community Edition) and I run Python 3.7.
When I first click on "View as DataFrame" there seems to be the same issue, but if I wait a few second the content pops up. For me it is a matter of loading. | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | 0 | 1 | 27,005 |
0 | 57,313,249 | 0 | 0 | 0 | 0 | 6 | false | 57 | 2016-08-15T14:07:00.000 | 2 | 10 | 0 | Dataframe not showing in Pycharm | 38,956,660 | 0.039979 | python,pandas,pycharm | For the sake of completeness: I face the same problem, due to the fact that some elements in the index of the dataframe contain a question mark '?'. One should avoid that too, if you still want to use the data viewer. Data viewer still worked, if the index strings contain hashes or less-than/greather-than signs though. | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | 0 | 1 | 27,005 |
0 | 64,172,890 | 0 | 0 | 0 | 0 | 6 | false | 57 | 2016-08-15T14:07:00.000 | 1 | 10 | 0 | Dataframe not showing in Pycharm | 38,956,660 | 0.019997 | python,pandas,pycharm | As of 2020-10-02, using PyCharm 2020.1.4, I found that this issue also occurs if the DataFrame contains a column containing a tuple. | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | 0 | 1 | 27,005 |
0 | 38,966,174 | 0 | 1 | 0 | 0 | 1 | true | 1 | 2016-08-15T17:49:00.000 | 1 | 1 | 0 | Mutable indexed heterogeneous data structure? | 38,960,221 | 1.2 | python,pandas,tuples | You could use a dict of dicts instead of a dict of namedtuples. Dicts are mutable, so you'll be able to modify the inner dicts.
Given what you said in the comments about the structures of each DataFrame-1 and -2 being comparable, you could also group all of each into one big DataFrame, by adding a column to each DataFrame containing the value of sample_info_1 repeated across all rows, and likewise for sample_info_2. Then you could concat all the DataFrame-1s into a big one, and likewise for the DataFrame-2s, getting all your data into two DataFrames. (Depending on the structure of those DataFrames, you could even join them into one.) | Is there a data class or type in Python that matches these criteria?
I am trying to build an object that looks something like this:
ExperimentData
ID 1
sample_info_1: character string
sample_info_2: character string
Dataframe_1: pandas data frame
Dataframe_2: pandas data frame
ID 2
(etc.)
Right now, I am using a dict to hold the object ('ExperimentData'), which containsnamedtuple's for each ID. Each of the namedtuple's has a named field for the corresponding data attached to the sample. This allows me to keep all the ID's indexed, and have all of the fields under each ID indexed as well.
However, I need to update and/or replace the entries under each ID during downstream analysis. Since a tuple is immutable, this does not seem to be possible.
Is there a better implementation of this? | 0 | 1 | 68 |
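A minimal sketch (hypothetical column names) of the second suggestion above: tag each per-experiment frame with its metadata, then concatenate into one big DataFrame.

```python
import pandas as pd

df1_exp1 = pd.DataFrame({"value": [1, 2]})
df1_exp2 = pd.DataFrame({"value": [3, 4]})

df1_exp1["experiment_id"] = 1
df1_exp1["sample_info_1"] = "wild type"
df1_exp2["experiment_id"] = 2
df1_exp2["sample_info_1"] = "mutant"

all_df1 = pd.concat([df1_exp1, df1_exp2], ignore_index=True)
print(all_df1)
```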
0 | 45,422,652 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-08-15T22:35:00.000 | 0 | 2 | 0 | How to load images in S3 for a deep learning model with EC2 instance (GPU) | 38,964,041 | 0 | python,amazon-web-services,amazon-s3,amazon-ec2,keras | you can do it using Jupyter notebook otherwise use:
Duck for MAC, Putty for windows.
I hope it helps | I'm trying to train a Keras model on AWS GPU.
How would you load images (training data) in S3 for a deep learning model with EC2 instance (GPU)? | 0 | 1 | 764 |
0 | 39,922,584 | 0 | 0 | 0 | 0 | 1 | false | 22 | 2016-08-16T10:22:00.000 | 0 | 3 | 0 | Keras: How to use fit_generator with multiple outputs of different type | 38,972,380 | 0 | python,deep-learning,keras | The best way to achieve this seems to be to create a new generator class expanding the one provided by Keras that parses the data augmenting only the images and yielding all the outputs. | In a Keras model with the Functional API I need to call fit_generator to train on augmented images data using an ImageDataGenerator.
The problem is my model has two outputs: the mask I'm trying to predict and a binary value.
I obviously only want to augment the input and the mask output and not the binary value.
How can I achieve this? | 0 | 1 | 21,904 |
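A minimal sketch (hypothetical arrays X, Y_mask, Y_binary) of such a generator: using the same seed makes the image and mask augmentations identical, while the binary output is passed through untouched.

```python
from keras.preprocessing.image import ImageDataGenerator

def multi_output_generator(X, Y_mask, Y_binary, batch_size=32, seed=1):
    image_gen = ImageDataGenerator(rotation_range=10, horizontal_flip=True)
    mask_gen = ImageDataGenerator(rotation_range=10, horizontal_flip=True)

    image_flow = image_gen.flow(X, Y_binary, batch_size=batch_size, seed=seed)
    mask_flow = mask_gen.flow(Y_mask, batch_size=batch_size, seed=seed)

    while True:                                    # fit_generator expects an endless loop
        x_batch, binary_batch = next(image_flow)
        mask_batch = next(mask_flow)
        yield x_batch, [mask_batch, binary_batch]  # one array per model output
```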
0 | 38,976,616 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-08-16T13:35:00.000 | 0 | 1 | 0 | Initializing a very large pandas dataframe | 38,976,431 | 0 | python,pandas,numpy,large-data | Out of curiosity, is there a reason you want to use Pandas for this? Image analysis is typically handled in matrices making NumPy a clear favorite. If I'm not mistaken, both sk-learn and PIL/IMAGE use NumPy arrays to do their analysis and operations.
Another option: avoid the in-memory step! Do you need to access all 1K+ images at the same time? If not, and you're operating on each one individually, you can iterate over the files and perform your operations there. For an even more efficient step, break your files into lists of 200 or so images, then use Python's MultiProcessing capabilities to analyze in parallel.
JIC, do you have PIL or IMAGE installed, or sk-learn? Those packages have some nice image analysis algorithms already packaged in which may save you some time in not having to re-invent the wheel. | Background: I have a sequence of images. In each image, I map a single pixel to a number. Then I want to create a pandas dataframe where each pixel is in its own column and images are rows. The reason I want to do that is so that I can use things like forward fill.
Challenge: I have transformed each image into a one dimensional array of numbers, each of which is about 2 million entries and I have thousands of images. Simply doing pd.DataFrame(array) is very slow (testing it on a smaller number of images). Is there a faster solution for this? Other ideas how to do this efficiently are also welcome, but using non-core different libraries may be a challenge (corporate environment). | 0 | 1 | 310 |
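A minimal sketch of the chunked / parallel suggestion above; process_image is a hypothetical stand-in for the real per-file pixel-to-number mapping.

```python
import numpy as np
from multiprocessing import Pool

def process_image(path):
    return np.zeros(10)              # placeholder: load `path`, return its 1-D row

if __name__ == "__main__":
    image_paths = ["img_%04d.png" % i for i in range(1000)]
    pool = Pool(processes=4)
    rows = pool.map(process_image, image_paths, chunksize=200)
    pool.close()
    pool.join()
    result = np.vstack(rows)         # images as rows, pixels as columns
```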
0 | 38,980,686 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2016-08-16T16:57:00.000 | 0 | 1 | 0 | What does Random Forest do with unseen data? | 38,980,544 | 1.2 | python,machine-learning,scikit-learn,random-forest | They will be treated in the same manner as the minimal value already encountered in the training set. RF is just a bunch of voting decision trees, and (basic) DTs can only form decisions in form of "if feature X is > then T go left, otherwise go right". Consequently, if you fit it to data which, for a given feature, has only values in [0, inf], it will either not use this feature at all or use it in a form given above (as decision of form "if X is > than T", where T has to be from (0, inf) to make any sense for the training data). Consequently if you simply take your new data and change negative values to "0", the result will be identical. | When I built my random forest model using scikit learn in python, I set a condition (where clause in sql query) so that the training data only contain values whose value is greater than 0.
I am curious to know how random forest handles test data whose value is less than 0, which the random forest model has never seen before in the training data. | 0 | 1 | 709 |
0 | 38,984,364 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-08-16T20:39:00.000 | 3 | 1 | 0 | add training data to existing LinearSVC | 38,984,069 | 1.2 | python,machine-learning,scikit-learn | You cannot add data to SVM and achieve the same result as if you would add it to the original training set. You can either retrain with extended training set starting with the previous solution (should be faster) or train on new data only and completely diverge from the previous solution.
There are only few models that can do what you would like to achieve here - like for example Ridge Regression or Linear Discriminant Analysis (and their Kernelized - Kernel Ridge Regression or Kernel Fischer Discriminant, or "extreme"-counterparts - ELM or EEM), which have a property of being able to add new training data "on the fly". | I am scraping approximately 200,000 websites, looking for certain types of media posted on the websites of small businesses. I have a pickled linearSVC, which I've trained to predict the probability that a link found on a web page contains media of the type that I'm looking for, and it performs rather well (overall accuracy around 95%). However, I would like the scraper to periodically update the classifier with new data as it scrapes.
So my question is, if I have loaded a pickled sklearn LinearSVC, is there a way to add in new training data without re-training the whole model? Or do I have to load all of the previous training data, add the new data, and train an entirely new model? | 0 | 1 | 902 |
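The answer's point stands for LinearSVC itself; a common workaround (not mentioned above) is a linear SVM trained with SGD, which supports incremental updates via partial_fit. A minimal sketch with hypothetical data:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

X_old, y_old = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
X_new, y_new = np.random.rand(50, 20), np.random.randint(0, 2, 50)

clf = SGDClassifier(loss="hinge")               # hinge loss ~ linear SVM
clf.partial_fit(X_old, y_old, classes=[0, 1])   # initial pass
clf.partial_fit(X_new, y_new)                   # later, as newly scraped links arrive
```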
0 | 38,987,964 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2016-08-17T03:05:00.000 | 1 | 2 | 0 | How to detect ending location (x,y,z) of certain sequence in 3D domain | 38,987,464 | 0.099668 | python,algorithm,graph,analytics,d3dimage | One approach would be to choose a threshold density, convert all voxels below this threshold to 0 and all above it to 1, and then look for the pair of 1-voxels whose shortest path is longest among all pairs of 1-voxels. These two voxels should be near the ends of the longest "rope", regardless of the exact shape that rope takes.
You can define a graph where there is a vertex for each 1-voxel and an edge between each 1-voxel and its 6 (or possibly 14) neighbours. You can then compute the lengths of the shortest paths between some given vertex u and every other vertex in O(|V|) time and space using breadth first search (we don't need Dijkstra or Floyd-Warshall here since every edge has weight 1). Repeating this for each possible start vertex u gives an O(|V|^2)-time algorithm. As you do this, keep track of the furthest pair so far.
If your voxel space has w*h*d cells, there could be w*h*d vertices in the graph (if every single voxel is a 1-voxel), so this could take O(w^2*h^2*d^2) time in the worst case, which is probably quite a lot. Luckily there are many ways to speed this up if you can afford a slightly imprecise answer:
Only compute shortest paths from start vertices that are at the boundary -- i.e. those vertices that have fewer than 6 (or 14) neighbours. (I believe this won't sacrifice an optimal solution.)
Alternatively, first "skeletonise" the graph by repeatedly getting rid of all such boundary vertices whose removal will not disconnect the graph.
A good order for choosing starting vertices is to first choose any vertex, and then always choose a vertex that was found to be at maximum possible distance from the last one (and which has not yet been tried, of course). This should get you a very good approximation to the longest shortest path after just 3 iterations: the furthest vertex from the start vertex will be near one of the two rope ends, and the furthest vertex from that vertex will be near the other end!
Note: If there is no full-voxel gap between distant points on the rope that are near each other due to bending, then the shortest paths will "short-circuit" through these false connections and possibly reduce the accuracy. You might be able to ameliorate this by increasing the threshold. OTOH, if the threshold is too high then the rope can become disconnected. I expect you want to choose the highest threshold that results in only 1 connected component. | I have protein 3D creo-EM scan, such that it contains a chain which bends and twists around itself - and has in 3-dimension space 2 chain endings (like continuous rope). I need to detect (x,y,z) location within given cube space of two or possibly multiplier of 2 endings. Cube space of scan is presented by densities in each voxel (in range 0 till 1) provided by scanning EM microscope, such that "existing matter" gives values closer to 1, and "no matter" gives density values closer to 0. I need a method to detect protein "rope" edges (possible "rope ending" definition is lack of continuation in certain tangled direction. Intuitively, I think there could be at least 2 methods: 1) Certain method in graph theory (I can't specify precisely - if you know one - please name or describe it. 2) Derivatives from analytic algebra - but again I can't specify specific attitude - so please name or explain one. Please specify computation complexity of suggested method. My project is implemented in Python. Please help. Thanks in advance. | 0 | 1 | 97 |
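A minimal sketch of the thresholding plus double-sweep BFS heuristic above (6-connectivity; the density array is a toy stand-in for the real cryo-EM volume):

```python
from collections import deque
import numpy as np

def bfs_farthest(mask, start):
    """Return the 1-voxel farthest (in shortest-path steps) from `start`."""
    dist = {start: 0}
    queue = deque([start])
    farthest = start
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < mask.shape[i] for i in range(3)) and mask[n] and n not in dist:
                dist[n] = dist[(x, y, z)] + 1
                queue.append(n)
                if dist[n] > dist[farthest]:
                    farthest = n
    return farthest

density = np.zeros((20, 20, 20))
density[5, 5, 2:18] = 1.0            # toy straight "rope"
mask = density > 0.5                 # threshold chosen so the rope stays connected

start = tuple(np.argwhere(mask)[0])  # any 1-voxel
end_a = bfs_farthest(mask, start)    # near one rope end
end_b = bfs_farthest(mask, end_a)    # near the other rope end
print(end_a, end_b)
```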
0 | 46,249,521 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2016-08-17T06:49:00.000 | 0 | 2 | 0 | How to install scikit-learn | 38,989,896 | 0 | python,windows,scikit-learn | Old post, but right answer is,
'sudo pip install -U numpy matplotlib --upgrade' for python2 or 'sudo pip3 install -U numpy matplotlib --upgrade' for python3 | I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well.
How can I install these modules using the pip command? | 0 | 1 | 2,435 |
0 | 38,990,089 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2016-08-17T06:49:00.000 | -1 | 2 | 0 | How to install scikit-learn | 38,989,896 | -0.099668 | python,windows,scikit-learn | Using Python 3.4, I run the following from the command line:
c:\python34\python.exe -m pip install package_name
So you would substitute "numpy" and "matplotlib" for 'package_name' | I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well.
How can I install these modules using the pip command? | 0 | 1 | 2,435 |
0 | 38,994,681 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-08-17T08:36:00.000 | 0 | 1 | 0 | Scikit-learn and pyspark integration | 38,991,799 | 0 | python,apache-spark,scikit-learn,pyspark | The fact that you are using spark shouldn't hold you from using external python libraries.
You can import sklearn library in your spark-python code, and use sklearn logistic regression model with the saved pkl file. | I have trained a logistic regression model in sklearn and saved the model to .pkl files. Is there a method of using this pkl file from within spark? | 0 | 1 | 374 |
0 | 39,093,328 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-08-17T15:48:00.000 | 3 | 1 | 0 | In Keras, If samples_per_epoch is less than the 'end' of the generator when it (loops back on itself) will this negatively affect result? | 39,001,104 | 0.53705 | python,machine-learning,deep-learning,theano,keras | I'm dealing some something similar right now. I want to make my epochs shorter so I can record more information about the loss or adjust my learning rate more often.
Without diving into the code, I think the fact that .fit_generator works with the randomly augmented/shuffled data produced by the keras builtin ImageDataGenerator supports your suspicion that it doesn't reset the generator per epoch. So I believe you should be fine, as long as the model is exposed to your whole training set it shouldn't matter if some of it is trained in a separate epoch.
If you're still worried you could try writing a generator that randomly samples your training set. | I'm using Keras with Theano to train a basic logistic regression model.
Say I've got a training set of 1 million entries, it's too large for my system to use the standard model.fit() without blowing away memory.
I decide to use a python generator function and fit my model using model.fit_generator().
My generator function returns batch sized chunks of the 1M training examples (they come from a DB table, so I only pull enough records at a time to satisfy each batch request, keeping memory usage in check).
It's an endlessly looping generator, once it reaches the end of the 1 million, it loops and continues over the set
There is a mandatory argument in fit_generator() to specify samples_per_epoch. The documentation indicates
samples_per_epoch: integer, number of samples to process before going to the next epoch.
I'm assuming the fit_generator() doesn't reset the generator each time an epoch runs, hence the need for a infinitely running generator.
I typically set the samples_per_epoch to be the size of the training set the generator is looping over.
However, if samples_per_epoch this is smaller than the size of the training set the generator is working from and the nb_epoch > 1:
Will you get odd/adverse/unexpected training results, as it seems the epochs will have differing sets of training examples to fit to?
If so, do you 'fast-forward' your generator somehow? | 0 | 1 | 1,961
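A minimal sketch of the randomly sampling generator suggested at the end of the answer (fetch_rows is a hypothetical function, e.g. a parameterised DB query returning (X_batch, y_batch) for the given ids):

```python
import numpy as np

def random_batch_generator(n_samples, batch_size, fetch_rows):
    while True:                        # fit_generator never stops the generator itself
        idx = np.random.choice(n_samples, size=batch_size, replace=False)
        yield fetch_rows(idx)

# model.fit_generator(random_batch_generator(1000000, 128, fetch_rows),
#                     samples_per_epoch=1000000, nb_epoch=10)
```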
0 | 69,125,753 | 0 | 0 | 0 | 0 | 1 | false | 47 | 2016-08-18T00:56:00.000 | 0 | 3 | 0 | How to transform Dask.DataFrame to pd.DataFrame? | 39,008,391 | 0 | python,pandas,dask | MRocklin's answer is correct and this answer gives more details on when it's appropriate to convert from a Dask DataFrame to a Pandas DataFrame (and how to predict when it'll cause problems).
Each partition in a Dask DataFrame is a Pandas DataFrame. Running df.compute() will coalesce all the underlying partitions in the Dask DataFrame into a single Pandas DataFrame. That'll cause problems if the size of the Pandas DataFrame is bigger than the RAM on your machine.
If df has 30 GB of data and your computer has 16 GB of RAM, then df.compute() will blow up with a memory error. If df only has 1 GB of data, then you'll be fine.
You can run df.memory_usage(deep=True).sum() to compute the amount of memory that your DataFrame is using. This'll let you know if your DataFrame is sufficiently small to be coalesced into a single Pandas DataFrame.
Repartitioning changes the number of underlying partitions in a Dask DataFrame. df.repartition(1).partitions[0] is conceptually similar to df.compute().
Converting to a Pandas DataFrame is often practical after performing a big filtering operation. If you filter a 100 billion row dataset down to 10 thousand rows, then you can probably just switch to the Pandas API. | How can I transform my resulting dask.DataFrame into a pandas.DataFrame (let's say I am done with the heavy lifting and just want to apply sklearn to my aggregate result)? | 0 | 1 | 32,382
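A small sketch of the check-then-collect workflow described above; the CSV path and column name are placeholders:

```python
import dask.dataframe as dd
import pandas as pd

# Lazily read a (hypothetical) directory of CSV files with Dask.
ddf = dd.read_csv("data/*.csv")

# Filter first so the collected result is small enough to fit in RAM.
small = ddf[ddf["value"] > 100]

# Estimate the in-memory footprint before collecting (lazy in Dask, so compute it).
size_mb = small.memory_usage(deep=True).sum().compute() / 1e6
print("approximate size in MB:", size_mb)

# Coalesce all partitions into a single pandas.DataFrame.
pdf = small.compute()
assert isinstance(pdf, pd.DataFrame)
```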
0 | 39,018,076 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2016-08-18T12:15:00.000 | 2 | 1 | 0 | Deploy caffe regression model | 39,017,998 | 1.2 | python,neural-network,deep-learning,caffe,conv-neural-network | For deployment you only need to discard the loss layer, in your case the "EuclideanLoss" layer. The output of your net is the "bottom" you fed to the loss layer.
For a "SoftmaxWithLoss" layer (and "SigmoidCrossEntropy") you need to replace the loss layer with the corresponding activation layer, since the loss layer includes that extra layer inside it (for computational reasons). | I have trained a regression network with caffe. I use a "EuclideanLoss" layer in both the train and test phases. I have plotted these losses and the results look promising.
Now I want to deploy the model and use it. I know that if SoftmaxLoss is used, the final layer must be Softmax in the deploy file. What should this be in the case of Euclidean loss? | 0 | 1 | 594 |
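As an illustration, this is roughly how the trimmed deploy net could be used from Python; the file names and the input blob name "data" are assumptions about your prototxt:

```python
import numpy as np
import caffe

# deploy.prototxt is assumed to be the train prototxt with the EuclideanLoss layer removed.
net = caffe.Net("deploy.prototxt", "trained_model.caffemodel", caffe.TEST)

# Fill the input blob with one preprocessed sample (shape must match the prototxt).
net.blobs["data"].data[...] = np.random.rand(*net.blobs["data"].data.shape)

out = net.forward()
# The regression prediction is the blob that was fed as "bottom" to EuclideanLoss,
# typically the top of the last InnerProduct layer.
print(out)
```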
0 | 39,046,078 | 0 | 0 | 1 | 0 | 1 | true | 0 | 2016-08-19T18:42:00.000 | 0 | 1 | 0 | Tweepy import Error on HDFS running on Centos 7 | 39,045,825 | 1.2 | python-2.7,hadoop,hdfs,tweepy,centos7 | It looks like you're using Anaconda's Python to run your script, but you installed tweepy into CentOS's system installation of Python using pip. Either use conda to install tweepy, or use Anaconda's pip executable to install tweepy onto your Hadoop cluster. | I have a Hadoop cluster running on CentOS 7. I am running a program (sitting on HDFS) to extract tweets and I need to import tweepy for that. I did pip install tweepy as root on all the nodes of the cluster, but I still get an import error when I run the program.
Error says: ImportError: No module named tweepy
I am sure Tweepy is installed because, pip freeze | grep "tweepy" returns tweepy==3.5.0.
I created another file x.py with just one line import tweepy in the /tmp folder and that runs without an error. Error occurs only on HDFS.
Also, my default Python is Python 2.7.12, which I installed using Anaconda. Can someone help me with this issue? The same code runs without any such errors on another cluster running on CentOS 6.6. Is it an OS issue? Or do I have to look into the cluster? | 0 | 1 | 192
0 | 39,119,151 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-08-24T07:22:00.000 | 0 | 1 | 0 | python2.7 histogram comparison - white background anomaly | 39,116,877 | 0 | python,python-2.7,opencv,image-processing,histogram | You can remove the white color, rebin the histogram, and then compare:
Compute a histogram with 256 bins.
Remove the white bin (or make it zero).
Regroup the bins to have 64 bins by adding the values of 4 consecutive bins.
Perform the compareHist().
This would work for any "predominant color". To generalize, you can do the following:
Compare full histograms. If they are different, then finish.
If they are similar, look for the predominant color (with a 256-bin histogram), and perform the procedure described above to remove the predominant color from the comparison. | My program's purpose is to take 2 images and decide how similar they are.
I'm not talking here about identical images, but similarity. For example, if I take 2 screenshots of 2 different pages of the same website, their theme colors would probably be very similar, and therefore I want the program to declare that they are similar.
My problem starts when both images have a white background that pretty much takes over the histogram calculation (more than 30% of the image is white and the rest is distributed).
In that case, cv2.compareHist (using the correlation method, which works for the other cases) gives very bad results; that is, the score is very high even though the images look very different.
I have thought about taking the white (255) off the histogram before comparing, but that requires me to calculate the histogram with 256 bins, which is not good when I want to check similarity (I thought that using 32 or 64 bins would be best).
Unfortunately I can't add the images I'm working with due to legal reasons.
If anyone can help with an idea, or code that solves it, I would be very grateful.
Thank you very much. | 0 | 1 | 161
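A sketch of the answer's recipe for a single gray channel (per-channel or HSV histograms would follow the same pattern); the file names are placeholders and cv2.HISTCMP_CORREL is the OpenCV 3 name of the correlation method:

```python
import cv2
import numpy as np

def hist_without_white(gray_img, bins=64):
    """256-bin histogram with the white bin zeroed, regrouped to `bins` bins."""
    h = cv2.calcHist([gray_img], [0], None, [256], [0, 256]).flatten()
    h[255] = 0                                    # drop the dominant white background
    h = h.reshape(bins, 256 // bins).sum(axis=1)  # e.g. 4 consecutive bins -> 1
    h /= (h.sum() + 1e-9)                         # normalize so image size doesn't matter
    return h.astype(np.float32)

img1 = cv2.imread("page1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("page2.png", cv2.IMREAD_GRAYSCALE)

score = cv2.compareHist(hist_without_white(img1), hist_without_white(img2),
                        cv2.HISTCMP_CORREL)
print("similarity:", score)
```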
0 | 39,177,157 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2016-08-26T13:54:00.000 | 2 | 1 | 0 | Tensorflow: show or save forget gate values in LSTM | 39,168,025 | 0.379949 | python,neural-network,tensorflow,lstm | If you are using tf.rnn_cell.BasicLSTMCell, the variable you are looking for will have the following suffix in its name: <parent_variable_scope>/BasicLSTMCell/Linear/Matrix. This is a concatenated matrix for all four gates. Its first dimension matches the sum of the second dimensions of the input matrix and the state matrix (or the output of the cell, to be exact). The second dimension is 4 times the cell size.
The other complementary variable is <parent_variable_scope>/BasicLSTMCell/Linear/Bias, which is a vector of the same size as the second dimension of the above-mentioned tensor (for obvious reasons).
You can retrieve the parameters for the four gates by using tf.split() along dimension 1. The split matrices would be in the order [input], [new input], [forget], [output]. I am referring to the code here from rnn_cell.py.
Keep in mind that the variable represents the parameters of the Cell and not the output of the respective gates. But with the above info, I am sure you can get that too, if you so desire.
Edit:
Added more specific information about the actual tensors Matrix and Bias. | I am using the LSTM model that comes by default in tensorflow. I would like to know how to save or show the values of the forget gate at each step. Has anyone done this before, or at least something similar?
So far I have tried tf.print, but many values appear (even more than the ones I was expecting). I would try plotting something with TensorBoard, but I think those gates are just variables and not extra layers that I can print (also because they are inside the TF script).
Any help will be well received | 0 | 1 | 1,695 |
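A rough sketch of pulling those parameters out at runtime, assuming the pre-1.0 variable naming described above (check what your graph actually names its variables); the gate blocks are split with NumPy after evaluation:

```python
import numpy as np
import tensorflow as tf

# ... graph built with a BasicLSTMCell, then create a session and initialize variables ...
sess = tf.Session()
sess.run(tf.global_variables_initializer())

lstm_matrix = None
for v in tf.trainable_variables():
    # Name layout assumed from the answer; adjust to whatever v.name prints for you.
    if "BasicLSTMCell/Linear/Matrix" in v.name:
        lstm_matrix = sess.run(v)

if lstm_matrix is not None:
    # Columns are grouped as [input, new input, forget, output] gate parameters.
    i_w, j_w, f_w, o_w = np.split(lstm_matrix, 4, axis=1)
    print("forget-gate weight block shape:", f_w.shape)
```

Note that this retrieves the cell's parameters, not the gate activations, per the caveat above.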
0 | 39,174,418 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-08-26T18:23:00.000 | 0 | 1 | 0 | Python: How to interpolate errors using scipy interpolate.interp1d | 39,172,559 | 0 | python,scipy,interpolation | As long as you can assume that your errors represent one-sigma intervals of normal distributions, you can always generate synthetic datasets, resample and interpolate those, and compute the 1-sigma errors of the results.
Or just interpolate values+err and values-err, if all you need is a quick and dirty rough estimate. | I have a number of data sets, each containing x, y, and y_error values, and I'm simply trying to calculate the average value of y at each x across these data sets. However, the data sets are not quite the same length. I thought the best way to get them to an equal length would be to use scipy's interpolate.interp1d for each data set. However, I still need to be able to calculate the error on each of these averaged values, and I'm quite lost on how to accomplish that after doing an interpolation.
I'm pretty new to Python and coding in general, so I appreciate your help! | 0 | 1 | 403 |
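A quick-and-dirty sketch of that second suggestion with scipy.interpolate.interp1d; the sample numbers are made up:

```python
import numpy as np
from scipy.interpolate import interp1d

# One hypothetical data set: x, y, and one-sigma errors on y.
x = np.array([0.0, 1.0, 2.5, 4.0])
y = np.array([1.0, 2.0, 1.5, 3.0])
y_err = np.array([0.1, 0.2, 0.15, 0.3])

x_common = np.linspace(0.0, 4.0, 9)  # common grid shared by all data sets

y_common = interp1d(x, y)(x_common)
y_hi = interp1d(x, y + y_err)(x_common)
y_lo = interp1d(x, y - y_err)(x_common)

# Rough symmetric error band on the common grid; average y_common across data sets
# and propagate these per-set errors as usual.
y_err_common = (y_hi - y_lo) / 2.0
```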
0 | 39,229,405 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2016-08-30T01:45:00.000 | 0 | 1 | 0 | Convert Python dict files into MATLAB struct | 39,217,618 | 0 | python,matlab,dictionary,struct | So Python -> MATLAB is a bit tricky with dictionaries/structs, because the object MATLAB expects is a dictionary where each key maps to a single variable you want from Python as a simple data type (array, int, etc.). It doesn't handle nested dictionaries.
I recommend
1: Store each dictionary separately instead of as part of a higher level object.
or 2: convert the structs to individual variables, even though it is not very elegant.
MATLAB should be able to handle simple non-nested structures like that. | I have a function in Python that outputs a dict. I run this function from MATLAB and save the output to a parameter (say tmp), which is itself a dict of nested dicts. Now I want to convert this into a useful format such as a struct.
To elaborate: tmp is a dict. data = struct(tmp) is a structure, but the fields are other dicts. I tried to go through every field and convert it individually, but this is not very efficient.
Another option: I have the output saved in a JSON file and can load it into MATLAB. However, it is still not usable. | 0 | 1 | 1,329
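On the Python side, one way to follow recommendation 1/2 is to flatten the nested dict and save it with scipy.io.savemat, which MATLAB loads as a plain struct; the field names here are invented for illustration:

```python
import scipy.io as sio

# Hypothetical nested output of the Python function.
tmp = {"params": {"rate": 0.1, "iters": 100}, "scores": [0.8, 0.9, 0.95]}

def flatten(d, prefix=""):
    """Turn nested dicts into 'a_b'-style keys so MATLAB sees simple variables."""
    out = {}
    for k, v in d.items():
        key = k if not prefix else prefix + "_" + k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

# In MATLAB: data = load('tmp.mat'), then data.params_rate, data.scores, etc.
sio.savemat("tmp.mat", flatten(tmp))
```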
0 | 39,255,667 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2016-08-30T19:05:00.000 | 1 | 1 | 0 | In python apache beam, is it possible to write elements in a specific order? | 39,235,274 | 0.197375 | python,google-cloud-dataflow,apache-beam | While this isn't part of the base distribution, this is something you could implement by processing these elements and sorting them as part of a global window before writing out to a file, with the following caveats:
The entire contents of the window would need to fit in memory, or you would need to chunk up the file into smaller global windows.
If you are doing the second option, you'd need a strategy for writing the smaller windows to the file in order. | I'm using Beam to process time series data over overlapping windows. At the end of my pipeline I am writing each element to a file. Each element represents a CSV row, and one of the fields is the timestamp of the associated window. I would like to write the elements in order of that timestamp. Is there a way to do this using the Python Beam library? | 0 | 1 | 998
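A sketch of the global-window approach using the present-day apache_beam package (the original question targeted the older Dataflow Python SDK); beam.Create stands in for your real source, and the whole window contents are gathered into memory, per the caveats above:

```python
import apache_beam as beam

def sort_and_join(rows):
    # rows is the full list of (timestamp, csv_line) pairs for the global window.
    return "\n".join(line for _, line in sorted(rows))

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.Create([(2, "b,2"), (1, "a,1"), (3, "c,3")])  # stand-in source
     | "GatherAll" >> beam.combiners.ToList()
     | "SortAndJoin" >> beam.Map(sort_and_join)
     | "Write" >> beam.io.WriteToText("sorted_output", num_shards=1))
```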
0 | 40,301,138 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2016-08-31T12:26:00.000 | 1 | 2 | 0 | python 3D numpy array time index | 39,249,639 | 0.099668 | python,arrays,datetime,numpy,multidimensional-array | Posting the pseudo-solution I used:
The problem here is the lack of date-time indexing for 3d array data (e.g. satellite, radar). While there are time series functions in pandas, there are none for arrays (as far as I'm aware).
This solution was possible because the data files I use have date-time in the name e.g. '200401010000' is 'yyyymmddhhMM'.
Construct a 3d array with all the data (with missing times in places).
Using the list of data files (os.listdir), create a list of timestamps (its length matches the 3d array length).
Create dfa using the timestamps from (2) as its index, with a column 'inx' of running integers (range(0, len(array)), i.e. the index positions of the 3d array).
Create a datetime index using the data start and end times and the known frequency of the data (no missing datetimes). Create a new dfb using this as the index.
Left-merge dfa from (3) into dfb from (4). The merged frame now has an accurate datetime index and an 'inx' column containing the 3d array index positions, with NaNs at missing data.
Using this you can then resample the df, for example to 1 day, taking the min and max of 'inx'. This gives you the start and end positions for your array functions.
You can also insert arrays of NaNs at missing datetimes (i.e. where the 'inx' min/max is NaN) so that your 3d array matches the length of the actual datetimes.
Comment if you have Q's or if you know of a better solution / package to this problem. | Is there a way to index a 3 dimensional array using some form of time index (datetime etc.) on the 3rd dimension?
My problem is that I am doing time series analysis on several thousand radar images and I need to get, for example, monthly averages. However, if I simply average over every 31 arrays in the 3rd dimension it becomes inaccurate due to shorter months, missing data, etc. | 0 | 1 | 949
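A compressed sketch of steps (2)-(6) in pandas; the directory name, the 12-character timestamp slice of the file name, and the 5-minute frequency are all assumptions to adapt to your data:

```python
import os
import pandas as pd

data_dir = "radar_frames"                 # hypothetical directory of data files
files = sorted(os.listdir(data_dir))      # names like '200401010000.dat'

# (2) one timestamp per slice of the 3d array, parsed from the file names
stamps = pd.to_datetime([f[:12] for f in files], format="%Y%m%d%H%M")

# (3) dfa: datetime index -> position of that slice in the 3d array
dfa = pd.DataFrame({"inx": range(len(files))}, index=stamps)

# (4)+(5) reindex onto the complete datetime range; missing times become NaN
full_index = pd.date_range(stamps[0], stamps[-1], freq="5min")
dfb = dfa.reindex(full_index)

# (6) per-period start/end positions into the 3d array, e.g. for monthly means
monthly = dfb["inx"].resample("M").agg(["min", "max"])
print(monthly.head())
```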
0 | 56,981,228 | 0 | 1 | 0 | 0 | 1 | false | 8 | 2016-08-31T18:47:00.000 | 0 | 3 | 0 | Plotly + iPython Notebook - Plots Disappear on Reopen | 39,256,913 | 0 | python,ipython,jupyter-notebook,plotly | I also ran into this annoying issue. I found no way to make the plots reappear in the notebook, but a workaround is to display them on an HTML page via File -> Print Preview. | When I create a notebook with plotly plots, save and reopen the notebook, the plots fail to render upon reopening - there are just blank blocks where the plots should be. Is this expected behavior? If not, is there a known fix? | 0 | 1 | 1,438
0 | 39,273,086 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2016-09-01T13:34:00.000 | 1 | 2 | 0 | Suggestions to handle multiple python pandas scripts | 39,273,012 | 0.099668 | python,pandas | Instead of writing a CSV output which you have to re-parse, you can write and read the pandas.DataFrame in efficient binary format with the methods pandas.DataFrame.to_pickle() and pandas.read_pickle(), respectively. | I currently have several python pandas scripts that I keep separate because of 1) readability, and 2) sometimes I am interested in the output of these partial individual scripts.
However, generally, the CSV file output of one of these scripts is the CSV input of the next and in each I have to re-read datetimes which is inconvenient.
What best practices do you suggest for this task? Is it better to just combine all the scripts into one for when I'm interested in running the whole program or is there a more Python/Pandas way to deal with this?
thank you and I appreciate all your comments, | 0 | 1 | 66 |
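A minimal sketch of passing an intermediate DataFrame between scripts this way; the file name is arbitrary:

```python
import pandas as pd

# In the first script: persist the intermediate result with dtypes (incl. datetimes) intact.
df = pd.DataFrame({"ts": pd.to_datetime(["2016-01-01", "2016-01-02"]),
                   "value": [1.0, 2.5]})
df.to_pickle("step1_output.pkl")

# In the next script: no datetime re-parsing needed, dtypes come back as saved.
df2 = pd.read_pickle("step1_output.pkl")
print(df2.dtypes)
```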