Column                              Dtype          Min      Max
GUI and Desktop Applications        int64          0        1
A_Id                                int64          5.3k     72.5M
Networking and APIs                 int64          0        1
Python Basics and Environment       int64          0        1
Other                               int64          0        1
Database and SQL                    int64          0        1
Available Count                     int64          1        13
is_accepted                         bool           2 classes
Q_Score                             int64          0        1.72k
CreationDate                        stringlengths  23       23
Users Score                         int64          -11      327
AnswerCount                         int64          1        31
System Administration and DevOps    int64          0        1
Title                               stringlengths  15       149
Q_Id                                int64          5.14k    60M
Score                               float64        -1       1.2
Tags                                stringlengths  6        90
Answer                              stringlengths  18       5.54k
Question                            stringlengths  49       9.42k
Web Development                     int64          0        1
Data Science and Machine Learning   int64          1        1
ViewCount                           int64          7        3.27M
0
59,199,255
0
0
0
0
2
false
26
2015-09-25T03:54:00.000
-1
5
0
Why does matplotlib give the error []?
32,774,520
-0.039979
python,matplotlib
Had this problem. You just have to call the show() function to display the plot in a window: use pyplot.show().
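A minimal sketch of that fix, assuming a plain (non-interactive) script:

    import matplotlib.pyplot as plt

    plt.plot([1, 2, 3, 4])  # only returns a list of Line2D objects; nothing is drawn yet
    plt.show()              # opens the figure window and renders the plot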
I am using python 2.7.9 on win8. When I tried to plot using matplotlib, the following error showed up:

    from pylab import *
    plot([1,2,3,4])
    [matplotlib.lines.Line2D object at 0x0392A9D0]

I tried the test code "python simple_plot.py --verbose-helpful", and the following warning showed up:

    $HOME=C:\Users\XX
    matplotlib data path C:\Python27\lib\site-packages\matplotlib\mpl-data
    You have the following UNSUPPORTED LaTeX preamble customizations:
    Please do not ask for support with these customizations active.
    loaded rc file C:\Python27\lib\site-packages\matplotlib\mpl-data\matplotlibrc
    matplotlib version 1.4.3
    verbose.level helpful
    interactive is False
    platform is win32
    CACHEDIR=C:\Users\XX.matplotlib
    Using fontManager instance from C:\Users\XX.matplotlib\fontList.cache
    backend TkAgg version 8.5
    findfont: Matching :family=sans-serif:style=normal:variant=normal:weight=normal:stretch=normal:size=medium to Bitstream Vera Sans (u'C:\Python27\lib\site-packages\matplotlib\mpl-data\fonts\ttf\Vera.ttf') with score of 0.000000

What does this mean? How could I get matplotlib working?
0
1
61,800
0
62,611,119
0
0
0
0
2
false
26
2015-09-25T03:54:00.000
0
5
0
Why does matplotlib give the error []?
32,774,520
0
python,matplotlib
When you run plt.plot() in Spyder, you will now receive the following notification: "Figures now render in the Plots pane by default. To make them also appear inline in the Console, uncheck 'Mute Inline Plotting' under the Plots pane options menu." I followed this instruction, and it works.
I am using python 2.7.9 on win8. When I tried to plot using matplotlib, the following error showed up:

    from pylab import *
    plot([1,2,3,4])
    [matplotlib.lines.Line2D object at 0x0392A9D0]

I tried the test code "python simple_plot.py --verbose-helpful", and the following warning showed up:

    $HOME=C:\Users\XX
    matplotlib data path C:\Python27\lib\site-packages\matplotlib\mpl-data
    You have the following UNSUPPORTED LaTeX preamble customizations:
    Please do not ask for support with these customizations active.
    loaded rc file C:\Python27\lib\site-packages\matplotlib\mpl-data\matplotlibrc
    matplotlib version 1.4.3
    verbose.level helpful
    interactive is False
    platform is win32
    CACHEDIR=C:\Users\XX.matplotlib
    Using fontManager instance from C:\Users\XX.matplotlib\fontList.cache
    backend TkAgg version 8.5
    findfont: Matching :family=sans-serif:style=normal:variant=normal:weight=normal:stretch=normal:size=medium to Bitstream Vera Sans (u'C:\Python27\lib\site-packages\matplotlib\mpl-data\fonts\ttf\Vera.ttf') with score of 0.000000

What does this mean? How could I get matplotlib working?
0
1
61,800
0
32,776,321
0
0
0
0
1
true
0
2015-09-25T06:31:00.000
5
1
0
Why are NumPy and Pandas arrays consuming more memory than the source data?
32,776,134
1.2
python,numpy,pandas,bigdata
Memory consumption depends very much on the way data is stored. For example, 1 as a string takes only one byte, as an int it takes two bytes, and as a double it takes eight bytes. Then there is the overhead of creating it as an object in a DataFrame or Series. All this is done for efficient processing. As a general rule of thumb, the data representation in memory will be larger than in storage. BigData means data which is too large to fit in memory (or to process on a single machine). So it makes no sense to parse the whole data and load it in memory. All BigData processing engines depend on splitting the data into chunks and processing the chunks individually (and in parallel), then combining these intermediate results into one.
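The chunking idea in the last paragraph, as a minimal pandas sketch (the file name and column are hypothetical):

    import pandas as pd

    total = 0.0
    count = 0
    # stream the file in 1-million-row chunks instead of loading it whole
    for chunk in pd.read_csv("big.csv", chunksize=1000000):
        total += chunk["value"].sum()   # intermediate result per chunk
        count += len(chunk)

    mean = total / count                # combine the intermediates into one answer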
I am new to bigdata and I want to parse the whole data, so I can't split it. When I try to use a numpy array for processing 1 GB of data, it takes 4 GB of memory (in my real use case I am dealing with huge data). Is there any optimized way to use these arrays for this much data, or any special function to handle huge data?
0
1
1,066
0
32,809,283
0
1
0
0
1
true
4
2015-09-27T13:58:00.000
6
1
0
Storing a Random state
32,808,686
1.2
python,random
You can save the state of the PRNG using random.getstate() (then, e.g., use pickle to save it to disk). Later, random.setstate(state) will return your PRNG to exactly the state it was in.
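A small sketch of that round trip (the file name is arbitrary):

    import pickle
    import random

    random.seed(42)
    state = random.getstate()            # capture the PRNG state

    with open("rng_state.pkl", "wb") as f:
        pickle.dump(state, f)            # persist it alongside the XML save

    expected = random.random()

    with open("rng_state.pkl", "rb") as f:
        random.setstate(pickle.load(f))  # restore the exact state

    assert random.random() == expected   # the sequence resumes identically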
I'm designing a program which: Includes randomness Can stop executing and save its state at certain points (in XML) Can start executing starting from a saved state Is deterministic (so the program can run from the same state twice and produces the same result) The problem here is saving the randomness. I can initialize it at start, but from state to state I may generate anywhere from 0 to 1000 random numbers. Therefore, I have 3 options I can see: Store the seed, and number of times a number has been randomly generated, then when loading the state, run the random number generator that many times. On state save, increment the seed by N On state save, randomly generate the next seed The problem with option 1 is the run time, and is pretty infeasible. However, I'm unsure whether 2 or 3 will produce good random results. If I run two random generators, one seeded with X, the other seeded with X+1, how different will their results be? What if the first is seeded with X, and the second is seeded with X.random()? In case it makes a difference, I'm using Python 3.
0
1
2,249
0
32,859,613
0
1
0
0
1
true
0
2015-09-30T06:41:00.000
0
1
0
How to combine multiple feature sets in bag of words
32,859,460
1.2
python-2.7,machine-learning,scikit-learn,text-mining,text-classification
You can train individual classifiers for descriptions and components, and obtain a final score using score = w1 * score_descriptions + w2 * score_components. The values of w1 and w2 should be obtained using cross-validation. Alternatively, you can train a single multiclass classifier by combining the training dataset. You will now have 4 classes: neither 'descriptions' nor 'components'; 'descriptions' but not 'components'; not 'descriptions' but 'components'; both 'descriptions' and 'components'. And you can go ahead and train as usual.
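A sketch of the first suggestion with scikit-learn; the feature matrices here are random stand-ins, and w1/w2 would really come from cross-validation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)
    X_desc, X_comp = rng.rand(100, 20), rng.rand(100, 5)  # two feature sets
    y = rng.randint(2, size=100)

    clf_desc = LogisticRegression().fit(X_desc, y)        # classifier on descriptions
    clf_comp = LogisticRegression().fit(X_comp, y)        # classifier on components

    w1, w2 = 0.7, 0.3                                     # tune via cross-validation
    proba = w1 * clf_desc.predict_proba(X_desc) + w2 * clf_comp.predict_proba(X_comp)
    pred = proba.argmax(axis=1)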
I have text classification data with predictions depending on two categories, 'descriptions' and 'components'. I could do the classification using bag of words in python with scikit on 'descriptions'. But I want to get predictions using both categories in bag of words, with weights on the individual feature sets: x = descriptions + 2*components. How should I proceed?
0
1
713
0
33,103,479
0
0
0
0
1
false
0
2015-09-30T14:56:00.000
0
1
0
Generate orphan mesh in abaqus python
32,869,355
0
python,mesh,abaqus,orphan
In Abaqus you can only edit native meshes. In this case, as you said, you have an orphan mesh. The only way to edit this kind of mesh is to do it yourself with an external script.
I am trying to generate an orphan mesh on a part with python. I have already defined the nodes by using code given by Tim in another post. However, with the following command: ListElem.append(myTrabPart.Element(nodes=tup, elemShape=HEX8)) I ended up with the message "there is no mesh to edit". It seems that ListElem is empty in my case. The list lengths are correct. Do you have any advice which could help me? Thanks, Romain
0
1
424
0
32,900,703
0
0
0
0
1
false
2
2015-10-01T05:58:00.000
0
2
0
PySpark - Combining Session Data without Explicit Session Key / Iterating over All Rows
32,880,370
0
python,apache-spark,pyspark,mapreduce,apache-spark-sql
Zero323's solution works great but wanted to post an rdd implementation as well. I think this will be helpful for people trying to translate streaming MapReduce to pyspark. My implementation basically maps keys (individuals in this case) to a list of lists for the streaming values that would associate with that key (areas and times) and then iterates over the list to satisfy the iterative component - and the rest is just normal reducing by keys and mapping.

    from pyspark import SparkContext, SparkFiles, SparkConf
    from datetime import datetime

    conf = SparkConf()
    sc = SparkContext(conf=conf)

    rdd = sc.parallelize(["IndividualX|AreaQ|1/7/2015 0:00",
                          "IndividualX|AreaQ|1/7/2015 1:00",
                          "IndividualX|AreaW|1/7/2015 3:00",
                          "IndividualX|AreaQ|1/7/2015 4:00",
                          "IndividualY|AreaZ|2/7/2015 4:00",
                          "IndividualY|AreaZ|2/7/2015 5:00",
                          "IndividualY|AreaW|2/7/2015 6:00",
                          "IndividualY|AreaT|2/7/2015 7:00"])

    def splitReduce(x):
        y = x.split('|')
        return (str(y[0]), [[str(y[2]), str(y[1])]])

    def resultSet(x):
        processlist = sorted(x[1], key=lambda x: x[0])
        result = []
        start_area = processlist[0][1]
        start_date = datetime.strptime(processlist[0][0], '%d/%m/%Y %H:%M')
        dur = 0
        if len(processlist) > 1:
            for datearea in processlist[1::]:
                end_date = datetime.strptime(datearea[0], '%d/%m/%Y %H:%M')
                end_area = datearea[1]
                dur = (end_date - start_date).total_seconds() / 60
                if start_area != end_area:
                    result.append([start_area, start_date, end_date, dur])
                    start_date = datetime.strptime(datearea[0], '%d/%m/%Y %H:%M')
                    start_area = datearea[1]
                    dur = 0
        return (x[0], result)

    def finalOut(x):
        return str(x[0]) + '|' + str(x[1][0]) + '|' + str(x[1][1]) + '|' + str(x[1][2]) + '|' + str(x[1][3])

    footfall = rdd\
        .map(lambda x: splitReduce(x))\
        .reduceByKey(lambda a, b: a + b)\
        .map(lambda x: resultSet(x))\
        .flatMapValues(lambda x: x)\
        .map(lambda x: finalOut(x))\
        .collect()

    print footfall

Provides output of:

    ['IndividualX|AreaQ|2015-07-01 00:00:00|2015-07-01 03:00:00|180.0',
     'IndividualX|AreaW|2015-07-01 03:00:00|2015-07-01 04:00:00|60.0',
     'IndividualY|AreaZ|2015-07-02 04:00:00|2015-07-02 06:00:00|120.0',
     'IndividualY|AreaW|2015-07-02 06:00:00|2015-07-02 07:00:00|60.0']
I am trying to aggregate session data without a true session "key" in PySpark. I have data where an individual is detected in an area at a specific time, and I want to aggregate that into a duration spent in each area during a specific visit (see below). The tricky part here is that I want to infer the time someone exits each area as the time they are detected in the next area. This means that I will need to use the start time of the next area ID as the end time for any given area ID. Area IDs can also show up more than once for the same individual. I had an implementation of this in MapReduce where I iterate over all rows and aggregate the time until a new AreaID or Individual is detected, then output the record. Is there a way to do something similar in Spark? Is there a better way to approach the problem? Also of note, I do not want to output a record unless the individual has been detected in another area (e.g. IndividualY, AreaT below). I have a dataset in the following format:

    Individual   AreaID  Datetime of Detection
    IndividualX  AreaQ   1/7/2015 0:00
    IndividualX  AreaQ   1/7/2015 1:00
    IndividualX  AreaW   1/7/2015 3:00
    IndividualX  AreaQ   1/7/2015 4:00
    IndividualY  AreaZ   2/7/2015 4:00
    IndividualY  AreaZ   2/7/2015 5:00
    IndividualY  AreaW   2/7/2015 6:00
    IndividualY  AreaT   2/7/2015 7:00

I would like the desired output of:

    Individual   AreaID  Start_Time     End_Time       Duration (minutes)
    IndividualX  AreaQ   1/7/2015 0:00  1/7/2015 3:00  180
    IndividualX  AreaW   1/7/2015 3:00  1/7/2015 4:00  60
    IndividualY  AreaZ   2/7/2015 4:00  2/7/2015 6:00  120
    IndividualY  AreaW   2/7/2015 6:00  2/7/2015 7:00  60
0
1
445
0
32,909,946
0
0
0
0
1
false
2
2015-10-02T13:14:00.000
5
1
0
Caffe: train, validation and test split
32,908,025
0.761594
python,machine-learning,neural-network,caffe,conv-neural-network
Differentiating between validation and testing is meant to imply that hyperparameters may be tuned to the validation set, while nothing is fitted to the test set in any way. caffe doesn't optimize anything but the weights, and since the test is only there for evaluation, it does exactly as expected, assuming you're tuning hyperparameters between solver optimization runs. The lmdb passed to caffe for testing is really the validation set. If you're done tuning your hyperparameters, do one more solver optimization with an lmdb for testing that holds data never used in previous runs; that last lmdb is your test set. Since caffe doesn't optimize hyperparameters, its test set is what it is, a test set. It's possible to come up with some python code around the solver optimization calls that iterates through hyperparameter values. After it's done, it can swap in a new lmdb with unseen data to tell you how well the network generalizes. I don't recommend modifying caffe for an explicit val/test distinction. You don't even have to do anything elaborate with setting up the prototxt file for the solver and network definition. You can do the val/test swap at the end by simply moving the val lmdb somewhere else and moving the test lmdb in its place using shutil.copy(src, dst).
I've been using caffe for a while, with some success, but I have noticed in examples given that there is only ever a two way split on the data set with TRAIN and TEST phases, where the TEST set seems to act as a validation set. Ideally I would like to have three sets, so that once the model is trained, I can save it and test it on a completely new test set - stored in a completed separate lmdb folder. Does anyone have any experience of this? Thanks.
0
1
4,078
0
33,026,758
0
0
0
0
1
false
1
2015-10-02T17:23:00.000
0
1
0
inserting training instances to scikit-learn dataset
32,912,567
0
python-2.7,numpy,scipy,scikit-learn
FIRST: I'm guessing the reason sparse data is giving a different answer than the same data converted to dense is that my sparse representation was starting feature indices from one rather than zero (because the oll library that I used previously required it). So my first column was all zeros; when converted to dense it was not preserved, and that's the reason for slightly better results when using the dense representation. SECOND: adding new rows to a sparse matrix at that scale is not efficient, not even if you reserve a large matrix at the beginning (with padded zeros) to replace later. This can be because of the structure the sparse matrix is stored in (it uses three arrays; in the case of CSR, one for row pointers, one for the non-zero column indices in each row, and one for the values themselves; check the documentation). SOLUTION: the best way I found is to use dense representations from the beginning (if that's an option, of course). Collect all the instances that need to be added to the training set. Instantiate a new matrix to the size of the aggregated data and then start adding instances "randomly", both from the last training set and from the new instances that you want to add. To make it random, I generate a sorted list of random positions that tell me when I should add data from the new instances and otherwise copy from the older ones.
I have a dataset of 15M+ training instances in the form of an svmlight dataset. I read these data using sklearn.datasets.load_svmlight_file(). The data itself is not sparse, so I don't mind converting it to any other dense representation (I would prefer that). At some point in my program I need to add millions of new data records (instances) to my training data (in random positions). I used vstack and also tried converting to dense matrices, but it was either inefficient or failed to give correct results (details below). Is there any way to do this task efficiently? I'm implementing the DAgger algorithm, and in the first iteration it is trying to add about 7M new training instances. I want to add these new instances in random positions. I tried vstack (given my data was in csr format, I was expecting it not to be too inefficient, at least). However, after 24 hours it's not done yet. I tried converting my data to numpy.matrix format just after loading it in svmlight format. A sampling showed it can help me speed things up, but interestingly the results I get from training on the converted dataset and the original dataset do not seem to match. It appears sklearn does not work with numpy matrix in the way I thought; I couldn't find anything in the sklearn documentation. Another approach I thought of was to define a larger dataset from the beginning so that it will "reserve" enough space in memory, and when I'm using sklearn train or test features I'll index my dataset up to the last "true" record. In this way, I presume, vstack will not require opening up a new large space in memory, which can make the whole operation take longer. Any thoughts on this?
0
1
162
0
33,177,976
0
0
0
0
1
false
3
2015-10-04T17:18:00.000
3
1
0
How do you free up gpu memory?
32,936,166
0.53705
python-2.7,gpu,gpgpu,theano
If borrow is set to true, garbage collection is on (default true: config.allow_gc=True), and the video card is not currently being used as a display device (doubtful, since you're using a mobile gpu), the only other options are to reduce the parameters of the network or possibly the batch size of the model. The latter will be especially effective if the model uses dropout or noise-based masks (these will be equal to the number of examples in the batch x the number of parameters dropped out or noised). Otherwise maybe you could boot to the command prompt to save a few MBs? :/
When running theano, I get an error: not enough memory. See below. What are some possible actions that can be taken to free up memory? I know I can close applications etc., but I just want to see if anyone has other ideas. For example, is it possible to reserve memory?

    THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python conv_exp.py
    Using gpu device 0: GeForce GT 650M
    Trying to run under a GPU. If this is not desired, then modify network3.py to set the GPU flag to False.
    Error allocating 156800000 bytes of device memory (out of memory). Driver report 64192512 bytes free and 1073414144 bytes total
    Traceback (most recent call last):
      File "conv_exp.py", line 25, in <module>
        training_data, validation_data, test_data = network3.load_data_shared()
      File "/Users/xr/courses/deep_learning/con_nn/neural-networks-and-deep-learning/src/network3.py", line 78, in load_data_shared
        return [shared(training_data), shared(validation_data), shared(test_data)]
      File "/Users/xr/courses/deep_learning/con_nn/neural-networks-and-deep-learning/src/network3.py", line 74, in shared
        np.asarray(data[0], dtype=theano.config.floatX), borrow=True)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/theano/compile/sharedvalue.py", line 208, in shared
        allow_downcast=allow_downcast, **kwargs)
      File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/theano/sandbox/cuda/var.py", line 203, in float32_shared_constructor
        deviceval = type_support_filter(value, type.broadcastable, False, None)
    MemoryError: ('Error allocating 156800000 bytes of device memory (out of memory).', "you might consider using 'theano.shared(..., borrow=True)'")
0
1
3,994
0
71,779,306
0
0
0
0
1
false
4
2015-10-04T21:15:00.000
0
2
0
DBSCAN (with metric only) in scikit-learn
32,938,494
0
python,scikit-learn,cluster-analysis,data-mining,dbscan
I wrote my own distance code following the top answer, and just as it says, it was extremely slow; the built-in distance code was much better. I'm still wondering how to speed it up.
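For reference, scikit-learn's DBSCAN does accept a callable metric (with the speed caveat above); a minimal sketch:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def my_dist(a, b):
        return np.abs(a - b).sum()   # any symmetric distance over two sample vectors

    X = np.random.rand(200, 3)
    labels = DBSCAN(eps=0.3, min_samples=5, metric=my_dist).fit_predict(X)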
I have objects and a distance function, and want to cluster these using the DBSCAN method in scikit-learn. My objects don't have a representation in Euclidean space. I know that it is possible to use a precomputed metric, but in my case it's very impractical, due to the large size of the distance matrix. Is there any way to overcome this in scikit-learn? Maybe there are other python implementations of DBSCAN that can do so?
0
1
6,418
0
64,074,702
0
0
0
0
1
false
6
2015-10-07T00:50:00.000
0
7
0
Which columns are binary in a Pandas DataFrame?
32,982,034
0
python,numpy,pandas
You can just use the unique() function from pandas on each column in your dataset, e.g. df["colname"].unique(). This will return an array of all unique values in the specified column. You can also use a for loop (or a comprehension) to traverse all the columns in the dataset, e.g. [df[cols].unique() for cols in df].
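Building on that, a short sketch that flags columns whose unique values are a subset of {0, 1}:

    import pandas as pd

    df = pd.DataFrame({"a": [0, 1, 1], "b": [0.5, 1.0, 2.0], "c": [1, 0, 0]})

    binary_cols = [col for col in df.columns
                   if set(df[col].unique()) <= {0, 1}]
    print(binary_cols)   # ['a', 'c']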
I have a pandas dataframe with a large number of columns and I need to find which columns are binary (with values 0 or 1 only) without looking at the data. Which function should be used?
0
1
9,639
0
32,994,584
0
1
0
0
1
false
0
2015-10-07T09:18:00.000
1
1
0
NLP - Find which Verb is talking about the Noun in a sentence
32,988,413
0.197375
python,nlp,nltk
That's a good suggestion; I will try it with anaphora too. For now, my problem is solved by the concept of noun phrases & verb phrases. I extracted clause(s) from the sentence, identified verbs & nouns in each, and related them through an iterative technique. Thank you for the help.
Given a sentence, using python NLTK how can I know which verb is talking about which noun? E.g.: Cat sat on the mat. Here "sat (verb)" is talking about "Cat (noun)". Consider a complex sentence which has more nouns & verbs. Thank you.
0
1
1,077
0
33,027,650
0
1
0
0
1
true
0
2015-10-08T22:37:00.000
2
1
0
What's the purpose of Series instead of lists in Pandas and Python?
33,027,086
1.2
python,pandas
This isn't going to be a very complete answer, but hopefully it is an intuitive "general" answer. Pandas doesn't use a list as the "core" unit that makes up a DataFrame because Series objects make assumptions that lists do not. A list in python makes very few assumptions about what is inside; it could be pretty much anything, which makes it great as a core component of python. However, if you want to build a more specialized package that gives you extra functionality, like Pandas, then you want to create your own "core" data object and start building extra functionality on top of that. Compared with lists, you can do a lot more with a custom Series object (as witnessed by pulling a single column from a DataFrame and seeing what methods are available on the output).
Why doesn't Pandas build DataFrames directly from lists? Why was such a thing as a series created in the first place? Or: If the data in a DataFrame is actually stored in memory as a collection of Series, why not just use a collection of lists? Yet another way to ask the same question: what's the purpose of Series over lists?
0
1
221
0
33,040,012
0
0
0
0
1
true
1
2015-10-09T13:43:00.000
2
1
0
MiniBatchKMeans Python
33,039,884
1.2
python,machine-learning,scikit-learn,cluster-computing
The batch size is defined by batch_size, period. Furthermore, you can define init_size, which is the number of samples taken to initialize the process; by default it is 3 * batch_size. You can simply set batch_size=100 and init_size=10, and then 10 samples are used to perform the initialization (k-means is not globally convergent; there are many techniques to deal with this at the initialization stage) and later on batches of 100 will be used during the algorithm's execution.
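A runnable sketch of setting the two independently (the values here are arbitrary; init_size just has to be at least n_clusters):

    from sklearn.cluster import MiniBatchKMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=10000, centers=5, random_state=0)

    # init_size samples are drawn once, for initialization only;
    # batch_size samples are drawn at each iteration afterwards
    mbk = MiniBatchKMeans(n_clusters=5, batch_size=100, init_size=500,
                          random_state=0).fit(X)
    print(mbk.cluster_centers_.shape)   # (5, 2)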
I am using the function MiniBatchKMeans() from scikit-learn. Well, in its documentation there is:

    batch_size : int, optional, default: 100
        Size of the mini batches.
    init_size : int, optional, default: 3 * batch_size
        Number of samples to randomly sample for speeding up the initialization
        (sometimes at the expense of accuracy): the only algorithm is initialized
        by running a batch KMeans on a random subset of the data. This needs to
        be larger than n_clusters.

I didn't understand it very well, because it seems that the final dimension of the mini batch is 3 * batch_size and not the one specified by the batch_size argument. Am I misunderstanding something? If so, can someone explain those two arguments? If I am right, why are there these two arguments, since they seem to be redundant? Thanks!!!
0
1
879
0
33,043,867
0
0
0
0
2
false
0
2015-10-09T17:06:00.000
1
2
0
How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive?
33,043,704
0.099668
python,hadoop,hive,hbase,bigdata
If it's already in CSV or any format on the linux file system that Pig can understand, just do a hadoop fs -copyFromLocal to HDFS. If you want to read/process the raw H5 file format using Python on HDFS, look at hadoop-streaming (map/reduce). Python can handle 2GB on a decent linux system - not sure if you need hadoop for it.
I have downloaded a subset of the million song dataset which is about 2GB. However, the data is broken down into folders and sub folders. In the sub-folders they are all in 'H5 file' format. I understand it can be read using Python. But I do not know how to extract and load them into HDFS so I can run some data analysis in Pig. Do I extract them as CSV and load to Hbase or Hive? It would help if someone can point me to the right resource.
0
1
726
0
50,411,499
0
0
0
0
2
false
0
2015-10-09T17:06:00.000
0
2
0
How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive?
33,043,704
0
python,hadoop,hive,hbase,bigdata
Don't load that amount of small files into HDFS. Hadoop doesn't handle lots of small files well. Each small file will incur overhead, because the block size (usually 64MB) is much bigger. I want to do it myself, so I'm thinking of solutions. The million song dataset files don't have more than 1MB. My approach would be to aggregate the data somehow before importing it into HDFS. The blog post "The Small Files Problem" from Cloudera may shed some light.
I have downloaded a subset of the million song dataset which is about 2GB. However, the data is broken down into folders and sub folders. In the sub-folders they are all in 'H5 file' format. I understand it can be read using Python. But I do not know how to extract and load them into HDFS so I can run some data analysis in Pig. Do I extract them as CSV and load to Hbase or Hive? It would help if someone can point me to the right resource.
0
1
726
0
67,288,557
0
0
0
0
1
false
1
2015-10-10T13:51:00.000
0
3
0
Subtracting Background From Image using Opencv in Python
33,054,711
0
python-2.7,opencv
replace foreground = np.absolute(frame - background) with foreground = cv2.absdiff(frame, background)
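A short sketch of why this matters: with uint8 frames, plain subtraction wraps around instead of producing an absolute difference:

    import cv2
    import numpy as np

    frame = np.array([[10]], dtype=np.uint8)
    background = np.array([[20]], dtype=np.uint8)

    print(np.absolute(frame - background))   # [[246]] -- uint8 wrap-around
    print(cv2.absdiff(frame, background))    # [[10]]  -- true absolute difference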
The following program displays 'foreground' completely black and not 'frame'. I also checked that all the values in 'frame' are equal to the values in 'foreground'. They have the same channels, data type etc. I am using python 2.7.6 and OpenCV version 2.4.8

    import cv2
    import numpy as np

    def subtractBackground(frame, background):
        foreground = np.absolute(frame - background)
        foreground = foreground >= 0
        foreground = foreground.astype(int)
        foreground = foreground * frame
        cv2.imshow("foreground", foreground)
        return foreground

    def main():
        cap = cv2.VideoCapture(0)
        dump, background = cap.read()
        while cap.isOpened():
            dump, frame = cap.read()
            frameCopy = subtractBackground(frame, background)
            cv2.imshow('Live', frame)
            k = cv2.waitKey(10)
            if k == 32:
                break

    if __name__ == '__main__':
        main()
0
1
3,991
0
38,753,037
0
1
0
0
1
false
0
2015-10-10T15:35:00.000
0
1
0
Cannot connect to Jupyter Notebook server in Azure HDInsight
33,055,691
0
python,azure,apache-spark,azure-hdinsight,jupyter
Just saw this question way too late, but I will venture that you are using an unsupported browser. Please use Chrome to connect to Jupyter.
I am trying to run a Python module using a Jupyter Notebook on Azure HDInsight, but I continue to get the following error message: A connection to the notebook server could not be established. The notebook will continue trying to reconnect, but until it does, you will NOT be able to run code. Check your network connection or notebook server configuration. I have an Azure subscription, created a cluster, created a storage blob, and have created a Jupyter Notebook. I am successfully logged into the cluster, so I am not sure why I cannot connect to the notebook. Any insight into this problem would be hugely appreciated.
0
1
2,411
0
40,980,117
0
1
0
0
1
false
0
2015-10-12T12:10:00.000
0
3
0
importing csv file in python
33,080,794
0
python,csv,python-3.x
First of all, at the top of the code, do import csv. After that you need to set a variable name so you can open the CSV file, for example data = open('CSV name', 'rt'). You will need to fill in where it says CSV name. That's how you open it. To read the CSV file, you set another variable, for example data2 = csv.reader(data).
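Putting those pieces together, a minimal sketch (the file name data.csv and its layout are assumptions):

    import csv

    with open('data.csv', 'rt') as data:
        reader = csv.DictReader(data)   # uses the first row as the variable names
        rows = list(reader)             # each row becomes a dict keyed by variable

    print(rows[0])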
I want to import CSV files in a python script. Column and row numbers are not fixed; the first row contains the names of the variables and the next rows are the values of those variables. I am new to Python; any help is appreciated. Thanks.
0
1
4,532
0
33,101,046
0
0
0
0
1
false
0
2015-10-12T13:19:00.000
0
1
0
OpenCV - using digital cameras.
33,082,220
0
python,opencv
Install Drivers for required camera, connect it, and use cv2.VideoCapture(int). Here, instead of 0, use a different integer according to the camera. By default, 0 is for the inbuilt webcam. e.g.: cv2.VideoCapture(1)
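A small usage sketch around that call (the device index depends on your setup):

    import cv2

    cap = cv2.VideoCapture(1)   # 0 = built-in webcam, 1 = first external camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()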
The quality of video recording that is required for our project is not met by webcams. Is it possible to use high-megapixel digital cameras (Sony, Canon, Olympus) with OpenCV? How do you talk to digital cameras using OpenCV (and specifically using Python)?
0
1
1,805
0
33,087,239
0
0
0
0
1
true
3
2015-10-12T17:16:00.000
3
1
0
print a pandas dataframe to text with lines longer than 80 chars
33,086,758
1.2
python,pandas
Try changing pandas.options.display.width. (It's 80 by default)
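For example:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.rand(4, 12))

    pd.options.display.width = 200   # allow lines longer than the default 80 chars
    print(df)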
I want to print a DataFrame to a text file. Say I have a table with 4 lines and 12 columns. It looks quite nice when I just use print df, with all the values of a column aligned to the right. However, when there are too many columns (8 in my case) it breaks the table down so that the last 4 columns are printed after 4 lines of 8 values, probably as Pandas tries to make the table fit in an 80-char line. I tried df.to_csv().replace(',','\t'), but then entries longer than a tab cause a jump in the line, and the lines are no longer aligned. How can I get the nice, orderly, aligned-to-the-right fashion, without enforcing 80 characters per line?
0
1
259
0
33,094,494
0
0
0
0
2
false
3
2015-10-13T00:55:00.000
6
2
0
What is meant by PCA preserving only large pairwise distances?
33,092,493
1
python,matplotlib,machine-learning,visualization,pca
Don't confuse PCA with dimensionality reduction. PCA is a rotation transformation that aligns the data with the axes in such a way that the first dimension has maximum variance, the second maximum variance among the remainder, etc. Rotations preserve pairwise distances. When you use PCA for dimensionality reduction, you discard dimensions of your rotated data that have the least variance. High variance is achieved when points are spread far from the mean. Low-variance dimensions are those, in which the values are mostly the same, so their absence is presumed to have the least effect on pairwise distances.
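A quick numerical check of the rotation claim: keep all components and pairwise distances survive the transform (up to floating-point error); drop components and only an approximation remains:

    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.decomposition import PCA

    X = np.random.rand(50, 5)

    Z_full = PCA(n_components=5).fit_transform(X)   # pure rotation (plus centering)
    print(np.allclose(pdist(X), pdist(Z_full)))     # True: distances preserved

    Z_2d = PCA(n_components=2).fit_transform(X)     # dimensionality reduction
    print(np.allclose(pdist(X), pdist(Z_2d)))       # False: distances distorted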
I am currently reading up on the t-SNE visualization technique, and it was mentioned that one of the drawbacks of using PCA for visualizing high-dimensional data is that it only preserves large pairwise distances between the points, meaning points which are far apart in high dimension would also appear far apart in low dimensions, but all other point distances get screwed up. Could someone help me understand why that is, and what it means graphically? Thanks a lot!
0
1
1,245
0
66,142,281
0
0
0
0
2
false
3
2015-10-13T00:55:00.000
0
2
0
What is meant by PCA preserving only large pairwise distances?
33,092,493
0
python,matplotlib,machine-learning,visualization,pca
If I can re-phrase @Don Reba's comment: The PCA transformation itself does not alter distances. The 2-dimensional plot often used to visualise the PCA results takes into account only two dimensions, disregards all the other dimensions, and as such this visualisation provides a distorted representation of distances.
I am currently reading up on the t-SNE visualization technique, and it was mentioned that one of the drawbacks of using PCA for visualizing high-dimensional data is that it only preserves large pairwise distances between the points, meaning points which are far apart in high dimension would also appear far apart in low dimensions, but all other point distances get screwed up. Could someone help me understand why that is, and what it means graphically? Thanks a lot!
0
1
1,245
0
33,124,530
0
1
0
0
1
false
0
2015-10-13T21:23:00.000
0
1
0
Can Orange read in IF...THEN text format rule file and use it to score another dataset?
33,112,874
0
python,orange
No, there is no function to do this. Apparently nobody ever needed it. You can do it yourself, but if you know some Python it should be easier to test a list of rules without using Orange.
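A minimal plain-Python sketch of that idea; the rule encoding and test.csv layout are hypothetical, mirroring the IF...THEN strings from rule_to_string:

    import csv

    # IF sex=['female'] AND status=['first'] THEN survived=yes
    rule = {"conditions": {"sex": "female", "status": "first"},
            "prediction": ("survived", "yes")}

    def apply_rule(row, rule):
        if all(row.get(k) == v for k, v in rule["conditions"].items()):
            return rule["prediction"]
        return None

    with open("test.csv") as f:
        for row in csv.DictReader(f):
            print(apply_rule(row, rule))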
I am wondering if Orange can read in a text format rule file and use it to score another dataset. For example, a rule.txt file was previously created in Orange through rule_to_string function and contains rules in this IF...THEN format: "IF sex=['female'] AND status=['first'] THEN survived=yes". Can Orange read in the rule.txt file and use it to score a test.csv dataset? Thank you very much for helping!
0
1
86
0
33,152,516
0
0
0
0
1
false
0
2015-10-15T14:05:00.000
2
1
0
Caching Pandas Dataframe by Serialization or In-memory KV Store
33,150,684
0.379949
python,caching,pandas,redis
I have a DF of ~1 GB of plain text data. Assuming that dumping to disk is always slower than reading, I compared HDF5 write performance with pickle: HDF5 took 35 sec while pickle took 190 sec. So, you could consider using HDF5 instead of pickle.
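Both caches are one call each in pandas (the HDF5 path needs the tables package installed; file names are arbitrary):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.rand(1000000, 5))

    df.to_pickle("cache.pkl")                   # pickle cache on disk
    df.to_hdf("cache.h5", key="df", mode="w")   # HDF5 cache via PyTables

    df1 = pd.read_pickle("cache.pkl")
    df2 = pd.read_hdf("cache.h5", "df")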
Which method of caching pandas DataFrame objcts will provide the highest performance? By storing it to a flat file on disk using pickle, or by storing it in a key-value store like Redis?
0
1
1,886
0
44,307,542
0
0
0
0
2
false
4
2015-10-16T01:47:00.000
1
2
0
Seaborn Restore marker edges
33,161,270
0.099668
python,matplotlib,seaborn
A solution to this is after importing seaborn do the following: matplotlib.rcParams['lines.markeredgewidth'] = 1
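In context, a minimal sketch:

    import matplotlib
    import matplotlib.pyplot as plt
    import seaborn   # importing is what zeroes the marker edge width

    matplotlib.rcParams['lines.markeredgewidth'] = 1   # restore visible edges

    plt.plot([1, 2, 3], [1, 4, 9], marker='s',
             markerfacecolor='none', markeredgecolor='k')
    plt.show()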
Apparently, importing seaborn sets the marker edges in a matplotlib.pyplot.plot to zero or deletes them, e.g. plt.plot(x, y, marker='s', markerfacecolor='none') results in a plot without markers. Is there a way to get the edges back? markeredgecolor='k' has no effect.
0
1
3,009
0
43,644,522
0
0
0
0
2
false
4
2015-10-16T01:47:00.000
1
2
0
Seaborn Restore marker edges
33,161,270
0.099668
python,matplotlib,seaborn
Give edgecolor='k' a try. This worked for me in a similar scatter plot.
Apparently, importing seaborn sets the marker edges in a matplotlib.pyplot.plot to zero or deletes them, e.g. plt.plot(x, y, marker='s', markerfacecolor='none') results in a plot without markers. Is there a way to get the edges back? markeredgecolor='k' has no effect.
0
1
3,009
0
33,162,435
0
0
0
0
1
false
2
2015-10-16T03:56:00.000
0
2
0
Use Python to Change csv Data Column Format
33,162,320
0
python,excel,csv,pandas
This is a time formatting problem/philosophy of Excel. For some reason, Microsoft prefers to hide seconds and sub-seconds on user displays: even MSDOS's dir command omitted seconds. If I were you, I'd use Excel's format operation and set it to display seconds, then save the spreadsheet as CSV and see if it puts anything in it to record the improved formatting. If that doesn't work, you might explore creating a macro which does the formatting, or use one of the IPC mechanisms to Excel to command it to do your bidding.
I am using python pandas to read a csv file. The csv file has a datetime column with second precision, "9/1/2015 9:25:00 AM", but if I open it in Excel, it has only minute precision, "9/1/15 9:25". Moreover, when I use the pd.read_csv() function, it only shows up to minute precision. Is there any way I could solve the problem using python? Thanks much in advance.
0
1
1,815
0
33,167,366
0
0
0
0
2
true
1
2015-10-16T08:21:00.000
2
2
0
Fast Kalman Filter
33,165,668
1.2
python,cython,kalman-filter
The size of the covariance matrix is driven by the size of your state. Another question relates to the assumptions on your model and if this can bring up significant optimizations (obviously, optimizing implies reworking the "standard KF"). From my POV, your situation roughly depends on the value (number_of_states² * number_of_iterations)/(processing_power).
I wonder if anyone can give me a pointer to a really fast/efficient Kalman filter implementation, possibly in Python (or Cython, but C/C++ could also work if it is much faster). I have a problem with many learning epochs (possibly hundreds of millions) and many inputs (cues; say, between tens and hundreds of thousands). Thus, updating a covariance matrix will be a big issue. I read a bit about the Ensemble KF, but, for now, I would really like to stick with the standard KF. [I started reading and testing it, and I would like to give it a try with my real data.]
0
1
1,111
0
33,264,437
0
0
0
0
2
false
1
2015-10-16T08:21:00.000
1
2
0
Fast Kalman Filter
33,165,668
0.099668
python,cython,kalman-filter
If you have many measurements per update, you should look at the information form of the Kalman filter. Each additional measurement is just addition. The tradeoff is a more complex predict step, and the cost of inverting the information matrix whenever you want to get your state out.
I wonder if anyone can give me a pointer to a really fast/efficient Kalman filter implementation, possibly in Python (or Cython, but C/C++ could also work if it is much faster). I have a problem with many learning epochs (possibly hundreds of millions) and many inputs (cues; say, between tens and hundreds of thousands). Thus, updating a covariance matrix will be a big issue. I read a bit about the Ensemble KF, but, for now, I would really like to stick with the standard KF. [I started reading and testing it, and I would like to give it a try with my real data.]
0
1
1,111
0
33,170,242
0
0
0
0
1
false
1
2015-10-16T10:59:00.000
1
1
0
Implementing online learning with time series
33,168,836
0.197375
python,r,machine-learning,scikit-learn
If you have to make predictions at each time stamp, then this doesn't become a time series problem (unless you plan to use the sequence of previous observations to make your next prediction, in which case you will need to train a sequence-based model). Assuming you can only train a model based on the final data you observe, there can be many approaches, but I'd recommend you use Random Forest with a large number of trees and 3 or 4 variables in each tree. That way even if some variables don't give you the desired input, other trees can still make predictions to a fair degree of accuracy. Besides this there can be many ensemble approaches. The way you're currently doing it may be a very loose approximation and practical, but it doesn't make much statistical sense.
I have a classification problem with time series data. Each example has 10 variables which are measured at irregular intervals, and in the end the object is classified into 1 of the 2 possible classes (binary classification). I have only the final class of the example to learn from during training. But when given a new example, I would like to make a prediction at each timestamp (in an online manner). So, if the new example had 25 measurements, I would like to make 25 predictions of its class, one at each timestamp. The way I am implementing this currently is by using the min, mean and max of the measurements of its 10 variables up to that point as features for classification. Is this optimal? What would be a better way?
0
1
317
0
33,175,701
0
0
0
0
1
true
0
2015-10-16T13:45:00.000
1
1
0
Speed up Python MST calculation using Delaunay Triangulation
33,172,090
1.2
python,algorithm,performance,minimum-spanning-tree,delaunay
NB: this assumes we're working in 2-d I suspect that what you are doing now is feeding all point to point distances to the MST library. There are on the order of N^2 of these distances and the asymptotic runtime of Kruskal's algorithm on such an input is N^2 * log N. Most algorithms for Delaunay triangulation take N log N time. Once the triangulation has been computed only the edges in the triangulation need to be considered (since an MST is always a subset of the triangulation). There are O(N) such edges so the runtime of Kruskal's algorithm in scipy.sparse.csgraph should be N log N. So this brings you to an asymptotic time complexity of N log N. The reason that scipy.sparse.csgraph doesn't incorporate Delaunay triangulation is that the algorithm works on arbitrary input, not only Euclidean inputs. I'm not quite sure how much this will help you in practice but that's what it looks like asymptotically.
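A sketch of the combined pipeline under that assumption (2-d points):

    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree

    pts = np.random.rand(1000, 2)
    tri = Delaunay(pts)

    # collect the O(N) unique edges of the triangulation
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))

    rows, cols, weights = [], [], []
    for a, b in edges:
        rows.append(a)
        cols.append(b)
        weights.append(np.linalg.norm(pts[a] - pts[b]))

    graph = csr_matrix((weights, (rows, cols)), shape=(len(pts), len(pts)))
    mst = minimum_spanning_tree(graph)   # MST over O(N) edges instead of N^2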
I have a code that makes Minimum Spanning Trees of many sets of points (about 25000 data sets containing 40-10000 points in each set) and this is obviously taking a while. I am using the MST algorithm from scipy.sparse.csgraph. I have been told that the MST is a subset of the Delaunay Triangulation, so it was suggested I speed up my code by finding the DT first and finding the MST from that. Does anyone know how much difference this would make? Also, if this makes it quicker, why is it not part of the algorithm in the first place? If it is quicker to calculate the DT and then the MST, then why would scipy.sparse.csgraph.minimum_spanning_tree do something else instead? Please note: I am not a computer whizz, some people may say I should be using a different language but Python is the only one I know well enough to do this sort of thing, and please use simple language in your answers, no jargon please!
0
1
659
0
33,197,300
0
0
0
0
1
true
1
2015-10-18T05:51:00.000
0
1
0
Resampling an irregular distributed 1-D signal in python
33,194,779
1.2
python,arrays,numpy,resampling
Following Warren Weckesser's comment, the answer is using scipy.interpolate.interp1d
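A short sketch of that resampling:

    import numpy as np
    from scipy.interpolate import interp1d

    # irregularly spaced x values and their heights h(x)
    x = np.array([0.0, 0.4, 1.1, 1.3, 2.9])
    h = np.array([1.0, 2.0, 0.5, 1.5, 3.0])

    f = interp1d(x, h)                      # linear interpolation by default
    x_even = np.arange(x[0], x[-1], 0.25)   # any positive spacing works
    h_even = f(x_even)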
I have an nx2 ndarray which represents a height profile of the form h(x), with x being a non-negative real number and h(x) the height value at x. The x-values are irregularly distributed, meaning: x[i] - x[i - 1] != x[i + 1] - x[i]. I would like to take my array and create a new one with evenly spaced x-values and the corresponding heights. The distance between the x-values can be any positive number. Is there an efficient way to do something like this using numpy?
0
1
142
0
33,197,079
0
0
0
0
1
true
0
2015-10-18T09:36:00.000
0
1
0
How to Calculate width of the middle 98% mass of the gray level histogram of a image
33,196,427
1.2
python-2.7,image-processing,histogram,contrast
Let the total mass of the histogram be M. Accumulate the mass in the bins, starting from index zero, until you pass 0.01 M. You get an index Q01. Decumulate the mass in the bins, starting from the maximum index, until you pass 0.99 M. You get an index Q99. These indexes are the so-called first and last percentiles. The contrast is estimated as Q99-Q01.
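As a sketch in numpy (the histogram here is a random stand-in for the combined RGB histogram):

    import numpy as np

    def contrast_width(hist):
        """Width of the middle 98% mass of a histogram."""
        cum = np.cumsum(hist) / np.sum(hist)   # normalized cumulative mass
        q01 = np.searchsorted(cum, 0.01)       # first index passing 1% of the mass
        q99 = np.searchsorted(cum, 0.99)       # first index passing 99% of the mass
        return q99 - q01

    hist = np.random.rand(256)
    print(contrast_width(hist))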
I need to calculate the contrast of a color image, so the steps that were given to me are: compute the histogram for each RGB channel separately and combine them as Histogram = histOfRedC + histOfBlueC + histOfGreenC; normalize it to unit length, as each image is of a different size; the contrast quality is equal to the width of the middle 98% mass of the histogram. I have done the first 2 steps but am unable to understand what to compute in the 3rd step. Can somebody please explain to me what it means?
0
1
140
0
33,282,334
0
0
0
0
1
true
0
2015-10-19T13:55:00.000
1
1
0
extracting the data through python script in paraview
33,216,350
1.2
python,paraview
Usually plots are made by plotting one data array versus another. You can often obtain that data directly from the filter/source that produced it and save it to a CSV file. To do this, select the filter/source in the Pipeline Browser and choose File -> Save Data. Choose the CSV File (*.csv) file type. Arrays in the filter/source output are written to different columns in the CSV file.
How do you extract data from a plot data filter in paraview through a python script? I want to get, through a python script, the data from which paraview is drawing the graph. If anyone knows the answer, please help. Thank you
0
1
628
0
33,228,345
0
0
0
0
1
false
0
2015-10-20T03:15:00.000
0
1
0
Pandas' version of numpy.resize for efficient matrix resizing
33,227,369
0
python,arrays,numpy,pandas,resize
If you want to stay within 'Pandas', I would suggest one of the following: df.unstack() which would result in shape (len(index2), maxlen * num_columns) following your notation; here columns will be stored as a MultiIndex. Alternatively, you can use df.to_panel(); Panel is a natural Pandas data structure used for 3 dimensions, as in your case. I believe that the shape should be (num_columns, len(index1), maxlen). You can then fill any nans with .fillna(0).
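A small sketch of the first suggestion:

    import numpy as np
    import pandas as pd

    idx = pd.MultiIndex.from_product([[1, 2], [10, 20, 30]],
                                     names=["first", "second"])
    df = pd.DataFrame(np.arange(12).reshape(6, 2), index=idx,
                      columns=["a", "b"])

    wide = df.unstack().fillna(0)   # shape: (len(index1), maxlen * num_columns)
    print(wide.shape)               # (2, 6)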
I have a dataframe with two indexes. (Both timestamps but thats probably not relevant). I need to get out a numpy matrix with shape (len(first_index), maxlen, num_columns). maxlen is some number (likely the max of all of the len(second_index)) or just something simple like 1000. I can do this with arr = df.as_matrix(...) and then arr.resize((len(first_index), maxlen, num_columns)). Elements in new rows should be 0 so .resize(...) works well. Is there a simpler and more efficient way to do this within the dataframe? Numpy works just fine but I need maximum efficiency because I have millions of rows.
0
1
1,511
0
42,767,296
0
0
0
0
1
false
15
2015-10-20T10:35:00.000
3
3
0
Access pixel values within a contour boundary using OpenCV in Python
33,234,363
0.197375
python,image,opencv,image-processing,opencv-contour
The answer from @rayryeng is excellent! One small thing from my implementation: np.where() returns a tuple, which contains an array of row indices and an array of column indices. So pts[0] holds the row indices, which correspond to the height of the image, and pts[1] holds the column indices, which correspond to the width of the image. img.shape returns (rows, cols, channels). So I think it should be img[pts[0], pts[1]] to slice the ndarray behind the img.
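An end-to-end sketch of the mask-and-slice idea (the input image is hypothetical, and the return signature of cv2.findContours differs between OpenCV 3 and 4; this follows the two-value form):

    import cv2
    import numpy as np

    img = cv2.imread("object.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, contours, 0, 255, -1)   # fill the first contour

    pts = np.where(mask == 255)                    # (row indices, column indices)
    intensities = gray[pts[0], pts[1]]             # pixel values inside the boundary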
I'm using OpenCV 3.0.0 on Python 2.7.9. I'm trying to track an object in a video with a still background, and estimate some of its properties. Since there can be multiple moving objects in an image, I want to be able to differentiate between them and track them individually throughout the remaining frames of the video. One way I thought I could do that was by converting the image to binary, getting the contours of the blobs (tracked object, in this case) and get the coordinates of the object boundary. Then I can go to these boundary coordinates in the grayscale image, get the pixel intensities surrounded by that boundary, and track this color gradient/pixel intensities in the other frames. This way, I could keep two objects separate from each other, so they won't be considered as new objects in the next frame. I have the contour boundary coordinates, but I don't know how to retrieve the pixel intensities within that boundary. Could someone please help me with that? Thanks!
0
1
32,084
0
33,290,142
0
0
0
0
1
true
0
2015-10-21T02:23:00.000
0
1
0
About learning curves
33,249,904
1.2
python,machine-learning,scikit-learn
If the gap between the training and cross-validation accuracy is increasing then this is an indication that your model is overfitting on the training data. With every iteration (supplying additional training data) your model is better able to capture the training data, however it is no longer able to better generalise (and thus the cross-validation accuracy converges).
I am trying to plot the learning curves for my SVC classifier with sklearn.learning_curve. From the plot, I find that both my training scores and test scores increase simultaneously. But the gap between the training curve and the cross-validation curve becomes larger with the increasing number of samples. As far as I know, the training scores should decrease when more samples are supplied. Do you guys have any sense of this problem?
0
1
959
0
33,261,884
0
0
0
0
1
false
2
2015-10-21T13:54:00.000
0
2
0
Python multiple rows to one row
33,261,261
0
python,pandas,group-by
You need to make a dictionary, where the key is the id. Each value of that is going to be another dictionary of outN to value. Read a line. You get an id, outN, and a value. Check that you have a dict for that id first, and if not, create one. Then shove the value for that outN into the dict for that id. Second step: You need to collect a list of all the outNs. Make a new set. For each value in your dict, add each of its outN keys to your set. At the end, get a list from the set, and sort it. Third step: Go through each id in your dict's keys, and then each outn in your new sorted list of outns, and print the value of that, with a fallback to zero: outnval_by_ids[id].get(outn, "0"). There's a weird case here, in that you have a lot of timestamps that you are assuming are duplicated by id. Be careful that this is really the case. Assumptions like that cause bugs.
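A compact sketch of those three steps (in pandas itself, pd.crosstab(df['ID'], df['Output']) builds the same 0/1 matrix in one call):

    rows = [("1", "out1", "1501"), ("1", "out2", "1501"), ("1", "out5", "1501"),
            ("1", "out9", "1501"), ("2", "out3", "1603"), ("2", "out4", "1603"),
            ("2", "out9", "1603")]

    outnval_by_ids = {}   # id -> {outN: "1"}
    ts_by_id = {}         # id -> timestamp (assumed constant per id)
    for id_, outn, ts in rows:
        outnval_by_ids.setdefault(id_, {})[outn] = "1"
        ts_by_id[id_] = ts

    all_outns = sorted({o for vals in outnval_by_ids.values() for o in vals})

    for id_ in sorted(outnval_by_ids):
        cells = [outnval_by_ids[id_].get(outn, "0") for outn in all_outns]
        print(id_, " ".join(cells), ts_by_id[id_])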
I have a question per below - I need to transform multiple rows per ID into one row, and let the different "output" values become columns with binary 1/0, like the example. Here is my table:

    ID  Output  Timestamp
    1   out1    1501
    1   out2    1501
    1   out5    1501
    1   out9    1501
    2   out3    1603
    2   out4    1603
    2   out9    1603

To be transformed into the following:

    ID  out1  out2  out3  out4  out5  out9  timestamp
    1   1     1     0     0     1     1     1501
    2   0     0     1     1     0     1     1603

Can someone help me do this in a flexible way in Python, preferably Pandas? I'm quite new to this, having used SAS for a good many years, so any "transition tips" are greatly appreciated. Br,
0
1
1,859
0
34,088,151
0
1
0
0
1
false
1
2015-10-22T22:50:00.000
0
1
0
Spark: How to start remotely Jupyter in 'yarn_client' mode from a different user
33,292,063
0
hadoop,apache-spark,ipython,pyspark,jupyter
I have a working deployment of CDH 5.5 + Jupyter with pyspark and native Scala Spark. In my case I am using a dedicated user to start a Jupyter server and then connecting to it from a client browser. Before sharing some thoughts about your problem, I would like to point out that if your fifth server is not closely connected to your cluster, you should avoid launching pyspark in yarn-client mode, as the communication latency would surely slow your jobs. As far as I know, yarn-cluster mode cannot be invoked remotely without spark-submit. If you still want your driver node to be executing on that fifth server, make sure that your user "ipython" has the correct permissions to access HDFS and the other Hadoop conf directories; you might need to create that user on your other Hadoop nodes. Also make sure that your yarn-conf.xml is correctly configured to reflect the address of your YARN ResourceManager.
Let's assume I've got a 4-node Hadoop cluster (Cloudera distro in my case) with a user named 'hadoop' on each node ('/home/hadoop'). Also, I've got a fifth server with Jupyter and Anaconda installed on it, with a user named 'ipython', but without a hadoop installation. Let's say I want to start Jupyter remotely from that fifth server in 'yarn_client' mode while keeping the 'ipython' user; my problem is that I get an issue from the logs which says that the user 'ipython' isn't allowed (or something like that). For info, I copy-pasted a dummy directory (to set the HADOOP_CONF_DIR environment variable) from the Hadoop cluster to that fifth server. Everything works well with the 'local[*]' setting in my 'kernel.json' file (fortunately), but the issue comes back when I change the master value to 'yarn_client' (unfortunately)... Is there a trick to solve that issue? Or maybe several different tricks?
0
1
1,327
0
33,339,510
0
0
0
0
1
true
3
2015-10-24T23:31:00.000
2
1
0
How to disable wheel_zoom in Bokeh?
33,324,475
1.2
python,bokeh
There is an open PR to improve this; it will be in the 0.11 release.
Usually I do plotting inside of IPython Notebook in pylab mode. Whenever I use Bokeh, I like to enable output_notebook() to show my plot inside of the IPython notebook. The most annoying part is that Bokeh enables wheel_zoom by default, which causes unintended zooming in the IPython notebook. I know I can avoid this by passing a comma-separated tools string of what I want to include into bokeh.plotting.figure, but with this solution I would have to list all the other tools except wheel_zoom. Is there any way to exclude wheel_zoom only? Or can I disable wheel_zoom in a global setting or something like that?
0
1
513
0
42,668,700
0
0
0
0
1
false
131
2015-10-26T13:08:00.000
1
5
0
What is the difference between size and count in pandas?
33,346,591
0.039979
python,pandas,numpy,nan,difference
When we are dealing with normal dataframes, the only difference is the inclusion of NaN values: count does not include NaN values while counting rows. But if we are using these functions with groupby, then to get correct results from count() we have to associate a numeric field with the groupby to get the exact count per group, whereas for size() there is no need for this kind of association.
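A tiny demonstration of the NaN difference:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"x": ["a", "a", "b"], "v": [1.0, np.nan, 2.0]})

    print(df.groupby("x")["v"].count())   # a -> 1, b -> 1  (NaN excluded)
    print(df.groupby("x").size())         # a -> 2, b -> 1  (all rows counted)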
What is the difference between groupby("x").count and groupby("x").size in pandas? Does size just exclude nil?
0
1
60,325
0
52,785,994
0
1
0
0
1
false
4
2015-10-26T15:55:00.000
0
2
0
Preventing PyTables (in Pandas) from printing "Closing remaining open files..."
33,350,153
0
python,pandas,pytables
You really have to close the open store manually. There is no other way. Why? PyTables uses a file registry to track open files. A destructor for this file registry is registered with Python's atexit module, which is called when the Python interpreter exits. If this destructor method is called, it will print out the names of every open file. This feature is not configurable.
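So the only reliable fix is an explicit close (or a context manager), sketched here with an assumed store.h5 containing a mydata key:

    import pandas as pd

    store = pd.HDFStore("store.h5")
    df = store["mydata"]
    store.close()             # closed before exit: no atexit message for this file

    # equivalently, let a context manager close it:
    with pd.HDFStore("store.h5") as store:
        df = store["mydata"]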
Is there a way to prevent PyTables from printing out Closing remaining open files:path/to/store.h5...done? I want to get rid of it just because it is clogging up the terminal. I'm using pandas.HDFStore if that matters.
0
1
1,063
0
37,483,626
0
0
0
0
1
false
207
2015-10-27T03:59:00.000
4
7
0
Random number between 0 and 1?
33,359,740
0.113791
python,random
random.random() does this - it returns a float in [0.0, 1.0). (random.randrange(0, 2) only ever returns the integers 0 or 1, so it can't give you 0.3452.)
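For instance:

    import random

    print(random.random())        # e.g. 0.3452..., uniform in [0.0, 1.0)
    print(random.uniform(0, 1))   # same idea, with explicit bounds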
I want a random number between 0 and 1, like 0.3452. I used random.randrange(0, 1) but it is always 0 for me. What should I do?
0
1
517,528
1
33,434,056
0
0
0
0
2
false
0
2015-10-29T15:41:00.000
0
2
0
tkinter opencv and numpy in windows with python2.7
33,418,678
0
python,windows,opencv,numpy,tkinter
Finally did it with .whl files. Download them, copy to C:\python27\Scripts, then open "cmd" and navigate to that folder with "cd\" etc. Once there run, for example:

    pip install numpy-1.10.1+mkl-cp27-none-win_amd64.whl

In IDLE I then get:

    >>> import numpy
    >>> numpy.__version__
    '1.10.1'
I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP following error messages with the first attempts with PIP and numpy first I tried winpython, which already has numpy present but this comes without tkinter, although openCV would install. I don't want to use qt. so I tried vanilla Python, which installs to Python27. Numpy won't install with PIP or EasyInstall (unless it takes over an hour -same for SciPy), and the -.exe installation route for Numpy bombs becausee its looking for Python2.7 (not Python27). openCV won't install with PIP ("no suitable version") extensive searches haven't turned up an answer as to how to get a windows Python 2.7.x environment with all three of numpy, tkinter and cv2 working. Any help would be appreciated!
0
1
170
1
33,441,221
0
0
0
0
2
false
0
2015-10-29T15:41:00.000
0
2
0
tkinter opencv and numpy in windows with python2.7
33,418,678
0
python,windows,opencv,numpy,tkinter
Small remark: WinPython does have tkinter, as it's included with the Python interpreter itself.
I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP following error messages with the first attempts with PIP and numpy first I tried winpython, which already has numpy present but this comes without tkinter, although openCV would install. I don't want to use qt. so I tried vanilla Python, which installs to Python27. Numpy won't install with PIP or EasyInstall (unless it takes over an hour -same for SciPy), and the -.exe installation route for Numpy bombs becausee its looking for Python2.7 (not Python27). openCV won't install with PIP ("no suitable version") extensive searches haven't turned up an answer as to how to get a windows Python 2.7.x environment with all three of numpy, tkinter and cv2 working. Any help would be appreciated!
0
1
170
0
33,421,040
0
0
0
0
1
false
1
2015-10-29T17:14:00.000
0
1
0
Padding python pivot tables with 0
33,420,633
0
python,python-2.7,pandas,dataframe,pivot-table
I'm going to be general here, since there was no sample code or data provided. Let's say your original dataframe is called df and has columns Date and Sales. I would try creating a list that has all dates from 01-01-2013 to 12-31-2016 (the full range you want). Let's call this list dates. I would also create an empty list called sales (i.e. sales = []). At the end of this workflow, sales should include the data from df['Sales'] AND placeholders for dates that are not within the dataframe. In your case, these placeholders will be 0. In my answer, the names of the columns in the dataframe are capitalized; names of lists start with a lower case. Next, I would iterate through dates and check to see if each date is in df['Date']. Each iteration through the list dates will be called date (i.e. date = dates[i]). If date is in df['Date'], I would append the Sales data for that date into sales. You can find the date in the dataframe through this command: df['Date']==date. So, to append the corresponding Sales data into the list, I would use this command: sales.append(df[df['Date']==date]['Sales']). If date is NOT in df['Date'], I would append a placeholder into sales (i.e. sales.append(0)). Once you iterate through all the dates in the list, I would create the final dataframe with dates and sales. The final dataframe should have both your original data and placeholders for dates that were not in the original data.
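A rough sketch of that workflow, assuming df has 'Date' (datetime) and 'Sales' columns as in the answer:

    import pandas as pd

    dates = pd.date_range('2013-01-01', '2016-12-31')   # full desired range
    sales = []
    for date in dates:
        match = df.loc[df['Date'] == date, 'Sales']
        sales.append(match.iloc[0] if len(match) else 0)  # 0 is the placeholder
    result = pd.DataFrame({'Date': dates, 'Sales': sales})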
I have a pivot table which has an index of dates ranging from 01-01-2014 to 12-31-2015. I would like the index to range from 01-01-2013 to 12-31-2016 and do not know how without modifying the underlying dataset by inserting a row in my pandas dataframe with those dates in the column I want to use as my index for the pivot table. Is there a way to accomplish this wihtout modifying the underlying dataset?
0
1
214
0
35,586,970
0
0
0
0
1
false
1
2015-10-31T09:59:00.000
0
1
0
Combining SVM Classifiers in MapReduce
33,450,285
0
python,mapreduce,scikit-learn,svm
Make sure that all of the required libraries (scikit-learn, NumPy, pandas) are installed on every node in your cluster. Your mapper will process each line of input, i.e. one training row, and emit a key that basically represents the fold for which you will be training your classifier. Your reducer will collect the lines for each fold and then run the sklearn classifier on all lines for that fold. You can then average the results from each fold, as sketched below.
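A bare-bones Hadoop-streaming-style sketch of that layout; the fold count, the tab-separated key format, and the parsing step are all assumptions:

    # mapper.py - assign each training row to a fold
    import sys

    NUM_FOLDS = 4
    for i, line in enumerate(sys.stdin):
        print('%d\t%s' % (i % NUM_FOLDS, line.rstrip('\n')))

    # reducer.py - gather one fold's rows, then fit a classifier on them
    import sys
    from collections import defaultdict

    folds = defaultdict(list)
    for line in sys.stdin:
        fold, row = line.split('\t', 1)
        folds[fold].append(row)
    # ... parse each fold's rows into X, y and fit sklearn.svm.LinearSVC ...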
I've been tasked with solving a sentiment classification problem using scikit-learn, python, and mapreduce. I need to use mapreduce to parallelize the project, thus creating multiple SVM classifiers. I am then supposed to "average" the classifiers together, but I am not sure how that works or if it is even possible. The result of the classification should be one classifier, the trained, averaged classifier. I have written the code using scikit-learn SVM Linear kernel, and it works, but now I need to bring it into a map-reduce, parallelized context, and I don't even know how to begin. Any advice?
0
1
453
0
33,458,868
0
0
0
0
1
true
5
2015-11-01T03:15:00.000
8
1
0
how to make 1 by n dataframe from series in pandas?
33,458,865
1.2
python,pandas,dataframe,series
You can do df.ix[[n]] to get a one-row dataframe of row n.
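For example (n is a hypothetical row position):

    row_df = df.ix[[n]]   # 1-row DataFrame, original columns preserved
    row_sr = df.ix[n]     # plain Series, for comparison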
I have a huge dataframe, and I index it like so: df.ix[<integer>] Depending on the index, sometimes this will have only one row of values. Pandas automatically converts this to a Series, which, quite frankly, is annoying because I can't operate on it the same way I can a df. How do I either: 1) Stop pandas from converting and keep it as a dataframe ? OR 2) easily convert the resulting series back to a dataframe ? pd.DataFrame(df.ix[<integer>]) does not work because it doesn't keep the original columns. It treats the <integer> as the column, and the columns as indices. Much appreciated.
0
1
1,239
0
33,479,441
0
0
0
0
1
true
3
2015-11-01T16:16:00.000
2
1
0
How exactly BIC in Augmented Dickey–Fuller test work in Python?
33,464,294
1.2
python,statsmodels
When we request automatic lag selection in adfuller, the function needs to compare all models up to the given maxlag lags. For this comparison we need to use the same observations for all models. Because lagged observations enter the regressor matrix, we lose observations as initial conditions corresponding to the largest lag included. As a consequence autolag uses nobs - maxlag observations for all models. For calculating the test statistic for adfuller itself, we don't need the model comparison anymore and we can use all observations available for the chosen lag, i.e. nobs - best_lag. More generally, how to treat initial conditions and different numbers of initial conditions is not always clear cut: autocorrelation and partial autocorrelation are largely based on using all available observations, full MLE for AR and ARMA models uses the stationary model to include the initial conditions, while conditional MLE or least squares drops them as necessary.
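A small illustration of those mechanics, assuming x is your series (the returned values are data-dependent):

    from statsmodels.tsa.stattools import adfuller

    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(x, maxlag=30, autolag='BIC')
    # the model comparison behind usedlag runs on nobs - maxlag observations,
    # so changing maxlag changes icbest even when usedlag stays the same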
This question is on the Augmented Dickey–Fuller test implementation in the statsmodels.tsa.stattools python library - adfuller(). In principle, AIC and BIC are supposed to compute the information criterion for a set of available models and pick the best (the one with the lowest information loss). But how do they operate in the context of the Augmented Dickey–Fuller test? The thing which I don't get: I've set maxlag=30, and BIC chose lags=5 with some information criterion. I've set maxlag=40 - BIC still chooses lags=5 but the information criterion has changed! Why in the world would the information criterion for the same number of lags differ when maxlag is changed? Sometimes this leads to a change in the choice of the model, when BIC switches from lags=5 to lags=4 when maxlag is changed from 20 to 30, which makes no sense as lag=4 was previously available.
0
1
1,081
0
33,465,756
0
1
0
0
1
false
1
2015-11-01T18:32:00.000
0
2
0
What is a good way to implement several very similar functions?
33,465,685
0
python,oop
More information needs to be given to fully understand the context. But, in a general sense, I'd do a mix of all of them: use helper functions for the "shared" parts, and use conditional statements too. Honestly, a lot of it comes down to what is easier for you to do.
I need several very similar plotting functions in python that share many arguments, but differ in some and of course also differ slightly in what they do. This is what I came up with so far: Obviously just defining them one after the other and copying the code they share is a possibility, though not a very good one, I reckon. One could also transfer the "shared" part of the code to helper functions and call these from inside the different plotting functions. This would make it tedious though, to later add features that all functions should have. And finally I've also thought of implementing one "big" function, making possibly not needed arguments optional and then deciding on what to do in the function body based on additional arguments. This, I believe, would make it difficult though, to find out what really happens in a specific case as one would face a forest of arguments. I can rule out the first option, but I'm hard pressed to decide between the second and third. So I started wondering: is there another, maybe object-oriented, way? And if not, how does one decide between option two and three? I hope this question is not too general and I guess it is not really python-specific, but since I am rather new to programming (I've never done OOP) and first thought about this now, I guess I will add the python tag. EDIT: As pointed out by many, this question is quite general and it was intended to be so, but I understand that this makes answering it rather difficult. So here's some info on the problem that caused me to ask: I need to plot simulation data, so all the plotting problems have simulation parameters in common (location of files, physical parameters,...). I also want the figure design to be the same. But depending on the quantity, some plots will be 1D, some 2D, some should contain more than one figure, sometimes I need to normalize the data or take a logarithm before plotting it. The output format might also vary. I hope this helps a bit.
0
1
444
0
33,504,368
0
0
0
0
1
false
10
2015-11-02T02:01:00.000
7
2
0
How to transform items using sklearn Pipeline?
33,469,633
1
python,machine-learning,scikit-learn
The reason why the results are different (and why calling transform even works) is that LinearSVC also has a transform (now deprecated) that does feature selection. If you want to transform using just the first step, pipeline.named_steps['tfidf'].transform([item]) is the right thing to do. If you would like to transform using all but the last step, olologin's answer provides the code. By default, all steps of the pipeline are executed, including the transform on the last step, which is the feature selection performed by the LinearSVC.
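For example, assuming the first step was registered under the name 'tfidf' when the pipeline was built:

    tfidf_vec = pipeline.named_steps['tfidf'].transform([item])  # representation before the final estimator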
I have a simple scikit-learn Pipeline of two steps: a TfIdfVectorizer followed by a LinearSVC. I have fit the pipeline using my data. All good. Now I want to transform (not predict!) an item, using my fitted pipeline. I tried pipeline.transform([item]), but it gives a different result compared to pipeline.named_steps['tfidf'].transform([item]). Even the shape and type of the result is different: the first is a 1x3000 CSR matrix, the second a 1x15000 CSC matrix. Which one is correct? Why do they differ? How do I transform items, i.e. get an item's vector representation before the final estimator, when using scikit-learn's Pipeline?
0
1
7,410
0
33,481,202
0
0
0
1
1
true
0
2015-11-02T14:14:00.000
0
1
0
Importing data from text file and saving the same in excel
33,479,646
1.2
matlab,python-2.7,csv,export-to-csv
To read a text file in MATLAB you can use fscanf or textscan; to export to Excel you can use xlswrite, which writes directly to the Excel file.
I am trying to read data from text file (which is output given by Tesseract OCR) and save the same in excel file. The problem i am facing here is the text files are in space separated format, and there are multiple files. Now i need to read all the files and save the same in excel sheet. I am using MATLAB to import and export data. I even thought of using python to convert the files into CSV format so that i can easily import the same in MATLAB and simply excelwrite the same. But no good solution. Any guidance would be of great help. thank you
0
1
212
0
34,476,701
0
0
0
0
1
false
4
2015-11-03T11:09:00.000
1
1
0
Why is wsgi looking for a library in /lib64 when the correct version is in the python distribution
33,497,639
0.197375
python-2.7,mod-wsgi
Copy all the libz.so* files to any path in your LD_LIBRARY_PATH. Long story short, I have miniconda and was stuck at the same issue. I realised that conda prefers to search for libraries in LD_LIBRARY_PATH rather than in its own libs. Hence, you need to make the missing library available in LD_LIBRARY_PATH. Adding the whole conda lib directory to LD_LIBRARY_PATH is never a good idea (it can break your whole system). As a result, copying the appropriate lib from the conda library to any folder in your LD_LIBRARY_PATH is the best solution. Note the path must show up before /lib64 in your LD_LIBRARY_PATH (i.e. export LD_LIBRARY_PATH=/your/path:$LD_LIBRARY_PATH)
I've created a flask application that I'm trying to deploy on an apache server. I've installed a conda distribution of python where I've downloaded associated modules, including flask, matplotlib and others. I'm using wsgi to launch the application. The problem I'm having is when the server runs wsgi script it fails saying that when trying to import matplotlib it can't find the correct version libz ImportError: /lib64/libz.so.1: version `ZLIB_1.2.3.4' not found (required by /mypath/miniconda/lib/python2.7/site-packages/matplotlib/../../.././libpng16.so.16) However the correct version of libz is found at /mypath/miniconda/lib/libz.* The wsgi module was built with this version of python. In addition the apache init script sets the PATH environment variable this location of python (and there are no other python 2.7 on the system). When I print the ldd path of libpng via the wsgi script it points to the python version of libz as the one it should be loading. linux-vdso.so.1 => (0x00007fff9fe00000) libz.so.1 => /mypath/miniconda/lib/python2.7/site-packages/matplotlib/../../../././libz.so.1 (0x00007fb2e4388000) libm.so.6 => /lib64/libm.so.6 (0x00007fb2e40e8000) libc.so.6 => /lib64/libc.so.6 (0x00007fb2e3d50000) /lib64/ld-linux-x86-64.so.2 (0x00000035a9e00000) so why is it trying to load from /lib64 ?? When I try load the module via the same python from a terminal, it loads fine. I understand my environment is not going to be the same as the apache environment but offhand I couldn't see any major differences. I haven't tried explicitly setting the LD_LIBRARY_PATH or WSGIPythonHome, neither which seem like they should be necessary. But that's the next avenue I'll try. Even if that works (but especially if it doesn't), I'd be curious if anyone has any ideas as to what's going on. Thanks in advance.
0
1
725
0
34,154,972
0
0
0
0
1
true
2
2015-11-05T03:30:00.000
7
2
0
Testing the Keras sentiment classification with model.predict
33,536,182
1.2
python,sentiment-analysis,lstm,keras
So what you basically need to do is as follows: Tokenize sequences: convert the string into words (features). For example: "hello my name is georgio" to ["hello", "my", "name", "is", "georgio"]. Next, you want to remove stop words (check Google for what stop words are). This stage is optional; it may lead to faulty results but I think it's worth a try. Stem your words (features); that way you'll reduce the number of features, which will lead to a faster run. Again, that's optional and might lead to some failures, for example: if you stem the word 'parking' you get 'park', which has a different meaning. Next thing is to create a dictionary (check Google for that). Each word gets a unique number and from this point we will use this number only. Computers understand numbers only, so we need to talk in their language. We'll take the dictionary from stage 4 and replace each word in our corpus with its matching number. Now we need to split our data set into two groups: training and testing sets. One (training) will train our NN model and the second (testing) will help us to figure out how good our NN is. You can use Keras' cross validation function. Next thing is defining the max number of features our NN can get as an input. Keras calls this parameter 'maxlen'. But you don't really have to do this manually; Keras can do that automatically just by searching for the longest sentence you have in your corpus. Next, let's say that Keras found out that the longest sentence in your corpus has 20 words (features) and one of your sentences is the example in the first stage, whose length is 5 (if we remove stop words it'll be shorter). In such a case we'll need to add zeros, 15 zeros actually. This is called sequence padding; we do it so every input sequence will be of the same length. A sketch of these stages with Keras' own utilities follows below.
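A minimal sketch of the dictionary/padding stages using Keras' text utilities (the corpus variable and the maxlen value are assumptions):

    from keras.preprocessing.text import Tokenizer
    from keras.preprocessing.sequence import pad_sequences

    texts = ["hello my name is georgio"]      # your corpus (hypothetical)
    tok = Tokenizer()
    tok.fit_on_texts(texts)                   # builds the word -> number dictionary
    seqs = tok.texts_to_sequences(texts)      # replaces words with their numbers
    padded = pad_sequences(seqs, maxlen=20)   # zero-pads every sequence to length 20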
I have trained the imdb_lstm.py on my PC. Now I want to test the trained network by inputting some text of my own. How do I do it? Thank you!
0
1
2,818
0
33,553,902
0
0
0
0
1
false
0
2015-11-05T18:55:00.000
0
1
0
Does Eigenface method use unsupervised trainning
33,552,557
0
python,face-recognition
Eigenfaces require supervised learning. You generally supply several images of each subject, labelling them by identifying the subject. The eigenface model then classifies later images (often real-time snapshots) as to identity.
Eigenface method is a powerful method in face recognition. It uses the training images to find the eigenfaces and then use these eigenfaces to represent a new test image. Do the images in training dataset need to be labeled, or it is unsupervised training?
0
1
116
0
33,560,748
0
0
0
0
1
true
0
2015-11-06T05:45:00.000
0
1
0
Calculating the Angle Between Vectors by using a vector as a reference point:
33,560,269
1.2
python,cosine-similarity,trigonometry
That approach will only work for 2-D vectors. In higher dimensions any two vectors define a plane, and only if the third (reference) vector also lies within this plane will your approach work. Unfortunately, instead of only calculating n angles and subtracting, in order to determine the angles between each pair of vectors you have to calculate all n choose 2 of them.
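For reference, a per-pair angle via cosine similarity looks like this (a and b are assumed non-zero):

    import numpy as np

    def angle_between(a, b):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against rounding error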
I have been trying to find a fast algorithm of calculating all the angle between n vectors that are of length x. For example if x=3 and n=4, my data would look something like this: A: [1,2,3] B: [2,3,4] C: [...] D: [...] I was wondering is it acceptable to find the the angle between all of be vectors (A,B,C,D) with respect to some fix vector (i.e. X:[100,100,100,100]) and then the subtract the angles of (A,B,C,D) found with respect to that fixed value, to find the angle between all of them. I want to do this because I would only have to compute the angle once and then I can subtract angles all of my vectors to find the different between them. In short, I want to know is it safe to make this assumption? angle_between(A,B) == angle_between(A,X) - angle_between(B,X) and the angle_between function is the Cosine similarity.
0
1
390
0
33,596,513
0
0
0
0
1
false
0
2015-11-07T21:04:00.000
0
1
0
finding a local maximum in a 3d array (array of images) in python
33,587,761
0
python,opencv,image-processing,computer-vision
I'd take advantage of the fact that dilations are efficiently implemented in OpenCV. If a point is a local maximum in 3d, then it is also a local maximum within its own 2d slice, therefore: Dilate each image in the array with a 3x3 kernel, and keep as candidate maxima the points whose intensity is unchanged. Brute-force test the candidates against their upper and lower slices.
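A sketch of the candidate step on a single 2D slice (img stands for one image from your array):

    import cv2
    import numpy as np

    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(img, kernel)   # each pixel replaced by its 3x3 neighborhood max
    candidates = (img == dilated)       # True where the pixel is a 2D local maximum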
I'm trying to implement a blob detector based on LOG, the steps are: creating an array of n levels of LOG filters use each of the filters on the input image to create a 3d array of h*w*n where h = height, w = width and n = number of levels. find a local maxima and circle the blob in the original image. I already created the filters and the 3d array (which is an array of 2d images). I used padding to make sure I don't have any problems around the borders (which includes creating a constant border for each image and create 2 extra empty images). Now I'm trying to figure out how to find the local maxima in the array. I need to compare each pixel to its 26 neighbours (8 in the same picture and the 9 pixels in each of the two adjacent scales) The brute force way of checking the pixel value directly seems ugly and not very efficient. Whats the best way to find a local maxima point in python using openCV?
0
1
1,030
0
33,729,058
0
0
0
0
1
true
1
2015-11-09T06:03:00.000
1
2
0
Create a "spotlight" in an image using Python
33,603,304
1.2
python,image-processing
I finally did it with ImageMagick, using Python to calculate the various coordinates, etc. This command will create the desired circle (radius 400, centered at (600, 600)): convert -size 1024x1024 xc:none -stroke black -fill steelblue -strokewidth 1 -draw "translate 600,600 circle 0,0 400,0" drawn.png This command will then convert it to B/W to get a rudimentary mask: convert drawn.png -alpha extract mask.png This command will blur the mask (radius 100, sigma 16): convert -channel RGBA -blur 100x16 mask.png mask2.png The above three commands give me the mask I need. This command will darken the whole image (without the mask): convert image.jpg -level 0%,130%,0.7 dark.jpg And this command will put all 3 images together (original image, darkened image, and mask): composite image.jpg dark.jpg mask2.png out.jpg
Here's what I'm trying to do: I have an image. I want to take a circular region in the image, and have it appear as normal. The rest of the image should appear darker. This way, it will be as if the circular region is "highlighted". I would much appreciate feedback on how to do it in Python. Manually, in Gimp, I would create a new layer with a color of gray (less than middle gray). I would then create a circular region on that layer, and make it middle gray. Then I would change the blending mode to soft light. Essentially, anything that is middle gray on the top layer will show up without modification, and anything darker than middle gray would show up darker. (Ideally, I'd also blur out the top layer so that the transition isn't abrupt.) How can I do this algorithmically in Python? I've considered using the Pillow library, but it doesn't have these kinds of blend modes. I also considered using the Blit library, but I couldn't import it (not sure it's maintained any more). Am open to scikit-image as well. I just need pointers on the library and some relevant functions. If there's no suitable library, I'm open to calling command line tools (e.g. imagemagick) from within the Python code. Thanks!
0
1
687
0
33,611,826
0
0
0
0
1
true
3
2015-11-09T09:27:00.000
2
1
0
Statsmodels Logistic Regression class imbalance
33,605,979
1.2
python,statistics,statsmodels
programmer's answer: statsmodels Logit and other discrete models don't have weights yet. (*) GLM Binomial has implicitly defined case weights through the number of successful and unsuccessful trials per observation. It would also allow manipulating the weights through the GLM variance function, but that is not officially supported and tested yet. Update: statsmodels Logit still does not have weights, but GLM gained var_weights and freq_weights several statsmodels releases ago. GLM Binomial can be used to estimate a Logit or a Probit model. statistician's/econometrician's answer: Inference, standard errors, confidence intervals, tests and so on are based on having a random sample. If weights are manipulated, then this should affect the inferential statistics. However, I never looked at the problem of rebalancing the data based on the observed response. In general, this creates a selection bias. A quick internet search shows several answers, from rebalancing not having a positive effect in Logit to penalized estimation as an alternative. One possibility is to also try a different link function; cloglog or other link functions have asymmetric or heavier tails that are more appropriate for data with small risk in one class or category. (*) One problem with implementing weights is to decide what their interpretation is for inference. Stata, for example, allows for 3 kinds of weights.
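A hedged sketch of the GLM route (y, X, w stand for the response, design matrix, and case weights; freq_weights availability depends on your statsmodels version):

    import statsmodels.api as sm

    model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=w)
    result = model.fit()
    print(result.summary())  # standard errors, confidence intervals, p-values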
I'd like to run a logistic regression on a dataset with 0.5% positive class by re-balancing the dataset through class or sample weights. I can do this in scikit learn, but it doesn't provide any of the inferential stats for the model (confidence intervals, p-values, residual analysis). Is this possible to do in statsmodels? I don't see a sample_weights or class_weights argument in statsmodels.discrete.discrete_model.Logit.fit Thank you!
0
1
3,765
0
33,617,441
0
0
0
0
1
false
1
2015-11-09T11:55:00.000
2
1
0
scikit learn mean shift clustering in one-dimensional array
33,608,541
0.379949
python,scikit-learn,cluster-analysis
It does not make sense to run mean-shift on one-dimensional data. Do regular kernel density estimation instead. Locate the minima, and split the data set there. Mean shift is for data that is too complex for proper KDE. One dimensional data never is.
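A rough sketch of that KDE-and-split idea (the grid resolution is arbitrary):

    import numpy as np
    from scipy.stats import gaussian_kde
    from scipy.signal import argrelmin

    values = df["INFO"].values
    kde = gaussian_kde(values)
    grid = np.linspace(values.min(), values.max(), 1000)
    density = kde(grid)
    splits = grid[argrelmin(density)[0]]  # split the data at these density minima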
How can I run a mean shift clustering on a 1D array? Here is my dataframe: >>>df INFO FREQ R2 31 0.2468213 R5 27 0.003670532 UR 25 0.00337465 I need to apply the clustering on the "INFO" column. With kmeans I solved this problem using the reshape(-1,1) command: kmeans.fit(df["INFO"].values.reshape(-1,1)), but with the mean shift clustering I get this error: meanshift.fit(df["INFO"].values.reshape(-1,1)) output: ValueError: Invalid shape in axis 1: 0.
0
1
1,483
0
33,621,420
0
0
0
0
1
false
2
2015-11-09T17:12:00.000
1
3
0
How to run multiple concurrent jobs in Spark using python multiprocessing
33,614,453
0.066568
python-2.7,apache-spark,hadoop-yarn,pyspark
How many CPUs do you have and how many are required per job? YARN will schedule the jobs and assign what it can on your cluster: if you require 8 CPUs for your job and your system has only 8 CPUs, then other jobs will be queued and run serially. If you requested 4 per job then you would see 2 jobs run in parallel at any one time.
I have set up a Spark on YARN cluster on my laptop, and have a problem running multiple concurrent jobs in Spark, using python multiprocessing. I am running in yarn-client mode. I tried two ways to achieve this: Set up a single SparkContext and create multiple processes to submit jobs. This method does not work, and the program crashes. I guess a single SparkContext does not support multiple Python processes. For each process, set up a SparkContext and submit the job. In this case, the job is submitted successfully to YARN, but the jobs are run serially; only one job runs at a time while the rest are in the queue. Is it possible to start multiple jobs concurrently? Update on the settings YARN: yarn.nodemanager.resource.cpu-vcores 8 yarn.nodemanager.resource.memory-mb 11264 yarn.scheduler.maximum-allocation-vcores 1 Spark: SPARK_EXECUTOR_CORES=1 SPARK_EXECUTOR_INSTANCES=2 SPARK_DRIVER_MEMORY=1G spark.scheduler.mode = FAIR spark.dynamicAllocation.enabled = true spark.shuffle.service.enabled = true YARN will only run one job at a time, using 3 containers, 3 vcores, 3GB RAM. So there are ample vcores and RAM available for the other jobs, but they are not running.
0
1
5,740
0
46,858,249
0
0
0
0
1
false
1
2015-11-09T17:42:00.000
1
1
0
Add markeredges in seaborn lmplot?
33,614,947
0.197375
python,seaborn,marker
As per the comment from @Sören, you can add the markeredges with the keyword scatter_kws. For example scatter_kws={'linewidths':1,'edgecolor':'k'}
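For example:

    import seaborn as sns

    tips = sns.load_dataset("tips")
    sns.lmplot(x="size", y="tip", data=tips,
               scatter_kws={'linewidths': 1, 'edgecolor': 'k'})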
sns.lmplot(x="size", y="tip", data=tips) gives a scatter plot. By default the markers have no edges. How can I add markeredges? Sometimes I prefer to use edges transparent facecolor. Especially with dense data. However, Neither markeredgewidth nor mew nor linewidths are accepted as keywords. Does anyone know how to add edges to the markers?
0
1
1,371
0
33,622,414
0
0
0
0
1
false
14
2015-11-09T19:23:00.000
1
3
0
How do I use distributed DNN training in TensorFlow?
33,616,593
0.066568
python,parallel-processing,deep-learning,tensorflow
Update: As you may have noticed, Tensorflow has supported distributed DNN training for quite some time now. Please refer to its official website for details. ========================================================================= Previous: No, it doesn't support distributed training yet, which is a little disappointing. But I don't think it is difficult to extend from a single machine to multiple machines. Compared to other open source libraries, like Caffe, TF's data graph structure is more suitable for cross-machine tasks.
Google released TensorFlow today. I have been poking around in the code, and I don't see anything in the code or API about training across a cluster of GPU servers. Does it have distributed training functionality yet?
0
1
4,046
0
44,216,923
0
0
0
0
1
false
9
2015-11-10T11:13:00.000
15
2
0
Python Opencv morphological closing gives src data type = 0 is not supported
33,628,679
1
python,opencv,mathematical-morphology
Make sure volume_start is dtype=uint8. You can convert it with volume_start = np.array(volume_start, dtype=np.uint8). Or nicer: volume_start = volume_start.astype(np.uint8)
I'm trying to morphologically close a volume with a ball structuring element created by the function SE3 = skimage.morphology.ball(8). When using closing = cv2.morphologyEx(volume_start, cv2.MORPH_CLOSE, SE) it returns TypeError: src data type = 0 is not supported Do you know how to solve this issue? Thank you
0
1
16,369
0
33,675,492
0
0
0
0
1
true
1
2015-11-11T01:53:00.000
0
1
0
listen for ctf otf changes with traits in mayavi volume rendering
33,642,997
1.2
python-2.7,enthought,mayavi,traitsui
You are going to wade into dangerous territory. As you noted, the recorder has idiosyncratic behavior: what that really means is that it uses features to programmatically "disable" the trait notifications while it is doing things. You can probably figure out a way to do it that way, but most likely you'll have to dig deeply into the code that assigns the vtk modules. What would probably make the most sense is for you to write a GUI that does exactly what you want. That is, instead of listening to something like Volume._ctf and then opening up the menu and changing the color, you can make a GUI and add a button that says "Change volume color" that, when clicked, brings the user to a color wheel. Then it's just a matter of listening to the GUI elements that you explicitly code for.
I would like to listen to changes in the transfer function in how the color and opacity (ctf/otf) of my data is represented. Listening to sensible-sounding traits such as mayavi.modules.volume.Volume._ctf does not trigger my callback. I would expect this to be changed by the user either through the "standard" mayavi pipeline display (as part of EngineRichView) or through including the Volume object's view directly. No such luck either way. It is maybe telling that when you press the big red "record" button, the recorder also does not seem to notice user changes to the ctf.
0
1
96
0
33,683,680
0
0
0
0
1
false
0
2015-11-12T17:46:00.000
0
1
0
Reverse-engineering a clustering algorithm from the clusters
33,677,932
0
python,scikit-learn,cluster-analysis,feature-selection
Are you sure it was done automatically? It sounds to me as if you should be treating this as a classification problem: construct a classifier that does the same as the human did.
I have a clustering of data performed by a human based solely on their knowledge of the system. I also have a feature vector for each element. I have no knowledge about the meaning of the features, nor do I know what the reasoning behind the human clustering was. I have complete information about which elements belong to which cluster. I can assume that the human was not stupid and there is a way to derive the clustering from the features. Is there an intelligent way to reverse-engineer the clustering? That is, how can I select the features and the clustering algorithm that will yield the same clustering most of the time (on this data set)? So far I have tried the naive approach - going through the clustering algorithms provided by the sklearn library in python and comparing the obtained clusters to the source one. This approach does not yield good results. My next approach would be to use some linear combinations of the features, or subsets of features. Here, again, my question is if there is a more intelligent way to do this than to go through as many combinations as possible. I can't shake the feeling that this is a standard problem and I'm just missing the right term to find the solution on Google.
0
1
497
0
42,054,866
0
1
0
0
1
false
1
2015-11-12T21:01:00.000
1
2
0
matplotlib can't be used from python3
33,681,281
0.099668
python,matplotlib,pip
You can install the package from your distro with: sudo apt-get install python3-matplotlib It will probably throw an error when you import matplotlib, but it is solved by installing the package tkinter with: sudo apt-get install python3-tk
I have two python compilers on my Ubuntu 14.04 VM. I have installed matplotlib as pip install matplotlib But the matplotlib cannot be used from python3.It can be used from python2.7 If I use import matplotlib.pyplot as plt inside my script test.py and run it as python3 test.py I get the error ImportError: No module named 'matplotlib' How can this be fixed.
0
1
1,128
0
33,713,612
0
1
0
0
1
false
2
2015-11-14T21:17:00.000
1
2
0
How to convert specific elements within a numpy array to integers?
33,713,472
0.099668
python,arrays,numpy
The strength of Numpy arrays is that many low-level operations can be quickly performed on the data because most (not all) types used by these arrays have a fixed size in memory. For instance, the floats you are using probably require 8 bytes each. The most important thing in that case is that all data share the same type and fit in the same amount of memory. You can play around with that a little if you really want (and need) to, but I would not suggest you start with such special cases. Try to learn the strength of these arrays when used with this requirement (but this involves accepting the fact that you can't mix integers and floats in the same array).
I've written a script that gives me the result of dividing two variables ("A" and "B") -- and the output of each variable is a numpy array with 26 elements. Usually, with any two elements from "A" and "B," the result of the operation is a float, and the the element in the output array that corresponds to that operation shows up as a float. But strangely, even if the output is supposed to be an integer (almost always 0 or 1), the integer will show up as "0." or "1." in the output array. Is there any way to turn these specific elements of the array back into integers, rather than keep them as floats? I'd like to write a simple if statement that will convert any output elements that are supposed to be integers back into integers (i.e., make "0." into "0"). But I'm having some trouble with that. Any ideas?
0
1
82
0
33,754,768
0
0
0
0
1
true
0
2015-11-17T09:24:00.000
1
1
0
Pickling/unpickling alternative (API-compatible) class implementations
33,753,224
1.2
python,c++,alias,pickle
Does it help to alias in another way (fast = normal) if there is no fast implementation available? Maybe this could be done only for the time of unpickling and then reversed, to avoid confusing checks in other code?
In a distributed computing project, we are using Pyro to pass objects over the wire between nodes; Pyro internally serializes and deserializes objects using pickle. Some classes in the project have two implementations: one pure-Python (for ease of installation, especially for Windows users), one in c++/boost::python (much faster, but requires boost + knowledge of how to compile the extension module). Both python and c++ classes support pickling (in c++, that is done via boost::python). These classes have different fully-qualified name (mupif.Octree.Octant vs. mupif.fastOctant.Octant), but the latter is aliased to the former and overwrites the pure-Python definition (mupif.Octree.Octant=mupif.fastOctant.Octant), so it is transparent to the user and the fast variant is always used if available on the node. However, pickle uses __module__ and __class__ to identify the instance, thus when the c++-based object is passed over the wire to another node which does not support it, unpickling will fail. What is a solution to this? Is it acceptable to change the classe's __module__, i.e. foo.fastOctant.Octant.__class__.__module__='mupif.Octree'? Can it have some side-effects I don't see yet?
0
1
109
0
33,757,187
0
1
0
0
1
false
0
2015-11-17T12:29:00.000
0
2
0
How can I blur or pixify images in python by using matrixes?
33,756,970
0
python,image-processing
To make it blurry, filter it using any low-pass filter (mean filter, Gaussian filter, etc.).
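For instance, with a Gaussian low-pass filter from SciPy (img being your image matrix; larger sigma means blurrier):

    from scipy.ndimage import gaussian_filter

    blurred = gaussian_filter(img, sigma=2)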
I already have a function that converts an image to a matrix, and back. But I was wondering how to manipulate the matrix so that the picture becomes blurry, or pixified?
0
1
161
0
53,183,223
0
0
0
0
2
false
640
2015-11-17T14:37:00.000
3
28
0
How to save/restore a model after training?
33,759,623
0.021425
python,tensorflow
Use tf.train.Saver to save a model. Remember, you need to specify the var_list if you want to reduce the model size. The var_list can be tf.trainable_variables or tf.global_variables.
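A minimal sketch with the TF1-style API (the checkpoint path is a placeholder):

    saver = tf.train.Saver(var_list=tf.trainable_variables())
    save_path = saver.save(sess, 'model.ckpt')  # after training
    # later, with the same graph constructed:
    saver.restore(sess, save_path)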
After you train a model in Tensorflow: How do you save the trained model? How do you later restore this saved model?
0
1
468,965
0
33,763,208
0
0
0
0
2
false
640
2015-11-17T14:37:00.000
55
28
0
How to save/restore a model after training?
33,759,623
1
python,tensorflow
There are two parts to the model: the model definition, saved by Supervisor as graph.pbtxt in the model directory, and the numerical values of tensors, saved into checkpoint files like model.ckpt-1003418. The model definition can be restored using tf.import_graph_def, and the weights are restored using Saver. However, Saver uses a special collection holding the list of variables that's attached to the model Graph, and this collection is not initialized using import_graph_def, so you can't use the two together at the moment (it's on our roadmap to fix). For now, you have to use the approach of Ryan Sepassi: manually construct a graph with identical node names, and use Saver to load the weights into it. (Alternatively you could hack it by using import_graph_def, creating variables manually, and using tf.add_to_collection(tf.GraphKeys.VARIABLES, variable) for each variable, then using Saver.)
After you train a model in Tensorflow: How do you save the trained model? How do you later restore this saved model?
0
1
468,965
0
60,106,544
0
0
0
0
1
false
69
2015-11-17T19:24:00.000
5
5
0
Remove nodes from graph or reset entire default graph
33,765,336
0.197375
python,tensorflow
Tensorflow 2.0 Compatible Answer: In Tensorflow Version >= 2.0, the Command to Reset Entire Default Graph, when run in Graph Mode is tf.compat.v1.reset_default_graph. NOTE: The default graph is a property of the current thread. This function applies only to the current thread. Calling this function while a tf.compat.v1.Session or tf.compat.v1.InteractiveSession is active will result in undefined behavior. Using any previously created tf.Operation or tf.Tensor objects after calling this function will result in undefined behavior. Raises: AssertionError: If this function is called within a nested graph.
When working with the default global graph, is it possible to remove nodes after they've been added, or alternatively to reset the default graph to empty? When working with TF interactively in IPython, I find myself having to restart the kernel repeatedly. I would like to be able to experiment with graphs more easily if possible.
0
1
75,804
0
42,991,702
0
0
0
0
1
true
2
2015-11-18T11:24:00.000
2
2
0
How to apply sklearn's EllipticEnvelope to find out top outliers in the given dataset?
33,778,802
1.2
python,scikit-learn,outliers
The right way to do this is: Divide the data into normal and outliers. Take a large sample from the normal data as normal_train for fitting the novelty detection model. Create a test set with a sample from normal that is not used in training (say normal_test) and a sample from outlier (say outlier_test), such that the distribution of the test data (normal_test + outlier_test) retains the population distribution. Predict on this test data to get the usual metrics (accuracy, sensitivity, positive predictive value, etc.). Wow. I have come a long way!
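A hedged sketch of those steps (variable names follow the steps above; the contamination value is a guess to be tuned):

    import numpy as np
    from sklearn.covariance import EllipticEnvelope

    env = EllipticEnvelope(contamination=0.005)
    env.fit(normal_train)                         # fit on normal data only
    test = np.vstack([normal_test, outlier_test])
    pred = env.predict(test)                      # +1 = inlier, -1 = outlier
    scores = env.decision_function(test)          # lower = more outlying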
I am using sklearn's EllipticEnvelope to find outliers in dataset. But I am not sure about how to model my problem? Should I just use all the data (without dividing into training and test sets) and apply fit? Also how would I obtain the outlyingness of each datapoint? Should I use predict on the same dataset?
0
1
2,901
0
68,254,814
0
0
0
0
1
false
46
2015-11-18T15:13:00.000
0
4
0
How can I visualize the weights(variables) in cnn in Tensorflow?
33,783,672
0
python,tensorflow
Using the tensorflow 2 API, there are several options: Weights are extracted using the get_weights() function: weights_n = model.layers[n].get_weights()[0]. Biases are extracted using the numpy() conversion function: bias_n = model.layers[n].bias.numpy()
After training the cnn model, I want to visualize the weight or print out the weights, what can I do? I cannot even print out the variables after training. Thank you!
0
1
53,725
0
33,846,706
0
0
0
0
1
false
7
2015-11-21T17:03:00.000
6
2
0
When should I use fftshift(fft(fftshift(x))) and when fft(x)?
33,846,123
1
python,fft
fft(fftshift(x)) rotates the input vector so that the phase of the complex FFT result is relative to the center of the original data window. If the input waveform is not exactly integer periodic in the FFT width, phase relative to the center of the original window of data may make more sense than phase relative to some averaging between the discontinuous beginning and end. fft(fftshift(x)) also has the property that the imaginary component of a result will always be positive for a positive zero crossing at the center of the window of any antisymmetric waveform component. fftshift(fft(y)) rotates the FFT results so that the DC bin is in the center of the result, halfway between -Fs/2 and Fs/2, which is a common spectrum display format.
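In NumPy terms the two variants are:

    import numpy as np

    X_centered = np.fft.fftshift(np.fft.fft(np.fft.fftshift(x)))  # phase relative to the window center
    X_display  = np.fft.fftshift(np.fft.fft(x))                   # DC bin moved to the center for display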
I am trying to implement an algorithm in python, but I am not sure when I should use fftshift(fft(fftshift(x))) and when only fft(x) (from numpy). Is there a rule of thumb based on the shape of input data? I am using fftshift instead of ifftshift due to the even number of values in the vector x.
0
1
13,324
0
33,871,559
0
1
0
0
1
true
0
2015-11-22T04:50:00.000
1
2
0
Installing numpy and pandas for python 3.5
33,851,716
1.2
python,numpy
I was corresponding with some people at python.org and they told me to use py -3.5 -m pip install SomePackage. This works.
I've been trying to install numpy and pandas for python 3.5 but it keeps telling me that I have an issue. Could it be because numpy can't run on python 3.5 yet?
0
1
4,445
0
33,853,861
0
1
0
0
2
false
30
2015-11-22T10:31:00.000
-5
7
0
How do I close all pyplot windows (including ones from previous script executions)?
33,853,801
-1
python,matplotlib,pycharm
On *nix you can use the killall command. killall app closes every instance of a window whose process name is app. You can also use the same command from inside your python script: use os.system("bashcommand") to run the bashcommand.
So I have some python code that plots a few graphs using pyplot. Every time I run the script new plot windows are created that I have to close manually. How do I close all open pyplot windows at the start of the script? I.e. closing windows that were opened during previous executions of the script? In MATLAB this can be done simply by using close all.
0
1
61,115
0
52,167,731
0
1
0
0
2
false
30
2015-11-22T10:31:00.000
0
7
0
How do I close all pyplot windows (including ones from previous script executions)?
33,853,801
0
python,matplotlib,pycharm
As there seems to be no absolutely trivial solution to do this automatically from the script itself, possibly the simplest way to close all existing figures in PyCharm is killing the corresponding processes (as jakevdp suggested in his comment): menu Run\Stop... (Ctrl-F2). You'll find the windows closed with a delay of a few seconds.
So I have some python code that plots a few graphs using pyplot. Every time I run the script new plot windows are created that I have to close manually. How do I close all open pyplot windows at the start of the script? I.e. closing windows that were opened during previous executions of the script? In MATLAB this can be done simply by using close all.
0
1
61,115
0
33,919,992
0
1
0
0
1
true
1
2015-11-25T07:03:00.000
1
1
0
datanitro - pass variables from one script to others
33,910,358
1.2
python,datanitro
There's no way to share the dataframe (each DataNitro script runs in its own process). You can read the frame each time, or if reading is slow, you can have the first script store it somewhere the other scripts can access it (e.g. as a csv or by pickling it).
Is there a way to have a variable (resulting from one script) accessible to other scripts while Excel is running? I have tried from script1 import df but it runs script1 again to produce df. I have a script that runs when I first open the workbook and it reads a dataframe and I need that dataframe for other scripts (or for other button clicks). Is there a way to store it in memory or should I read it every time I need it?
0
1
90
0
33,918,217
0
1
0
0
2
true
10
2015-11-25T13:42:00.000
5
3
0
Will casting an "integer" float to int always return the closest integer?
33,918,043
1.2
python
Casting a float to an integer truncates the value, so if you have 3.999998 and you cast it to an integer, you get 3. The way to prevent this is to round the result: int(round(3.99998)) = 4, since the round function always returns a precisely integral value.
I get a float by dividing two numbers. I know that the numbers are divisible, so I always have an integer, only it's of type float. However, I need an actual int type. I know that int() strips the decimals (i.e., floor rounding). I am concerned that since floats are not exact, if I do e.g. int(12./3) or int(round(12./3)) it may end up as 3 instead of 4 because the floating point representation of 4 could be 3.9999999593519561 (it's not, just an example). Will this ever happen and can I make sure it doesn't? (I am asking because while reshaping a numpy array, I got a warning saying that the shape must be integers, not floats.)
0
1
3,905
0
33,918,503
0
1
0
0
2
false
10
2015-11-25T13:42:00.000
1
3
0
Will casting an "integer" float to int always return the closest integer?
33,918,043
0.066568
python
I ended up using integer division (a//b) since I divided integers. Wouldn't have worked if I divided e.g. 3.5/0.5=7 though.
I get a float by dividing two numbers. I know that the numbers are divisible, so I always have an integer, only it's of type float. However, I need an actual int type. I know that int() strips the decimals (i.e., floor rounding). I am concerned that since floats are not exact, if I do e.g. int(12./3) or int(round(12./3)) it may end up as 3 instead of 4 because the floating point representation of 4 could be 3.9999999593519561 (it's not, just an example). Will this ever happen and can I make sure it doesn't? (I am asking because while reshaping a numpy array, I got a warning saying that the shape must be integers, not floats.)
0
1
3,905
0
33,926,859
0
0
0
0
1
false
3
2015-11-25T21:28:00.000
0
2
0
Converting coordinates vector to numpy 2D matrix
33,926,704
0
python,numpy,matrix,matplotlib,lidar
I am aware that I am not answering half of your questions, but this is how I would do it: create a 2D array of the desired resolution, where the "leftmost" values correspond to the smallest values of x and so forth; fill the array with the elevation value of the closest match in terms of x and y values; then smooth the result. One loop-free way to do the binning is sketched below.
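One way to get the per-cell mean elevation without explicit loops; x, y, z are the point coordinates and nx, ny the desired resolution (all assumed names):

    import numpy as np

    counts, xedges, yedges = np.histogram2d(x, y, bins=(nx, ny))
    sums, _, _ = np.histogram2d(x, y, bins=(xedges, yedges), weights=z)
    mean_z = sums / np.maximum(counts, 1)  # mean Z per cell; empty cells stay 0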
I have a set of 3D coordinate points: [lat, long, elevation] ([X, Y, Z]), derived from LIDAR data. The points are not sorted and the step size between the points is more or less random. My goal is to build a function that converts this set of points to a 2D numpy matrix with a constant number of pixels, where each (X,Y) cell holds the Z value, and then plot it as an elevation heatmap. Scales must remain realistic; X and Y should have the same step size. The matrix doesn't have to catch the exact elevation picture; it will obviously need some kind of resolution reduction in order to have a constant number of pixels. The solution I was thinking of is to build a bucket for each pixel, iterate over the points and put each in a bucket according to its (X,Y) values. At last, create a matrix where each cell holds the mean of the Z values in the corresponding bucket. Since I don't have lots of experience in this field I would love to hear some tips, and especially if there are better ways to address this task. Is there a numpy function for converting my set of points to the desired matrix? (maybe meshgrid with steps of a constant value?) If I build a very sparse matrix, where the step size is min[min{Xi,Xj}, min{Yk,Yl}] for all i,j,k,l, is there a way to "reduce" the resolution and convert it to a matrix of the required size? Thanks!
0
1
2,868
0
33,954,610
0
0
0
0
2
false
3
2015-11-27T09:41:00.000
1
2
0
Using OpenCV with Django
33,954,438
0.099668
python,django,opencv
Am I right that you dream about a Django application able to capture video from your camera? This will not work (at least not in the way you expect). Did you check any stack traces left by your web server (the one hosting the Django app or the built-in one started by Django)? I suggest you start playing with OpenCV a bit just from the Python command line. If you're on Windows use IDLE. Observe the behaviour of your calls from there. A Django application runs inside a WSGI application server where there are several constraints on what a module of a particular type can and cannot do. I didn't try to repeat what you've done (I don't have a camera I can access). Proper handling of a camera in a web application requires browser-side handling in JavaScript. Small disclaimer at the end: I'm not saying you cannot use OpenCV at all in a Django application, but an attempt to access the camera is not the way to go.
I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library. When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my browser, nothing happens: the template does not load and no traceback in the terminal and the application remains loading forever. Don't know why but the cv2 function call is not executing as expected. Since there is no traceback, I am not able to understand what is the problem. If anyone can suggest what is wrong ? Is it the right way to use OpenCV with Django ?
1
1
4,890
0
35,443,792
0
0
0
0
2
false
3
2015-11-27T09:41:00.000
2
2
0
Using OpenCV with Django
33,954,438
0.197375
python,django,opencv
Use a separate thread for the cv2 function call and the app should work like a charm. From what I figure, the infinite loading is probably because the video never ceases recording, so the code further ahead is never reached, ergo an infinite loading page. Threads should probably do it. :) :)
I want to use OpenCV in my Django application. As OpenCV is a library, I thought we can use it like any other library. When I try to import it using import cv2 in the views of Django, it works fine but when I try to make library function calls in Django view like cap = cv2.VideoCapture(0) and try to run the app on my browser, nothing happens: the template does not load and no traceback in the terminal and the application remains loading forever. Don't know why but the cv2 function call is not executing as expected. Since there is no traceback, I am not able to understand what is the problem. If anyone can suggest what is wrong ? Is it the right way to use OpenCV with Django ?
1
1
4,890
0
33,980,259
0
0
0
0
1
false
0
2015-11-28T20:01:00.000
0
1
0
Fourier series of time domain data
33,975,835
0
python,fft,dft,period
Before doing the FFT, you will need to resample or interpolate the data until you get a set of amplitude values equally spaced in time.
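A minimal sketch with linear interpolation onto an even grid (t and amp stand for the time and amplitude arrays, t assumed sorted):

    import numpy as np

    t_even = np.linspace(t.min(), t.max(), len(t))
    amp_even = np.interp(t_even, t, amp)
    spectrum = np.abs(np.fft.rfft(amp_even))
    freqs = np.fft.rfftfreq(len(amp_even), d=t_even[1] - t_even[0])
    # the dominant period is 1 / freqs[spectrum[1:].argmax() + 1] (skipping the DC bin)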
I spent a couple of days trying to solve this problem, but no luck, so I turn to you. I have a file for the photometry of a star, with time and amplitude data. I'm supposed to use this data to find period changes. I used Lomb-Scargle from the pysca library, but I have to use Fourier analysis. I tried fft (dft) from scipy and numpy but I couldn't get anything that would resemble a frequency spectrum or Fourier coefficients. I even tried to use nfft from the pynfft library because my data are not evenly sampled, but I did not get anywhere with this. So if any of you know how to get the main frequency of periodical data from Fourier analysis, please let me know.
0
1
243
0
33,997,375
0
0
0
0
1
false
0
2015-11-30T11:16:00.000
0
4
0
How do I append rows to an array in Python?
33,997,336
0
python,arrays,list,matrix,append
Use .append() on a plain Python list to add each row, e.g. path.append([x, y, direction]), and convert the list to a numpy array once the path is complete; a sketch of doing this inside a class follows below.
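A sketch of wrapping this in a class (the class and method names are made up):

    import numpy as np

    class Path(object):
        def __init__(self):
            self.points = []                 # list of [x, y, direction] rows

        def add_point(self, x, y, direction):
            self.points.append([x, y, direction])

        def as_matrix(self):
            return np.array(self.points)     # n-by-3 array, one row per point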
I have an array which is a 1X3 matrix, where: column 1 = x coordinate, column 2 = y coordinate, column 3 = direction of vector. I am tracking a series of points along a path. At each point I want to store the x, y and direction back into the array, as a row. So in the end, my array has grown vertically, with more and more rows that represent points along the path. I'm struggling to build this function inside a class. Help plz? Xx
0
1
645
0
34,017,212
0
1
0
0
1
true
0
2015-11-30T18:27:00.000
0
1
0
Can't render latex in matplotlib.pyplot in python3
34,005,438
1.2
python-3.x,matplotlib,pdflatex
I could solve the problem using rc('text', usetex=False), which apparently makes matplotlib use its internal mathtext instead of my default latex installation. Still, I cannot figure out why my OS latex installation fails.
While something like matplotlib.pyplot.xlabel(r'Wavelenghth [$\mu$m]') works in python2 I get error when I use it in python 3 TypeError: startswith first arg must be str or a tuple of str, not bytes Does anyone know what it the problem? Is it from my latex installation?!
0
1
623
0
34,321,420
0
0
0
0
1
true
0
2015-12-01T16:38:00.000
0
1
0
scoring="roc_auc" on GridSearchCV for RF and DT
34,025,404
1.2
python,scikit-learn
The answer is: it is possible. However, the feature is only available for binary classification in the setting of the stated question, as explained by @AndreasMueller.
Reading the scikit-learn documentation and looking for similar topics I couldn't figure out an answer. Can I apply GridSearchCV having scoring="roc_auc" on Random Forest or Decision Trees without any drawback? Thank you in advance for any clarification.
0
1
456
0
34,044,627
0
0
0
0
1
true
0
2015-12-01T22:13:00.000
1
1
0
Find Subplot Number from Matplotlib Pick Event
34,031,206
1.2
python,matplotlib,interactive
Place the axes in a list or dictionary when creating them. Then, when a pick event occurs, match the event's axes against the dictionary. Thank you all.
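A sketch of that pattern (three subplots, identified through a dictionary built at creation time):

    import matplotlib.pyplot as plt

    fig, axes = plt.subplots(3)
    ax_number = {ax: i for i, ax in enumerate(axes)}

    def on_pick(event):
        ax = event.mouseevent.inaxes        # axes in which the pick happened
        print("picked subplot", ax_number[ax])

    fig.canvas.mpl_connect('pick_event', on_pick)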
So I have three matplotlib subplots. I can use a pick event to pull off and re-plot the data in any one of the subplots. Is it possible to read the pick event and find out which subplot number was selected?
0
1
531
0
34,097,320
0
0
0
0
1
false
1
2015-12-04T20:39:00.000
0
3
0
Count since last occurence in NumPy
34,097,020
0
python,numpy
Split the array based on the condition and use the lengths of the remaining pieces and the condition state of the first and last element in the array.
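As an alternative vectorized route (not the splitting approach above), a forward-filled index difference also works:

    import numpy as np

    a = np.array([0, 0, 5, 0, 0, 2, 1, 0, 0])
    idx = np.arange(len(a))
    last = np.maximum.accumulate(np.where(a > 0, idx, -1))  # index of last a > 0 so far
    out = np.where(last < 0, 0, idx - last)                 # [0 0 0 1 2 0 0 1 2]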
Seemingly straightforward problem: I want to create an array that gives the count since the last occurrence of a given condition. As an example, let the condition be a > 0: in: [0, 0, 5, 0, 0, 2, 1, 0, 0] out: [0, 0, 0, 1, 2, 0, 0, 1, 2] I assume step one would be something like np.cumsum(a > 0), but not sure where to go from there. Edit: Should clarify that I want to do this without iteration.
0
1
79
0
34,097,344
0
0
0
0
3
false
274
2015-12-04T20:55:00.000
84
12
0
Convert a tensor to numpy array in Tensorflow?
34,097,281
1
python,numpy,tensorflow
To convert back from a tensor to a numpy array you can simply run .eval() on the transformed tensor (inside an active session).
How to convert a tensor into a numpy array when using Tensorflow with Python bindings?
0
1
668,313
0
65,860,219
0
0
0
0
3
false
274
2015-12-04T20:55:00.000
4
12
0
Convert a tensor to numpy array in Tensorflow?
34,097,281
0.066568
python,numpy,tensorflow
You can convert a tensor in tensorflow to a numpy array in the following ways. First: use np.array(your_tensor). Second: use your_tensor.numpy()
How to convert a tensor into a numpy array when using Tensorflow with Python bindings?
0
1
668,313
0
63,803,837
0
0
0
0
3
false
274
2015-12-04T20:55:00.000
2
12
0
Convert a tensor to numpy array in Tensorflow?
34,097,281
0.033321
python,numpy,tensorflow
If you see there is a method _numpy(), e.g. for an EagerTensor, simply call that method and you will get an ndarray.
How to convert a tensor into a numpy array when using Tensorflow with Python bindings?
0
1
668,313
0
34,116,773
0
0
0
0
1
false
2
2015-12-06T07:26:00.000
0
2
1
Running Octave tasks from Python
34,115,098
0
python,subprocess,octave,message-queue,oct2py
All three options are reasonable, depending on your particular case. Regarding "I don't want to rely on maintenance of external libraries such as oct2py, I am in favor of option 3": oct2py is implemented using option 3. You can reinvent what it already does or use it directly. oct2py is pure Python and has a permissive license: if its development were to stop tomorrow, you could include its code alongside yours.
I have a pretty complex computation code written in Octave and a python script which receives user input, and needs to run the Octave code based on the user inputs. As I see it, I have these options: Port the Octave code to python. Use external libraries (i.e. oct2py) which enable you to run the Octave/Matlab engine from python. Communicate between a python process and an octave process. One such possibility would be to use subprocess from the python code and wait for the answer. Since I'm pretty reluctant to port my code to python and I don't want to rely on maintenance of external libraries such as oct2py, I am in favor of option 3. However, since the system should scale well, I do not want to spawn a new octave process for every request, and a tasks queue system seems more reasonable. Is there any (recommended) tasks queue system to enqueue tasks in python and have an octave worker on the other end process it?
0
1
1,133
0
34,119,821
0
1
0
0
1
false
1
2015-12-06T16:25:00.000
1
2
0
how to sort list in python which has two numbers per index value?
34,119,746
0.099668
python-2.7,sorting
Try this: b = sorted(b, key = lambda i: (i[0], i[1]))
My code b=[((1,1)),((1,2)),((2,1)),((2,2)),((1,3))] for i in range(len(b)): print b[i] Obtained output: (1, 1) (1, 2) (2, 1) (2, 2) (1, 3) how do i sort this list by the first element or/and second element in each index value to get the output as: (1, 1) (1, 2) (1, 3) (2, 1) (2, 2) OR (1, 1) (2, 1) (1, 2) (2, 2) (1, 3) It would be nice if both columns are sorted as shown in the desired output, how ever if either of the output columns is sorted it will suffice.
0
1
48
0
34,122,819
0
0
0
0
1
false
1
2015-12-06T20:36:00.000
1
1
0
Multiclass NaiveBayes classification on a text dataset with changing prior probabilities
34,122,417
0.197375
python,machine-learning,nltk,naivebayes
If you know that priors change, you should refit them periodically (by gathering a new training set representative of the new priors). In general, every ML method will fail in terms of accuracy if the priors change and you do not give this information to your classifier. You need at least some kind of feedback for the classifier. For example, if you have a closed loop where you learn whether each classification was right or not, and you assume that only the priors change, you can simply learn the changing priors online (through any optimization, as it is rather easy to fit new priors). In general you should look at the concept drift phenomenon.
I've come across an issue using Naive Bayes for document classification into various classes. Actually, I was wondering whether P(C), the prior probability of classes that we have at hand initially, will keep on changing over the course of time. For instance, for classes [music, sports, news] the initial probabilities are [.25, .25, .50]. Now suppose that over time, during a certain month, we had a deluge of sports-related documents (e.g. 80% sports); then our NaiveBayes will fail, as it will be based on a prior probability factor which says only 25% are sports. How do we deal with such a situation?
0
1
211
0
34,132,511
0
0
0
0
1
false
0
2015-12-07T11:12:00.000
1
2
0
Python Pandas IDE that would "know" columns and types
34,132,184
0.099668
python,debugging,pandas,ide
I don't believe that something like that exists, but you can always use df.info().
I'm doing some development in Python, mostly using a simple text editor (Sublime Text). I'm mostly dealing in databases that I fit in Pandas DataFrames. My issue is, I often lose track of the column names, and occasionally the column types as well. Is there some IDE / plug-in / debug tool that would allow me to look into each DataFrame and see how it's defined, a little bit like Eclipse can do for Java classes? Thank you,
0
1
308