Dataset schema (one record per block below; within each record the answer text appears before its question):
Q_Id: int64, 5.14k to 60M
A_Id: int64, 5.3k to 72.5M
Title: string, length 15 to 149
Question: string, length 49 to 9.42k
Answer: string, length 18 to 5.54k
Tags: string, length 6 to 90
CreationDate: string, length 23
Score: float64, -1 to 1.2
Q_Score: int64, 0 to 1.72k
Users Score: int64, -11 to 327
ViewCount: int64, 7 to 3.27M
AnswerCount: int64, 1 to 31
Available Count: int64, 1 to 13
is_accepted: bool, 2 classes
Topic flags, each int64 0 or 1: GUI and Desktop Applications; Networking and APIs; Python Basics and Environment; Other; Database and SQL; System Administration and DevOps; Web Development; Data Science and Machine Learning (always 1 in this slice)
Q_Id 50,758,165 · A_Id 50,758,844 · created 2018-06-08T10:03:00.000 · accepted: true · answer score 1.2 · Q_Score 6 · Users Score 6 · answer count 1 · available count 1
Title: Why the following operands could not be broadcasted together?
Tags: python,python-3.x,numpy,array-broadcasting
It's to do with NumPy's broadcasting rules. Quoting the NumPy manual: "When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when they are equal, or one of them is 1." The first statement throws an error because NumPy looks at the only dimension, and (5000,) and (500,) are unequal and cannot be broadcast together. In the second statement, train.reshape(-1,1) has the shape (5000,1) and test.reshape(-1,1) has the shape (500,1). The trailing dimension (length one) is equal, so that's OK, but then NumPy checks the other dimension, and 5000 != 500, so the broadcasting fails here. In the third case, your operands are (5000,) and (500,1). In this case NumPy does allow broadcasting: the 1D array is extended along the trailing length-1 dimension of the 2D array, giving a (500,5000) result that matches dists. FWIW, the shape and broadcasting rules can be a bit tricky sometimes, and I've often been confused by similar matters.
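A minimal editorial sketch of the three cases, using smaller stand-in shapes so it runs instantly; (3, 4), (4,) and (3,) play the roles of (500, 5000), (5000,) and (500,):

```python
import numpy as np

dists = np.zeros((3, 4))   # stands in for the (500, 5000) array
train = np.arange(4.0)     # shape (4,), stands in for (5000,)
test = np.arange(3.0)      # shape (3,), stands in for (500,)

# Case 1: (4,) + (3,) -> ValueError, trailing dims 4 and 3 are unequal.
try:
    dists += train + test
except ValueError as e:
    print("case 1:", e)

# Case 2: (4, 1) + (3, 1) -> ValueError, leading dims 4 and 3 are unequal.
try:
    dists += train.reshape(-1, 1) + test.reshape(-1, 1)
except ValueError as e:
    print("case 2:", e)

# Case 3: (4,) + (3, 1) broadcasts to (3, 4), which matches dists.
dists += train + test.reshape(-1, 1)
print("case 3 result shape:", dists.shape)  # (3, 4)
```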
The arrays have the following dimensions: dists: (500,5000); train: (5000,); test: (500,). Why do the first two statements throw an error whereas the third one works fine? dists += train + test gives ValueError: operands could not be broadcast together with shapes (5000,) (500,). dists += train.reshape(-1,1) + test.reshape(-1,1) gives ValueError: operands could not be broadcast together with shapes (5000,1) (500,1). dists += train + test.reshape(-1,1) works fine! Why does this happen?
Topics: Data Science and Machine Learning · 13,229 views

Q_Id 50,758,472 · A_Id 57,155,250 · created 2018-06-08T10:20:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 1 · available count 1
Title: How to fix import error 'nvcuda.dll' in spyder for python?
Tags: python-3.x,tensorflow,spyder,dllimport
As the comment mentioned, you need to ensure that at the time you import TensorFlow, the PATH environment variable points to c:\windows\system32, and since you say you have nvcuda.dll, ensure the file is actually there too. There is no need to set any additional libraries.
1. I already have nvcuda.dll in System32. 2. I have the path C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin. 3. The program already upgraded tensorflow and GPU tensorflow. When I check the import I still have the error: ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed. Why? How do I fix the problem shown by this error message?
Topics: Python Basics and Environment · Data Science and Machine Learning · 2,669 views

Q_Id 50,760,543 · A_Id 50,764,934 · created 2018-06-08T12:20:00.000 · accepted: false · answer score 1 · Q_Score 16 · Users Score 19 · answer count 2 · available count 1
Title: Error: OOM when allocating tensor with shape
Tags: python-3.x,tensorflow,gpu,gunicorn
OOM stands for Out Of Memory. It means that your GPU has run out of space, presumably because you've allocated other tensors which are too large. You can fix this by making your model smaller or reducing your batch size. By the looks of it, you're feeding in a large image (800x1280); you may want to consider downsampling.
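For diagnosing rather than fixing it, the hint at the end of the error message can be followed like this (a sketch in the TF 1.x API the question uses; the tiny graph is only a placeholder):

```python
import tensorflow as tf  # TF 1.x API, matching the question

x = tf.placeholder(tf.float32, shape=[None, 3], name="input")
y = tf.reduce_sum(x)

# The hint from the error message: ask TensorFlow to report which tensors
# were allocated whenever an OOM happens.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}, options=run_options))
```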
I am facing an issue with my Inception model during performance testing with Apache JMeter. Error: OOM when allocating tensor with shape[800,1280,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [[Node: Cast = CastDstT=DT_FLOAT, SrcT=DT_UINT8, _device="/job:localhost/replica:0/task:0/device:GPU:0"]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Topics: Data Science and Machine Learning · 34,640 views

Q_Id 50,765,892 · A_Id 53,469,361 · created 2018-06-08T17:40:00.000 · accepted: false · answer score 1 · Q_Score 14 · Users Score 6 · answer count 1 · available count 1
Title: "Solving Environment" during `conda install -c tensorflow` takes 3+ min but changing the name a bit reduces the time significantly
Tags: python,tensorflow,anaconda,conda
I solved it by doing this: open the Anaconda Navigator application, select Environments from the menu, choose the environment you want to use (the base environment if you don't use multiple environments), update the index, then click on Channels and remove every channel except defaults. Now it takes a reasonable amount of time for me to install a new package.
I am writing a custom conda package for tensorflow. When I name the package "tensorflow" it takes it more than 3 minutes to get past the "solving environment" part but if I change the package name even a little bit, to "tensorflowp3" it loads in around 10 seconds. I am using the commands - conda install -c <my_channel> tensorflow conda install -c <my_package> tensorflowp3 I am not sure why setting a slightly different package name causes such a significant time change. I am specifying which channel the package should be loaded from in the command as well. I have tried doing the same with locally stored packages using the --use-local tag as well but it still behaves the same way as with the channel name. Any help would be very appreciated.
Topics: Python Basics and Environment · Data Science and Machine Learning · 21,062 views

Q_Id 50,776,518 · A_Id 54,032,632 · created 2018-06-09T16:51:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 1 · available count 1
Title: Rasa-core, dealing with dates
Tags: python,date,rasa-nlu,rasa-core
I think you could add validation in a custom form, where it validates the time and performs the next action based on a decision about that time. Your stories will have to be trained to handle the different action paths.
I have a problem with rasa-core. Let's suppose that I have a rasa-nlu model able to detect time, e.g. "let's start tomorrow" would get the entity time: 2018-06-10:T18:39:155Z. Now I want the next branches, or decisions, to be conditioned on whether: the time is in the past; the time is within one month from now; the time is beyond one month. I do not know how to do that. I do not know how to convert it to a slot able to influence the dialog. My only idea would be to have an action that converts the date to a categorical slot right after detecting the time, but I see two problems with that approach: first, it would already be too late, meaning that if I do it with a posterior action the rasa-core has already decided what decision to take without using the date; and second, I do not know how to save it, because if I have a stories.md that compares a detected date like in the example with the current time, maybe at the time of the example it was beyond one month but now it is in the past, so the rest of that story would be wrong. I am pretty lost and I do not know how to deal with this, thanks a lot!!!
Topics: Python Basics and Environment · Data Science and Machine Learning · 568 views

Q_Id 50,778,593 · A_Id 50,781,250 · created 2018-06-09T21:13:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 1 · available count 1
Title: tensorflow save and restore autoencoder
Tags: python,tensorflow
If you don't care about memory space, the easiest way is to save the whole graph (encoder and decoder) and, when using it for prediction, pass the last layer of the encoder as the fetch argument. TensorFlow will only calculate up to this point, so there is no computational difference compared to saving only the encoder. Otherwise you can create two graphs (one for the encoder, one for the decoder) and train them together, but that is a bit more complex.
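A sketch of the fetch-only-the-encoder idea in TF 1.x; the checkpoint path, the tensor names ("input:0", "embedding/BiasAdd:0") and the 784-wide input are assumptions for illustration and depend on how the graph was actually built:

```python
import numpy as np
import tensorflow as tf  # TF 1.x, as in the question

with tf.Session() as sess:
    # Restore the full autoencoder graph from its checkpoint (placeholder paths).
    saver = tf.train.import_meta_graph("model.ckpt.meta")
    saver.restore(sess, "model.ckpt")
    graph = tf.get_default_graph()

    # Hypothetical tensor names; inspect your own graph for the real ones.
    inputs = graph.get_tensor_by_name("input:0")
    embedding = graph.get_tensor_by_name("embedding/BiasAdd:0")

    # Fetching only the embedding runs just the encoder half of the graph.
    # With a None batch dimension on the placeholder, a single sample works.
    one_sample = np.zeros((1, 784), dtype=np.float32)
    print(sess.run(embedding, feed_dict={inputs: one_sample}))
```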
I used tf.layers.dense to build a fully connected autoencoder, and I want to save it and restore only the encoder to get the embedding output. How do I use tf.train.Saver to restore only the encoder? I ask because I want to set a different batch size in the restored model, to feed only one data point into it. I saw many tutorials but there are no tutorials about this. Is there any standard solution for this? Thank you very much.
Topics: Data Science and Machine Learning · 341 views

Q_Id 50,784,441 · A_Id 50,991,998 · created 2018-06-10T13:57:00.000 · accepted: false · answer score -0.197375 · Q_Score 1 · Users Score -2 · answer count 2 · available count 1
Title: Multi crtieria alterative ranking based on mixed data types
Tags: python,statistics,ranking,recommendation-engine,economics
I am happy to see that you are willing to use a multiple-criteria decision-making tool. You can use the Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), TOPSIS, VIKOR, etc. Please refer to the relevant papers. You can also refer to my papers. Krishnendu Mukherjee
I am building a recommender system which does multi-criteria ranking of car alternatives. I just need to rank the alternatives in a meaningful way. I have ways of asking the user questions via a form. Each car will be judged on the following criteria: price, size, electric/non-electric, distance, etc. As you can see it's a mix of various data types, including ordinal, cardinal (count) and quantitative data. My questions are as follows: 1. Which technique should I use to incorporate all the criteria into a single score which I can rank? I looked at the normalized weighted sum model, but I have a hard time assigning weights to ordinal (ranked) data. I tried using the SMARTER approach for assigning numerical weights to ordinal data but I'm not sure if it is appropriate. Please help! 2. After the best ranking method is found: what if the best-ranked alternative isn't good enough on an absolute scale? How do I check that, so I can enlarge the alternative set further? 3. Since the criteria mentioned above (price, etc.) are all in different units, is there a good method to normalize mixed data types belonging to different scales? Does it even make sense to do so, given that the data belongs to many different types? Any help on these problems will be greatly appreciated! Thank you!
Topics: Data Science and Machine Learning · 972 views

Q_Id 50,787,392 · A_Id 58,026,950 · created 2018-06-10T19:38:00.000 · accepted: false · answer score 0.379949 · Q_Score 8 · Users Score 2 · answer count 1 · available count 1
Title: mpi4py or multiprocessing in Python ?
Tags: mpi,python-multiprocessing
By using mpi4py you can divide the task across multiple processes, but with a single computer with limited performance or a limited number of cores the usability will be limited. However, you might find it handy during training. Quoting the docs: mpi4py is constructed on top of the MPI-1/2 specifications and provides an object-oriented interface which closely follows the MPI-2 C++ bindings. MPI for Python provides MPI bindings for the Python language, allowing programmers to exploit multiple-processor computing systems. MPI for Python supports convenient, pickle-based communication of generic Python objects as well as fast, near C-speed, direct array data communication of buffer-provider objects.
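For the single-machine use case described in the question, a plain multiprocessing.Pool sketch like the following is usually enough; run_experiment is a stand-in for training one configuration:

```python
from multiprocessing import Pool

def run_experiment(setting):
    # Stand-in for training the algorithm with one hyper-parameter setting.
    return setting["lr"] * 2

if __name__ == "__main__":
    settings = [{"lr": 0.1}, {"lr": 0.01}, {"lr": 0.001}]
    with Pool(processes=3) as pool:
        results = pool.map(run_experiment, settings)  # one process per setting
    print(results)
```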
I am writing a machine learning toolkit to run algorithms with different settings in parallel (each process runs the algorithm for one setting). I am thinking about whether to use mpi4py or python's built-in multiprocessing. There are a few pros and cons I am weighing. Ease of use: mpi4py seems to have more concepts to learn and a few more tricks to make it work well; multiprocessing has quite an easy and clean API. Speed: mpi4py is said to be more low-level, so I expect it can be faster than python multiprocessing; multiprocessing, compared with mpi4py, may be slower. Clean and short code: mpi4py seems to need more code; multiprocessing is preferred, with an easy-to-use API. The working context is that I am aiming at running the code basically on one computer or a GPU server, not really targeting running across different machines in the network (which only MPI can do). And since the main goal is doing machine learning, the parallelization is not really required to be optimal; the key goal is to balance an easy, clean and quick-to-maintain code base while still exploiting the benefits of parallelization. With the background described above, is it recommended that multiprocessing should just be enough? Or is there a very strong reason to use mpi4py?
Topics: Python Basics and Environment · Data Science and Machine Learning · 3,083 views

Q_Id 50,787,438 · A_Id 63,611,068 · created 2018-06-10T19:43:00.000 · accepted: false · answer score 0.379949 · Q_Score 1 · Users Score 2 · answer count 1 · available count 1
Title: How to best flatten NDJson data in Python
Tags: python,ndjson
pandas read_json has a boolean parameter lines; set it to True to read newline-delimited JSON: data_frame = pd.read_json('ndjson_file.json', lines=True)
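For a >400 MB file it may help to combine lines=True with chunksize, which yields an iterator of DataFrames instead of one giant parse; a sketch (the file name is a placeholder):

```python
import pandas as pd

# chunksize requires lines=True and returns a reader that yields DataFrames,
# so the whole file never has to be parsed in one go.
chunks = pd.read_json("ndjson_file.json", lines=True, chunksize=100_000)
df = pd.concat(chunks, ignore_index=True)

# Nested objects can then be flattened further, e.g. with pd.json_normalize.
```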
I have a huge file (>400MB) of NDJSON-formatted data and would like to flatten it into a table format for further analysis. I started iterating through the various objects manually, but some are rather deep and might even change over time, so I was hoping for a more general approach. I was certain the pandas lib would offer something but could not find anything that would help my case. Also, the several other libs I found seem to not 'fully' provide what I was hoping for (flatten_json). It all seems very early on. Is it possible that there is no good (fast and easy) solution for this at this time? Any help is appreciated.
Topics: Data Science and Machine Learning · 521 views

Q_Id 50,796,923 · A_Id 50,798,156 · created 2018-06-11T11:42:00.000 · accepted: false · answer score 0 · Q_Score 2 · Users Score 0 · answer count 1 · available count 1
Title: No module named tensorflow found in Windows 10 64bit version
Tags: python-3.x
Please explain a little more about where you're getting that error. Also, it's quite possible that the code you're using for verification involves the GPU. Hit me back on this one.
After installing Tensorflow with cpu support, I am getting some problems in the verification of Tensorflow. I don't have any GPU in my laptop and used pip3 for installation
Topics: Python Basics and Environment · Data Science and Machine Learning · 163 views

Q_Id 50,798,515 · A_Id 51,878,585 · created 2018-06-11T13:08:00.000 · accepted: false · answer score 0.379949 · Q_Score 1 · Users Score 2 · answer count 1 · available count 1
Title: How to return 'faiss' unique vector id on 'add_with_ids' trained index?
Tags: python,knn
Since you provided the actual vectors, you presumably know how to map ids to vectors yourself. Most Faiss indexes do not store the original vectors, because they need to be compressed to fit in RAM.
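A sketch of keeping that mapping yourself next to an IndexIDMap; the dimensions, ids and data here are arbitrary:

```python
import numpy as np
import faiss

d = 64
xb = np.random.rand(1000, d).astype("float32")
ids = np.arange(100_000, 101_000).astype("int64")  # custom ids, as with add_with_ids

index = faiss.IndexIDMap(faiss.IndexFlatL2(d))
index.add_with_ids(xb, ids)

# Keep your own id -> vector mapping; the index itself won't return vectors.
id_to_vec = {int(i): v for i, v in zip(ids, xb)}

D, I = index.search(xb[:1], 3)          # distances and custom ids of 3 nearest
closest_vector = id_to_vec[int(I[0][0])]  # recover the actual closest vector
```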
I'm using Facebook's faiss index with custom indexes using the add_with_ids method. In inference time I use distance, ID = model.search() which returns the custom ID it was trained with. Is it possible to return also a unique id without retraining? Or just return the actual closest vector? Thank you!
Topics: Web Development · Data Science and Machine Learning · 1,900 views

Q_Id 50,811,061 · A_Id 50,819,970 · created 2018-06-12T07:00:00.000 · accepted: false · answer score 0.132549 · Q_Score 4 · Users Score 2 · answer count 3 · available count 1
Title: Is there an Anderson-Darling implementation for python that returns p-value?
Tags: python,statistics,p-value,hypothesis-test,goodness-of-fit
I would just rank distributions by the goodness-of-fit statistic and not by p-values. We can use the Anderson-Darling, Kolmogorov-Smirnov or similar statistic just as a distance measure to rank how well different distributions fit. Background: p-values for Anderson-Darling or Kolmogorov-Smirnov depend on whether the parameters are estimated or not. In both cases the distribution is not a standard distribution. In some cases we can tabulate or use a functional approximation to tabulated values. This is the case when parameters are not estimated and the distribution is a simple location-scale family without shape parameters. For distributions that have a shape parameter, the distribution of the test statistic that we need for computing the p-values depends on the parameters. That is, we would have to compute different distributions or tabulated p-values for each set of parameters, which is impossible. The only solution to get p-values in those cases is either by bootstrap or by simulating the test statistic for the specific parameters. The technical condition is whether the test statistic is asymptotically pivotal, which means that the asymptotic distribution of the test statistic is independent of the specific parameters. Using a chi-square test on binned data requires fewer assumptions, and we can compute it even when parameters are estimated. (Strictly speaking this is only true if the parameters are estimated by MLE using the binned data.)
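A small SciPy sketch of ranking by the statistic: fit each candidate, compute a Kolmogorov-Smirnov distance against the fitted distribution, and sort; the sample data and candidate list are arbitrary:

```python
from scipy import stats

data = stats.norm.rvs(loc=10.0, scale=2.0, size=500, random_state=0)

# Rank candidate distributions by a goodness-of-fit distance, not by p-value.
candidates = [stats.norm, stats.lognorm, stats.weibull_min]
results = []
for dist in candidates:
    params = dist.fit(data)
    ks_stat, _ = stats.kstest(data, dist.name, args=params)
    results.append((dist.name, ks_stat))

# Smaller statistic = better fit. The p-values are not valid here anyway,
# since the parameters were estimated from the same data.
for name, stat in sorted(results, key=lambda t: t[1]):
    print("%s: KS statistic = %.4f" % (name, stat))
```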
I want to find the distribution that best fit some data. This would typically be some sort of measurement data, for instance force or torque. Ideally I want to run Anderson-Darling with multiple distributions and select the distribution with the highest p-value. This would be similar to the 'Goodness of fit' test in Minitab. I am having trouble finding a python implementation of Anderson-Darling that calculates the p-value. I have tried scipy's stats.anderson() but it only returns the AD-statistic and a list of critical values with the corresponding significance levels, not the p-value itself. I have also looked into statsmodels, but it seems to only support the normal distribution. I need to compare the fit of several distributions (normal, weibull, lognormal etc.). Is there an implementation of the Anderson-Darling in python that returns p-value and supports nonnormal distributions?
Topics: Data Science and Machine Learning · 2,819 views

Q_Id 50,813,889 · A_Id 58,000,976 · created 2018-06-12T09:36:00.000 · accepted: false · answer score 0 · Q_Score 1 · Users Score 0 · answer count 1 · available count 1
Title: Data extraction from HEC-RAS
Tags: python,data-modeling,data-extraction
Maybe you can use vector data: first run a classification to get the different values between areas, and afterwards do a statistical analysis.
I'm using HEC-RAS for 2D unsteady modeling of a river delta. My model is simulated for one year. I need to extract the velocities and/or discharges and compare them with the velocities from an already-completed 1DSA model. I wanted to do it in Python, but I'm new to programming, so I wanted to see if anyone has experience with this kind of problem or has any idea of the easiest way to compare the results, because there is a big amount of data and doing it manually from RAS Mapper would take a lot of time.
Topics: Data Science and Machine Learning · 319 views

Q_Id 50,822,127 · A_Id 50,822,329 · created 2018-06-12T16:53:00.000 · accepted: true · answer score 1.2 · Q_Score 1 · Users Score 0 · answer count 2 · available count 1
Title: How to perform a pickling so that it is robust against crashing?
Tags: python,pickle
You're effectively doing backups, as your goal is the same: disaster recovery, losing as little work as possible. In backups, there are these standard practices, so choose whatever fits you best. Backing up: full backup (save everything each time); incremental backup (save only what changed since the last backup); differential backup (save only what changed since the last full backup). Dealing with old backups: circular buffer/rotating copies (delete or overwrite backups older than X days/iterations, optionally changing the indices in the others' names); consolidating old incremental/differential copies into the preceding full backup (as a failsafe, consolidate into a new file and only then delete the old ones).
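Orthogonal to the backup scheme, the write itself can be made crash-safe by dumping to a temporary file and renaming it over the old one, so a crash mid-dump never leaves behind a truncated pickle; a minimal sketch (state.pickle is a placeholder name):

```python
import os
import pickle
import tempfile

def atomic_pickle_dump(obj, path):
    """Dump to a temp file in the same directory, then atomically rename.

    A crash mid-write leaves the old file intact instead of a broken pickle.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)
            f.flush()
            os.fsync(f.fileno())       # make sure the bytes hit the disk
        os.rename(tmp_path, path)      # atomic on POSIX filesystems
    except Exception:
        os.remove(tmp_path)
        raise

atomic_pickle_dump({"iteration": 42}, "state.pickle")
```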
I routinely use pickle.dump() to save large files in Python 2.7. In my code, I have one .pickle file that I continually update with each iteration of my code, overwriting the same file each time. However, I occasionally encounter crashes (e.g. from server issues). This may happen in the middle of the pickle dump, rendering the pickle incomplete and the pickle file unreadable, and I lose all my data from the past iterations. I guess one way I could do it is to save one .pickle file for each iteration, and combine all of them later. Are there any other recommended methods, or best practices in writing to disk that is robust to crashing?
Topics: Python Basics and Environment · Data Science and Machine Learning · 175 views

Q_Id 50,823,233 · A_Id 53,845,674 · created 2018-06-12T18:05:00.000 · accepted: false · answer score 0.53705 · Q_Score 5 · Users Score 3 · answer count 1 · available count 1
Title: Does keras.backend.clear_session() deletes sessions in a process or globally?
Tags: python,tensorflow,keras,multiprocessing
I faced a similar kind of issue, but I am not running models in parallel; rather, alternately, i.e. either of the models (in different folders but with the same model file names) will run. When I ran the models directly without clear_session, one conflicted with the previously loaded model and I could not switch to the other model. After including clear_session at the beginning of the statements which load the model, it was working; however, it was also deleting the global variables declared at the beginning of the program, which are necessary for the prediction activity. Lesson learnt: clear_session will not only "Destroy the current TF graph and create a new one", as mentioned in the documentation, but also deletes global variables defined in the program. So I defined the global variables just after the clear_session statement. Feedback appreciated.
I create up to 100 Keras models in a separate script and save them locally with model.save(). For training them, I use multiprocessing.Pool. In those processes I load each model separately. Because of occurring memory errors I used keras.backend.clear_session(). This seems to work, but I have also read that it deletes the weights of models. So, to come back to my question: if I import "from keras import backend as K" in each process of the pool and, at the end, after I have saved the models, I use K.clear_session(), do I clear important data of parallel running processes or just data of this process? If it deletes important data of parallel running processes, is there any possibility of creating a local TensorFlow session inside the process, assigning the needed model to that session, and then clear_session() only the local one? I'm thankful for any input. In addition it would be helpful if anyone knows the exact functionality of clear_session(). The explanation of this function is not very informative, especially for beginners like me. Thank you :)
Topics: Data Science and Machine Learning · 1,899 views

Q_Id 50,824,847 · A_Id 50,825,599 · created 2018-06-12T19:51:00.000 · accepted: true · answer score 1.2 · Q_Score 0 · Users Score 0 · answer count 1 · available count 1
Title: Re- standardise data after excluding outliers?
Tags: python,data-visualization,data-analysis
As part of the Exploratory Data Analysis(EDA) Process, you'll want to visualize your data with all data points, identify outliers and then further investigate those outliers to figure out what to do with them. Are these outliers inaccurate values that need to be corrected? Perhaps erroneous entries in the raw data? Or are they valid data points that might point to something interesting? You can also assess the distribution of your data with df.describe() If they are errors, correct them in your dataset and don't delete them. If they are accurate, valid outliers, just exclude them from the visualization to have a better picture of the rest of your data. Does this help?
I am experimenting with Python and data analytics. I collected tweets, counted the distinct users, and summed them, grouped by their locations. Then I calculated the percentage of users per country population. To make my graphs look better I standardised my data using the z-score formula. Now I observe that I have a few outliers that ruin my graphs, so I will exclude them. My question is: do I have to exclude them from the original dataset and then re-standardise my data, or is it correct to just exclude the standardised form from my analysis and proceed with the values I have already calculated?
Topics: Data Science and Machine Learning · 29 views

Q_Id 50,835,327 · A_Id 50,840,894 · created 2018-06-13T10:49:00.000 · accepted: true · answer score 1.2 · Q_Score 1 · Users Score 1 · answer count 1 · available count 1
Title: Collapsing consecutive linear layers
Tags: python,machine-learning,neural-network,convolution,convolutional-neural-network
Permute the dimensions of the first layer's kernels such that the input channels are in the "mini-batch" dimension and the output channels are in the "channels" dimension. Apply the second layer to that as if it were an image, then apply the third layer to the result of that. The final result is the kernel of the "collapsed" layer. Use "full" padding in all these operations. If that works roughly correctly (apart from padding), try fixing the padding (probably it should be "same" in the last operation).
I have a neural network with 3 consecutive linear layers (convolution), with no activation functions in between. After training the network and obtaining the weights, I would like to collapse all 3 layers into one layer. How can this be done in practice, when each layer has different kernel size and stride? The layers are as follows: Convolution layer with a 3x3 kernel, 5 input channels and 5 output channels (a tensor of size 3x3x5x5), with stride 1 and padding "same" Convolution layer with a 5x5 kernel, 5 input channels and 50 output channels (a tensor of size 5x5x5x50), with stride 2 and padding "same" Convolution layer with a 3x3 kernel, 50 input channels and 50 output channels (a tensor of size 3x3x50x50), with stride 1 and padding "same" Thanks in advance
Topics: Data Science and Machine Learning · 223 views

Q_Id 50,840,749 · A_Id 51,275,732 · created 2018-06-13T15:15:00.000 · accepted: true · answer score 1.2 · Q_Score 2 · Users Score 2 · answer count 1 · available count 1
Title: Segmentation fault (core dumped) when training more than one Keras NN models
Tags: python-3.x,tensorflow,segmentation-fault,keras,nvidia-jetson
If you run K.clear_session() on a GPU with Keras 2, you may get a segmentation fault. If you have this in your code, try removing it!
I am optimizing the hyper-parameters of my neural-network, for which I am recursively training the network using different hyper-parameters. It works as expected until after some iterations, when creating a new network for training, it dies with the error "Segmentation fault (core dumped)". Furthermore, I am using GPU for training and I am doing this on a Nvidia Jetson TX2 and Python3.5. Also, I am using Keras with TensorFlow backend.
Topics: Data Science and Machine Learning · 1,251 views

Q_Id 50,843,757 · A_Id 64,878,625 · created 2018-06-13T18:17:00.000 · accepted: false · answer score 0 · Q_Score 12 · Users Score 0 · answer count 7 · available count 4
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Tags: python-3.x,scikit-learn,anaconda
Another option is SimpleImputer; it works fine: from sklearn.impute import SimpleImputer
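A short usage sketch; note that sklearn.impute (and SimpleImputer with it) only exists in scikit-learn 0.20 and later, which is exactly why the import fails on 0.19.1:

```python
import numpy as np
from sklearn.impute import SimpleImputer  # requires scikit-learn >= 0.20

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan]])
imputer = SimpleImputer(strategy="mean")  # fill NaNs with column means
print(imputer.fit_transform(X))
# [[1.  2. ]
#  [4.  3. ]
#  [7.  2.5]]
```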
I am doing a Kaggle competition which requires imputing some missing data. I have installed latest Anaconda(4.5.4) with all relevant dependencies (i.e scikit-learn (0.19.1)). When I try to import the modules I am getting the following error: ModuleNotFoundError: No module named 'sklearn.impute' I have tried to import different sklearn modules without any problems. It seems that only sklearn.impute is missing.
Topics: Data Science and Machine Learning · 17,981 views

Q_Id 50,843,757 · A_Id 50,844,299 · created 2018-06-13T18:17:00.000 · accepted: true · answer score 1.2 · Q_Score 12 · Users Score 10 · answer count 7 · available count 4
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Tags: python-3.x,scikit-learn,anaconda
As BallpointBen pointed out, sklearn.impute is not yet released in the latest stable release (0.19.1). Currently it's supported only in 0.20.dev0.
I am doing a Kaggle competition which requires imputing some missing data. I have installed latest Anaconda(4.5.4) with all relevant dependencies (i.e scikit-learn (0.19.1)). When I try to import the modules I am getting the following error: ModuleNotFoundError: No module named 'sklearn.impute' I have tried to import different sklearn modules without any problems. It seems that only sklearn.impute is missing.
Topics: Data Science and Machine Learning · 17,981 views

Q_Id 50,843,757 · A_Id 54,895,196 · created 2018-06-13T18:17:00.000 · accepted: false · answer score 0.028564 · Q_Score 12 · Users Score 1 · answer count 7 · available count 4
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Tags: python-3.x,scikit-learn,anaconda
It's a version error. Here's a fix that worked for me while working in a Jupyter Notebook. From your terminal run conda update anaconda and then conda update scikit-learn, then restart your Jupyter kernel.
I am doing a Kaggle competition which requires imputing some missing data. I have installed latest Anaconda(4.5.4) with all relevant dependencies (i.e scikit-learn (0.19.1)). When I try to import the modules I am getting the following error: ModuleNotFoundError: No module named 'sklearn.impute' I have tried to import different sklearn modules without any problems. It seems that only sklearn.impute is missing.
Topics: Data Science and Machine Learning · 17,981 views

Q_Id 50,843,757 · A_Id 57,499,632 · created 2018-06-13T18:17:00.000 · accepted: false · answer score 0 · Q_Score 12 · Users Score 0 · answer count 7 · available count 4
Title: Getting "ModuleNotFoundError: No module named 'sklearn.impute'" despite having latest sklearn installed (0.19.1)
Tags: python-3.x,scikit-learn,anaconda
You can use from sklearn.preprocessing import Imputer instead; it works.
I am doing a Kaggle competition which requires imputing some missing data. I have installed latest Anaconda(4.5.4) with all relevant dependencies (i.e scikit-learn (0.19.1)). When I try to import the modules I am getting the following error: ModuleNotFoundError: No module named 'sklearn.impute' I have tried to import different sklearn modules without any problems. It seems that only sklearn.impute is missing.
Topics: Data Science and Machine Learning · 17,981 views

Q_Id 50,860,649 · A_Id 50,868,421 · created 2018-06-14T15:07:00.000 · accepted: false · answer score 0 · Q_Score 2 · Users Score 0 · answer count 1 · available count 1
Title: How to predict word using trained skipgram model?
Tags: python,c++,nlp,word2vec,gensim
I haven't seen any way to do this, and given the way hierarchical-softmax (HS) outputs work, there's no obviously correct way to turn the output nodes' activation levels into a precise per-word likelihood estimation. Note that: the predict_output_word() method that (sort of) simulates a negative-sampling prediction doesn't even try to handle HS mode; during training, neither HS nor negative-sampling modes make exact predictions, they just nudge the outputs to be more like the current training example would require. To the extent you could calculate all output node activations for a given context, then check each word's unique HS code-point node values for how close they are to "being predicted", you could potentially synthesize relative scores for each word: some measure of how far the values are from a "certain" output of that word. But whether and how each node's deviation should contribute to that score, and how that score might be indicative of an interpretable likelihood, is unclear. There could also be issues because of the way HS codes are assigned strictly by word frequency, so 'neighbor' words sharing mostly-the-same encoding may be very different semantically. (There were some hints in the original word2vec.c code that it could potentially be beneficial to assign HS encodings by clustering related words to have similar codings, rather than by strict frequency, but I've seen little practice of that since.) I would suggest sticking to negative sampling if interpretable predictions are important. (But also remember, word2vec isn't mainly used for predictions; it just uses the training attempts at prediction to bootstrap a vector arrangement that turns out to be useful for other tasks.)
I'm using Google's Word2vec and I'm wondering how to get the top words that are predicted by a skipgram model that is trained using hierarchical softmax, given an input word? For instance, when using negative sampling, one can simply multiply an input word's embedding (from the input matrix) with each of the vectors in the output matrix and take the one with the top value. However, in hierarchical softmax, there are multiple output vectors that correspond to each input word, due to the use of the Huffman tree. How do we compute the likelihood value/probability of an output word given an input word in this case?
Topics: Data Science and Machine Learning · 342 views

Q_Id 50,864,233 · A_Id 50,864,355 · created 2018-06-14T19:02:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 1 · available count 1
Title: Defining label in confusion matrix with highly imbalanced dataset
Tags: python,neural-network
It is a binary classification problem. Usually the classes are labeled as positive = 1 and negative = 0.
I am currently working on building a neural net model that aims to predict success/failure of a server update. However, the existing data is highly imbalanced: only 3% of the records are failures; the rest are all success records. I am now trying to do some data exploration using a confusion matrix. In this case, should I assign the 'positive (1)' label to the 'failure' class? Or does it even matter which label I assign? Thanks in advance.
Topics: Data Science and Machine Learning · 39 views

Q_Id 50,865,421 · A_Id 50,865,504 · created 2018-06-14T20:26:00.000 · accepted: true · answer score 1.2 · Q_Score 0 · Users Score 2 · answer count 1 · available count 1
Title: Lazy version of numpy.unpackbits
Tags: python,numpy,boolean,mmap,numpy-memmap
Not possible. The memory layout of a bit-packed array is incompatible with what you're looking for. The NumPy shape-and-strides model of array layout does not have sub-byte resolution. Even if you were to create a class that emulated the view you want, trying to use it with normal NumPy operations would require materializing a representation NumPy can work with, at which point you'd have to spend the memory you don't want to spend.
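The bookkeeping the question describes (work out which uint8 bytes the wanted bits live in, then unpack just those) can at least be wrapped in a small helper; a sketch that touches only the bytes containing the requested bits, so a np.memmap backing array stays lazy:

```python
import numpy as np

def get_bits(packed, idx):
    """Read bits at positions idx from a bit-packed uint8 array (or memmap).

    Only the bytes containing the requested bits are accessed, so nothing
    else is loaded from disk when `packed` is a np.memmap.
    """
    idx = np.asarray(idx)
    byte_idx = idx // 8
    bit_off = 7 - (idx % 8)              # np.packbits stores the MSB first
    return (packed[byte_idx] >> bit_off) & 1

bits = np.packbits(np.array([1, 0, 1, 1, 0, 0, 0, 1, 1, 0], dtype=np.uint8))
print(get_bits(bits, [0, 2, 7, 8]))      # [1 1 1 1]
```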
I use numpy.memmap to load only the parts of arrays into memory that I need, instead of loading an entire huge array. I would like to do the same with bool arrays. Unfortunately, bool memmap arrays aren't stored economically: according to ls, a bool memmap file requires as much space as a uint8 memmap file of the same array shape. So I use numpy.unpackbits to save space. Unfortunately, it seems not lazy: It's slow and can cause a MemoryError, so apparently it loads the array from disk into memory instead of providing a "bool view" on the uint8 array. So if I want to load only certain entries of the bool array from file, I first have to compute which uint8 entries they are part of, then apply numpy.unpackbits to that, and then again index into that. Isn't there a lazy way to get a "bool view" on the bit-packed memmap file?
Topics: Data Science and Machine Learning · 163 views

Q_Id 50,882,838 · A_Id 54,122,776 · created 2018-06-15T21:23:00.000 · accepted: false · answer score 0.291313 · Q_Score 3 · Users Score 3 · answer count 2 · available count 1
Title: Python VADER lexicon Structure for sentiment analysis
Tags: python,nltk,lexicon,vader
The vader_lexicon.txt file has four tab-delimited columns, as you said. Column 1: the token. Column 2: the mean of the human sentiment ratings. Column 3: the standard deviation of the token's ratings, assuming they follow a normal distribution. Column 4: the list of 10 human ratings taken during the experiments. The actual sentiment-calculation code does not use the 3rd and 4th columns, so if you want to update the lexicon according to your requirements you can leave the last two columns blank or fill them in with a random number and a list.
I am using the VADER sentiment lexicon in Python's nltk library to analyze text sentiment. This lexicon does not suit my domain well, and so I wanted to add my own sentiment scores to various words. So, I got my hands on the lexicon text file (vader_lexicon.txt) to do just that. However, I do not understand the architecture of this file well. For example, a word like obliterate will have the following data in the text file: obliterate -2.9 0.83066 [-3, -4, -3, -3, -3, -3, -2, -1, -4, -3] Clearly the -2.9 is the average of sentiment scores in the list. But what does the 0.83066 represent? Thanks!
Topics: Data Science and Machine Learning · 1,281 views

Q_Id 50,892,030 · A_Id 50,897,886 · created 2018-06-16T21:47:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 3 · available count 3
Title: Python faster way to take logarithm of N-dimensional array
Tags: python,arrays
The numpy log function is implemented in C and optimised for handling arrays, so although you may be able to scrape a bit of overhead off by writing your own custom log function in a lower-level language, it will still remain the bottleneck. If you want to see a big speed increase, you'll need to implement your algorithm differently. Is it really necessary to take the log of all these elements? You mention that each dimension can have 100 samples; do you plan on averaging these samples? If so, you can reduce the number of logarithms you need to compute by using the fact that log(a) + log(b) = log(ab): the average of the logs, [log(a0) + log(a1) + ... + log(aN)]/(N+1), is the same as log(a0*a1*...*aN)/(N+1).
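A quick check of that identity; the caveat is that prod() can easily under- or overflow in floating point, which limits how far the trick can be pushed:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.5, size=1_000)

mean_of_logs = np.log(a).mean()        # N logarithm evaluations
one_log = np.log(a.prod()) / a.size    # a single logarithm of the product

# Caveat: a.prod() under/overflows for long arrays or extreme values.
print(np.isclose(mean_of_logs, one_log))  # True
```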
My question is trivial, nevertheless I need your help. It's not a problem to take a np.log(x) of an array. But in my case this array could be N-dimensional/Tensor (N=2..1024 and 100 samples in each dimension). For N=4 calculation of element-wise np.log(x) takes 10 seconds. I need to take this log(x) in a cost function for optimization, thus, all process of optimization takes roughly 2 hours. So, the question is how this log(x) can be implemented in faster way for N-dimensional arrays. Is it really possible? Thanks in advance.
Topics: Data Science and Machine Learning · 101 views

Q_Id 50,892,030 · A_Id 51,058,038 · created 2018-06-16T21:47:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 3 · available count 3
Title: Python faster way to take logarithm of N-dimensional array
Tags: python,arrays
Thanks guys; the problem was the big number of entries that I had to go through for processing. I just found another cost function for my optimization. But to speed up exactly this code, I think the idea of a self-made log lookup table tailored to my type of signal could make it work.
My question is trivial, nevertheless I need your help. It's not a problem to take a np.log(x) of an array. But in my case this array could be N-dimensional/Tensor (N=2..1024 and 100 samples in each dimension). For N=4 calculation of element-wise np.log(x) takes 10 seconds. I need to take this log(x) in a cost function for optimization, thus, all process of optimization takes roughly 2 hours. So, the question is how this log(x) can be implemented in faster way for N-dimensional arrays. Is it really possible? Thanks in advance.
Topics: Data Science and Machine Learning · 101 views

Q_Id 50,892,030 · A_Id 50,892,434 · created 2018-06-16T21:47:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 3 · available count 3
Title: Python faster way to take logarithm of N-dimensional array
Tags: python,arrays
Maybe multiprocessing can help you in this situation.
My question is trivial, nevertheless I need your help. It's not a problem to take a np.log(x) of an array. But in my case this array could be N-dimensional/Tensor (N=2..1024 and 100 samples in each dimension). For N=4 calculation of element-wise np.log(x) takes 10 seconds. I need to take this log(x) in a cost function for optimization, thus, all process of optimization takes roughly 2 hours. So, the question is how this log(x) can be implemented in faster way for N-dimensional arrays. Is it really possible? Thanks in advance.
Topics: Data Science and Machine Learning · 101 views

Q_Id 50,904,849 · A_Id 50,905,356 · created 2018-06-18T07:48:00.000 · accepted: true · answer score 1.2 · Q_Score 0 · Users Score 0 · answer count 1 · available count 1
Title: Feature extraction from multiple images in python using SIFT
Tags: python-3.x,image-processing
Key-points extracted with SIFT describe numerous features. If you wish to compare all 400 frames from a video to an image that you have, you will have to run SIFT in a loop over the frames, which is computationally expensive. One way to make repeated comparisons fast is to detect the key-points of these 400 frames once and store them in a file, so that you don't have to detect them again each time you want to compare them with a test image. This is what I've understood from the question that you've asked.
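A sketch of the per-frame loop with OpenCV; the video path is a placeholder, and SIFT's location varies by build (cv2.SIFT_create in OpenCV 4.4+, cv2.xfeatures2d.SIFT_create in older contrib builds):

```python
import cv2

sift = cv2.SIFT_create()              # or cv2.xfeatures2d.SIFT_create()
cap = cv2.VideoCapture("video.mp4")   # placeholder path

all_features = []                     # one (keypoints, descriptors) per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    all_features.append((keypoints, descriptors))
cap.release()

print("extracted features from %d frames" % len(all_features))
```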
For feature extraction and detection using SIFT, I can extract features from 2 images. But I have 400 frames in a video and want to get features from all 400 images in Python. Can someone help me out with this? Thank you.
Topics: Python Basics and Environment · Data Science and Machine Learning · 520 views

Q_Id 50,914,729 · A_Id 50,916,669 · created 2018-06-18T17:32:00.000 · accepted: false · answer score 0 · Q_Score 8 · Users Score 0 · answer count 3 · available count 1
Title: Gensim Word2Vec select minor set of word vectors from pretrained model
Tags: python,keras,word2vec,gensim,word-embedding
There's no built-in feature that does exactly that, but it shouldn't require much code, and could be modeled on existing gensim code. A few possible alternative strategies: Load the full vectors, then save in an easy-to-parse format - such as via .save_word2vec_format(..., binary=False). This format is nearly self-explanatory; write your own code to drop all lines from this file that aren't on your whitelist (being sure to update the leading line declaration of entry-count). The existing source code for load_word2vec_format() & save_word2vec_format() may be instructive. You'll then have a subset file. Or, pretend you were going to train a new Word2Vec model, using your corpus-of-interest (with just the interesting words). But, only create the model and do the build_vocab() step. Now, you have untrained model, with random vectors, but just the right vocabulary. Grab the model's wv property - a KeyedVectors instance with that right vocabulary. Then separately load the oversized vector-set, and for each word in the right-sized KeyedVectors, copy over the actual vector from the larger set. Then save the right-sized subset. Or, look at the (possibly-broken-since-gensim-3.4) method on Word2Vec intersect_word2vec_format(). It more-or-less tries to do what's described in (2) above: with an in-memory model that has the vocabulary you want, merge in just the overlapping words from another word2vec-format set on disk. It'll either work, or provide the template for what you'd want to do.
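A sketch of the first strategy above under gensim 4.x conventions (file names and the whitelist are placeholders; older gensim versions test membership via kv.vocab instead of `in kv`):

```python
from gensim.models import KeyedVectors

# Load the oversized pretrained vectors (placeholder path).
kv = KeyedVectors.load_word2vec_format("big_vectors.bin", binary=True)

# Keep only whitelisted words that actually exist in the vocabulary.
whitelist = [w for w in ["cat", "dog", "house"] if w in kv]

# Write a word2vec-format text file: header line, then "word v1 v2 ... vN".
with open("small_vectors.txt", "w", encoding="utf-8") as f:
    f.write("%d %d\n" % (len(whitelist), kv.vector_size))
    for word in whitelist:
        vec = " ".join("%.6f" % x for x in kv[word])
        f.write("%s %s\n" % (word, vec))

# The subset loads like any other word2vec-format file.
small = KeyedVectors.load_word2vec_format("small_vectors.txt", binary=False)
```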
I have a large pretrained Word2Vec model in gensim, from which I want to use the pretrained word vectors for an embedding layer in my Keras model. The problem is that the embedding size is enormous and I don't need most of the word vectors (because I know which words can occur as input). So I want to get rid of them to reduce the size of my embedding layer. Is there a way to keep just the desired word vectors (including the corresponding indices!), based on a whitelist of words?
Topics: Data Science and Machine Learning · 2,182 views

Q_Id 50,920,425 · A_Id 50,926,356 · created 2018-06-19T03:56:00.000 · accepted: true · answer score 1.2 · Q_Score 1 · Users Score 3 · answer count 1 · available count 1
Title: Keras vs TensorFlow - does Keras have any actual benefits?
Tags: python,tensorflow,keras
Keras used to have the upper hand on TensorFlow in the past, but ever since its author became affiliated with Google, the features that made it attractive have been implemented in TensorFlow itself; you can check version 1.8. As you rightfully pointed out, tf.layers is one such example.
I have been implementing some deep nets in Keras, but have eventually gotten frustrated with some limitations (for example: setting floatx to float16 fails on batch normalization layers, and the only way to fix it is to actually edit the Keras source; implementing custom layers requires coding them in backend code, which destroys the ability to switch backends), there appear to be no parallel training mechanisms [unlike tf.Estimator], and even vanilla programs run 30% slower in Keras than in tf (if one is to trust the interwebs), and was grumbling about moving to tensorflow, but was pleased to discover that TensorFlow (especially if you use tf.layers stuff) is not actually any longer for anything imaginable you might want to do. Is this a failure of my imagination, or is tf.layers basically a backporting of Keras into core TensorFlow, and is there any actual use case for Keras?
Topics: Data Science and Machine Learning · 423 views

Q_Id 50,922,606 · A_Id 50,923,449 · created 2018-06-19T07:17:00.000 · accepted: true · answer score 1.2 · Q_Score 0 · Users Score 0 · answer count 1 · available count 1
Title: How to choose number of perceptron in fine-tuning FC layer?
Tags: python,tensorflow
The easiest way to adapt your network is to add another FC layer on top of the VGG (with weight kernel of size 1000x3). Alternatively, replace the last FC layer (of size 4096x1000) with an FC layer of size 4096x3. Don't forget to properly initialize your newly added layers.
I use the VGG-16 pre-trained model and fine-tune the last 3 FC layers. But in my case, I only have 3 classes in my classification. I want to ask how to choose the number of perceptrons in the FC layers. Should I visualize the Conv5_3 layer, then make a decision? BTW, the official VGG-16 model has 4096, 4096, and 1000 perceptrons in its FC layers.
Topics: Data Science and Machine Learning · 50 views

Q_Id 50,931,697 · A_Id 64,109,376 · created 2018-06-19T15:10:00.000 · accepted: false · answer score 0 · Q_Score 6 · Users Score 0 · answer count 2 · available count 2
Title: Undo a change that was performed using Pandas
Tags: python,pandas
Yes, there is a way to do this. If you're using a recent version of Python and pandas you could reverse the substitution this way: df.replace(to_replace='and', value='&', inplace=True, regex=True) (note the capital True; regex=True is needed because plain replace only matches whole cell values, not substrings). This is the way I learned it!
I would like to know if there's a technique to simply undo a change that was done using Pandas. For example, I did a string replacement on a few thousand rows of Pandas Dataframe, where, every occurrence of "&" in its string be replaced with "and". However after performing the replacement, I found out that I've made a mistake in the changes and would want to revert back to the Dataframe's most latest form before that string replacement was done. Is there a way to do this?
Topics: Data Science and Machine Learning · 4,868 views

Q_Id 50,931,697 · A_Id 67,360,151 · created 2018-06-19T15:10:00.000 · accepted: false · answer score 0 · Q_Score 6 · Users Score 0 · answer count 2 · available count 2
Title: Undo a change that was performed using Pandas
Tags: python,pandas
If your notebook cells are structured in steps, and the mess is because running a couple of cells has affected the dataset, you can stop the kernel and run all the cells again from the beginning.
I would like to know if there's a technique to simply undo a change that was done using Pandas. For example, I did a string replacement on a few thousand rows of Pandas Dataframe, where, every occurrence of "&" in its string be replaced with "and". However after performing the replacement, I found out that I've made a mistake in the changes and would want to revert back to the Dataframe's most latest form before that string replacement was done. Is there a way to do this?
Topics: Data Science and Machine Learning · 4,868 views

Q_Id 50,933,736 · A_Id 50,947,050 · created 2018-06-19T17:17:00.000 · accepted: true · answer score 1.2 · Q_Score 0 · Users Score 1 · answer count 2 · available count 1
Title: How to use trained neural network in different platform/technology?
Tags: python,c++,tensorflow,neural-network
Technically you don't need a framework at all. A conventional fully connected neural network is simple enough that you can implement it in straight C++. It's about 100 lines of code for the matrix multiplication and a dozen or so for the non-linear activation function. The biggest part is figuring out how to parse a serialized Tensorflow model, especially given that there are quite a few ways to do so. You probably will want to freeze your TensorFlow model; this inserts the weights from the latest training into the model.
Given that I trained a simple neural network using TensorFlow and Python on my laptop, and I want to use this model in a C++ app on my phone: is there any compatibility format I can use? What is the minimal framework to run neural networks (not to train them)? UPD: I'm also interested in TensorFlow-to-non-TensorFlow compatibility. Do I need to build it up from scratch, or are there any best practices?
Topics: Data Science and Machine Learning · 117 views

Q_Id 50,934,946 · A_Id 50,935,236 · created 2018-06-19T18:36:00.000 · accepted: false · answer score 0.197375 · Q_Score 0 · Users Score 1 · answer count 1 · available count 1
Title: dataframe from underlying script not updating
Tags: python,dataframe,reference
I figured it out, sorry for the confusion. I had not saved the updated RiskTemplate.py in the same folder the other reference script was looking at! Newbie!
I have a script called "RiskTemplate.py" which generates a pandas dataframe consisting of 156 columns. I created two additional columns which gives me a total count of 158 columns. However, when I run this "RiskTemplate.py" script in another script using the below code, the dataframe only pulls the original 156 columns I had before the two additional columns were added. exec(open("RiskTemplate.py").read()) how can I get the reference script to pull in the revised dataframe from the underlying script "RiskTemplate.py"? here are the lines creating the two additional dataframe columns, they work as intended when I run it directly in the "RiskTemplate.py" script. The original dataframe is pulling from SQL via df = pd.read_sql(query,connection) df['LMV % of NAV'] = df['longmv']/df['End of Month NAV']*100 df['SMV % of NAV'] = df['shortmv']/df['End of Month NAV']*100
Topics: Data Science and Machine Learning · 21 views

Q_Id 50,941,528 · A_Id 50,941,849 · created 2018-06-20T06:23:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 4 · available count 1
Title: Pandas dataframe merge by function on column names
Tags: python-3.x,pandas,dataframe,merge,concat
A simple concatenation will do, once the columns have been renamed to match: pd.concat([df_A, df_B], join='outer')[['A', 'B']].copy(), or pd.concat([df_A, df_B], join='inner').
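A sketch of that rename-then-concat idea; the unify helper (keep only the part before the first underscore) is an assumption about the intended mapping, generalizing the lambda from the question:

```python
import pandas as pd

df_A = pd.DataFrame({"A__a": [1, 2], "B__b": [3, 4], "C": [5, 6]})
df_B = pd.DataFrame({"A_a": [7, 8], "B_b": [9, 10], "D": [11, 12]})

# Assumed mapping: A__a and A_a both become A, B__b and B_b both become B.
unify = lambda col: col.split("_")[0]

frames = [df.rename(columns=unify) for df in (df_A, df_B)]
merged = pd.concat(frames, join="inner", ignore_index=True)
print(merged)  # only the shared columns A and B remain
```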
Say I have two dataframes. df_A has columns A__a, B__b, C (shape 5,3). df_B has columns A_a, B_b, D (shape 4,3). How can I unify them (without having to iterate over all columns) to get one df with columns A, B (shape 9,2)? Meaning A__a and A_a should be unified into the same column. I need to use merge while applying the function lambda x: x.replace("_",""). Is that possible?
Topics: Data Science and Machine Learning · 200 views

Q_Id 50,955,454 · A_Id 50,955,970 · created 2018-06-20T19:12:00.000 · accepted: true · answer score 1.2 · Q_Score 0 · Users Score 0 · answer count 2 · available count 1
Title: ImportError: libnvidia-fatbinaryloader.so.396.24.02: cannot open shared object file: No such file or directory
Tags: python,tensorflow
Running export LD_LIBRARY_PATH=/usr/lib/nvidia-396 fixed it; now I have another error.
I just updated my nvidia GPU driver and got this error when I import tensorflow like this: import tensorflow as tf. Config: Ubuntu 16.04; NVIDIA Corporation GM204M [GeForce GTX 970M]; 16GB RAM; i7 6700HQ; Python 3.5.2; GCC 5.4.0; Cuda 9.0.176; Tensorflow 1.8; CudNN 7. This error had no result on Google ... Maybe I should downgrade a version, like my GPU driver, or update CudNN? Thanks for any help
Topics: Data Science and Machine Learning · 798 views

Q_Id 50,964,401 · A_Id 50,964,829 · created 2018-06-21T08:56:00.000 · accepted: true · answer score 1.2 · Q_Score 2 · Users Score 1 · answer count 1 · available count 1
Title: Image classification: Best approach to training the model
Tags: python,machine-learning,classification,conv-neural-network
The strategy you choose depends mainly on the structure of the CNN you are going to create. If you train a model that can only recognize whether an image contains a spoon or a fork, you will not be able to test on a table with several table-cloth items (e.g. both a fork and a spoon), because the network will try to answer whether the image contains a spoon or a fork. That said, it is still possible to train the network to classify several features (strategy "A"), but in that case you need a model that can do multi-label classification. Finally, I would suggest going for the "B" strategy because, in my humble opinion, it fits the application domain well. Hope this answer is clear and helpful! Cheers.
Given a model that has to classify 10 table-cloth items (spoons, forks, cups, plate etc,) and must be tested on an image of a table with all the table-cloth items in it (test_model_accuracy,) which is the best approach for training: A: Train the model on individual items then test on test_model_accuracy B: Train the model on an entire table with bounding boxes then test on test_model_accuracy C: Start with A, then B or vice-versa, then test on test_model_accuracy
Topics: Data Science and Machine Learning · 140 views

Q_Id 50,965,004 · A_Id 62,225,211 · created 2018-06-21T09:24:00.000 · accepted: false · answer score 1 · Q_Score 14 · Users Score 11 · answer count 2 · available count 1
Title: Sklearn custom transformers: difference between using FunctionTransformer and subclassing TransformerMixin
Tags: python,machine-learning,scikit-learn,cross-validation
The key difference between FunctionTransformer and a subclass of TransformerMixin is that with the latter, you have the possibility that your custom transformer can learn by applying the fit method. E.g. the StandardScaler learns the means and standard deviations of the columns during the fit method, and in the transform method these attributes are used for transformation. This cannot be achieved by a simple FunctionTransformer, at least not in a canonical way as you have to pass the train set somehow. This possibility to learn is in fact the reason to use custom transformers and pipelines - if you just apply an ordinary function by the usage of a FunctionTransformer, nothing is gained in the cross validation process. It makes no difference whether you transform before the cross validation once or in each step of the cross validation (except that the latter will take more time).
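A toy illustration of the distinction: this transformer learns state in fit and reuses it in transform, which a stateless FunctionTransformer cannot do (MeanCenterer is an invented example for this sketch, not a scikit-learn class):

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MeanCenterer(BaseEstimator, TransformerMixin):
    """Learns column means during fit and applies them during transform."""

    def fit(self, X, y=None):
        self.means_ = np.asarray(X).mean(axis=0)   # state learned on train set
        return self

    def transform(self, X):
        return np.asarray(X) - self.means_         # reuses what fit learned

X_train = np.array([[1.0, 10.0], [3.0, 30.0]])
X_test = np.array([[2.0, 20.0]])

centerer = MeanCenterer().fit(X_train)
print(centerer.transform(X_test))  # centered with the *training* means: [[0. 0.]]
```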
In order to do proper CV it is advisable to use pipelines so that same transformations can be applied to each fold in the CV. I can define custom transformations by using either sklearn.preprocessing.FunctionTrasformer or by subclassing sklearn.base.TransformerMixin. Which one is the recommended approach? Why?
Topics: Data Science and Machine Learning · 10,493 views

Q_Id 50,966,204 · A_Id 50,966,711 · created 2018-06-21T10:25:00.000 · accepted: true · answer score 1.2 · Q_Score 11 · Users Score 12 · answer count 3 · available count 1
Title: convert images from [-1; 1] to [0; 255]
Tags: python,numpy,opencv
As you have found, img * 255 gives you a resulting range of [-255:255], and (img + 1) * 255 gives you a result of [0:510]. You're on the right track. What you need is either: int((img + 1) * 255 / 2) or round((img + 1) * 255 / 2). This shifts the input from [-1:1] to [0:2] then multiplies by 127.5 to get [0.0:255.0]. Using int() will actually result in [0:254]
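The same arithmetic in NumPy, with a clip added as cheap insurance against inputs slightly outside [-1, 1]:

```python
import numpy as np

img = np.array([[-1.0, -0.5], [0.0, 1.0]], dtype=np.float32)

# Shift [-1, 1] to [0, 2], scale by 127.5, round, clip, then cast for saving.
out = np.clip(np.rint((img + 1.0) * 127.5), 0, 255).astype(np.uint8)
print(out)  # [[  0  64]
            #  [128 255]]
```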
I know that question is really simple, but I didn't find how to bypass the issue: I'm processing images, the output pixels are float32, and values are in range [-1; 1]. The thing is, when saving using openCV, all negative data and float values are lost (I only get images with 0 or 1 values) So I need to convert those images to [0; 255] (Int8) I've tried img * 255, but doing this does not help with negative values. (img + 1) * 255, I'm removing the negative values, but I'm creating an overflow Is there a (clean) way to do it ? I'm using Python35, OpenCV2 and Numpy, but I think it's more a math problem than a library thing
Topics: Data Science and Machine Learning · 30,202 views

Q_Id 50,973,687 · A_Id 50,974,355 · created 2018-06-21T16:43:00.000 · accepted: false · answer score 0 · Q_Score 0 · Users Score 0 · answer count 2 · available count 1
Title: How to get two or more maximum indexes values set to 1 from tf.softmax's output
Tags: python,tensorflow,softmax
tf.nn.softmax forces everything to add up to 1.0 to make a valid probability distribution. If you want multiple values in the vector to be ones then you should use tf.nn.sigmoid instead. If you want to retrieve the maximum numbers in the vector use tf.nn.top_k.
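A sketch combining those two suggestions in TF 1.x style, using the example vector from the question: take the top-k indices, then scatter them into a multi-hot vector:

```python
import tensorflow as tf  # TF 1.x style, matching the question

probs = tf.constant([0.1, 0.4, 0.2, 0.1, 0.8])
_, top_idx = tf.nn.top_k(probs, k=2)                      # indices of 2 largest
multi_hot = tf.reduce_sum(tf.one_hot(top_idx, depth=5), axis=0)

with tf.Session() as sess:
    print(sess.run(multi_hot))  # [0. 1. 0. 0. 1.]
```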
I want to get the maximum (2 or more) indexes set to 1 from the output of tf.nn.softmax(). given tf.nn.softmax's outputs as [0.1, 0.4, 0.2, 0.1, 0.8] I want to get something like [0,1,0,0,1] since those indexes have the maximum numbers (in this case I chose just the maximum 2). Thank you in advance!
Topics: Data Science and Machine Learning · 445 views

Q_Id 50,975,047 · A_Id 50,975,222 · created 2018-06-21T18:06:00.000 · accepted: true · answer score 1.2 · Q_Score 0 · Users Score 1 · answer count 2 · available count 1
Title: matplotlib colormap without normalization
Tags: python,matplotlib
Create a list of colors, say colors = ['blue', 'red', 'green', 'purple'], that has as many colors as you have different targets. Then, set c=colors[target] with target being the integer your model popped out. This means you will need to plot each point one at a time unless you sort all the targets and plot at the end.
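A sketch of the color-list idea; note that scatter also accepts a whole sequence of color names for c, so the points do not have to be plotted one at a time:

```python
import matplotlib.pyplot as plt
import numpy as np

colors = ["blue", "red", "green", "purple"]  # index = integer target

x = np.random.rand(20)
y = np.random.rand(20)
targets = np.random.randint(0, 4, size=20)

# Indexing into a fixed list means target 2 is always green,
# regardless of which targets happen to be present; no normalization occurs.
plt.scatter(x, y, c=[colors[t] for t in targets])
plt.show()
```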
I am creating scatterplots of data with integer targets. Naturally, I represent the targets as color in the scatterplot. However, sometimes my models, because of the nature of the model, predict targets that are not in the original set. I.e., my training targets are chosen from [0,1,2], and my model occasionally predicts 3, because it is not very bright. The problem is that when I scatterplot my data, and then separately scatterplot the predictions, the target 2 gets mapped to a different color in each scatter, which makes for a bad picture. This is because matplotlib by default scales the values in my color list to be between some given values. I would like to override this default behavior, and have my color list (which are integers), always map to the same color, e.g. 1 maps to green, regardless of how many different classes are in my c=targets parameter.
Topics: Data Science and Machine Learning · 174 views

Q_Id 50,976,177 · A_Id 50,977,040 · created 2018-06-21T19:25:00.000 · accepted: true · answer score 1.2 · Q_Score 3 · Users Score 2 · answer count 1 · available count 1
Title: matplotlib modified color map with white as zero
Tags: python,matplotlib,colormap
You would rather mask the zeros out of your data, e.g. by setting those values to nan or by using a masked array. Then you can just set_bad("white") on your colormap.
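A sketch of the masked-array route; the copy() call on the colormap assumes matplotlib 3.4 or newer:

```python
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(10, 10)
data[3:5, 3:5] = 0.0

masked = np.ma.masked_equal(data, 0.0)   # exact zeros become "bad" values
cmap = plt.get_cmap("viridis").copy()    # Colormap.copy() needs mpl >= 3.4
cmap.set_bad("white")                    # masked cells render as white

plt.imshow(masked, cmap=cmap)
plt.colorbar()
plt.show()
```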
Some of the standard matplotlib cmaps, such as viridis or jet show dark colors in small values. While this is what I need, I like them to show nothing, i.e. white background if the value is exactly zero. For non zero values the usual colors of that color map are fine. Is it possible to do this?
Topics: Data Science and Machine Learning · 4,611 views

Q_Id 50,977,839 · A_Id 57,412,703 · created 2018-06-21T21:46:00.000 · accepted: false · answer score 0 · Q_Score 3 · Users Score 0 · answer count 5 · available count 4
Title: Locking of HDF files using h5py
Tags: python-2.7,hdf5,h5py,hdf
With h5py.File(), the same .h5 file can be opened for reading ("r") multiple times. But h5py doesn't support more than a single thread, and you can experience bad data with multiple concurrent readers.
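One commonly suggested workaround for the lock error, offered here as an assumption to verify for your own setup, is disabling HDF5's file locking (introduced in HDF5 1.10) via an environment variable set before the library is loaded; it trades safety for convenience, so only use it when no writer is active:

```python
import os

# Must be set before h5py (and libhdf5) is imported for it to take effect.
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"

import h5py

with h5py.File("data.h5", "r") as f:  # placeholder file name
    print(list(f.keys()))
```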
I have a whole bunch of code interacting with hdf files through h5py. The code has been working for years. Recently, with a change in python environments, I am receiving this new error message. IOError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable') What is interesting is the error occurs intermittently in some places and persistently in others. In places where it is occuring routinely, I have looked at my code and confirm that there is no other h5py instance connected to the file and that the last connection was properly flushed and closed. Again this was all working fine prior to the environment change. Heres snippets from my conda environment: h5py 2.8.0 py27h470a237_0 conda-forge hdf4 4.2.13 0 conda-forge hdf5 1.10.1 2 conda-forge
Topics: Data Science and Machine Learning · 7,784 views

Q_Id 50,977,839 · A_Id 71,661,095 · created 2018-06-21T21:46:00.000 · accepted: false · answer score 0 · Q_Score 3 · Users Score 0 · answer count 5 · available count 4
Title: Locking of HDF files using h5py
Tags: python-2.7,hdf5,h5py,hdf
Similar to the other answers: I had already opened the file, but in my case it was open in a separate HDF5 viewer.
I have a whole bunch of code interacting with hdf files through h5py. The code has been working for years. Recently, with a change in python environments, I am receiving this new error message. IOError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable') What is interesting is the error occurs intermittently in some places and persistently in others. In places where it is occuring routinely, I have looked at my code and confirm that there is no other h5py instance connected to the file and that the last connection was properly flushed and closed. Again this was all working fine prior to the environment change. Heres snippets from my conda environment: h5py 2.8.0 py27h470a237_0 conda-forge hdf4 4.2.13 0 conda-forge hdf5 1.10.1 2 conda-forge
Topics: Data Science and Machine Learning · 7,784 views

Q_Id 50,977,839 · A_Id 51,071,193 · created 2018-06-21T21:46:00.000 · accepted: false · answer score 0.07983 · Q_Score 3 · Users Score 2 · answer count 5 · available count 4
Title: Locking of HDF files using h5py
Tags: python-2.7,hdf5,h5py,hdf
My issue: I had failed to close the file in an obscure method. The interesting thing is that unlocking the file sometimes just took a restart of IPython, and other times took a full reboot.
I have a whole bunch of code interacting with hdf files through h5py. The code has been working for years. Recently, with a change in python environments, I am receiving this new error message. IOError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable') What is interesting is the error occurs intermittently in some places and persistently in others. In places where it is occuring routinely, I have looked at my code and confirm that there is no other h5py instance connected to the file and that the last connection was properly flushed and closed. Again this was all working fine prior to the environment change. Heres snippets from my conda environment: h5py 2.8.0 py27h470a237_0 conda-forge hdf4 4.2.13 0 conda-forge hdf5 1.10.1 2 conda-forge
0
1
7,784
0
65,687,853
0
0
0
0
4
false
3
2018-06-21T21:46:00.000
1
5
0
Locking of HDF files using h5py
50,977,839
0.039979
python-2.7,hdf5,h5py,hdf
I had another process running that I did not realize. How I solved my problem: I used ps aux | grep myapp.py to find the process number that was running myapp.py, killed the process using the kill command, and ran it again.
I have a whole bunch of code interacting with hdf files through h5py. The code has been working for years. Recently, with a change in python environments, I am receiving this new error message. IOError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable') What is interesting is the error occurs intermittently in some places and persistently in others. In places where it is occurring routinely, I have looked at my code and confirmed that there is no other h5py instance connected to the file and that the last connection was properly flushed and closed. Again, this was all working fine prior to the environment change. Here are snippets from my conda environment: h5py 2.8.0 py27h470a237_0 conda-forge hdf4 4.2.13 0 conda-forge hdf5 1.10.1 2 conda-forge
0
1
7,784
0
50,995,396
0
0
0
0
1
false
0
2018-06-22T03:23:00.000
0
1
0
How to cluster some object in HAC but they have same value of Cosine Similarity
50,980,199
0
python-2.7,cluster-analysis,hierarchical-clustering,cosine-similarity
With cosine similarity, you'll probably want to stop at 0... But of course the problem of ties can arise with any distance function, too. There obviously is no mathematical answer: the tied merges are all equally good. Usually, one hopes that the order does not matter; for the first merge it doesn't, but for all the others it does. Don't forget that HAC cannot guarantee to find the best solution (except for single link). So just choose any, or even all at once. It's fairly common to choose the first found. This allows getting different versions by shuffling the data.
I want to cluster Object A with Object B or Object C, but the cosine similarity of Object A with Object B is 0 and the cosine similarity of Object A with Object C is also 0. Before it is clustered directly, I need to cluster those objects step by step: which one should be combined first, Object A with B or Object A with C?
0
1
18
0
51,001,967
0
1
0
0
2
false
0
2018-06-22T17:44:00.000
0
2
0
Tensor flow package installation in PyCharm
50,993,189
0
python-3.x,tensorflow,pycharm
Please run pip install tensorflow in command line and post the output here. Tensorflow can be installed on Windows but the process is often annoying.
I have been successfully using PyCharm for my python work. All the packages can be easily installed by going to settings and then project interpreter, but the tensorflow installation is showing an error. In suggestions it asked me to upgrade the pip module, but even after that it shows an error with the following message: "Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow". I am able to install all other packages; the error is with tensorflow only. I am using Windows.
0
1
182
0
51,021,687
0
1
0
0
2
false
0
2018-06-22T17:44:00.000
0
2
0
Tensor flow package installation in PyCharm
50,993,189
0
python-3.x,tensorflow,pycharm
You could also try Anaconda. It has a very nice UI and you can switch between different versions.
I have been successfully using PyCharm for my python work. All the packages can be easily installed by going to settings and then project interpreter, but the tensorflow installation is showing an error. In suggestions it asked me to upgrade the pip module, but even after that it shows an error with the following message: "Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow". I am able to install all other packages; the error is with tensorflow only. I am using Windows.
0
1
182
0
50,994,909
0
0
0
0
1
false
3
2018-06-22T18:41:00.000
1
1
0
Getting IDs from t-SNE plot?
50,993,934
0.197375
python,mapping
If you are using sklearn's t-SNE, then your assumption is correct. The ordering of the inputs match the ordering of the outputs. So if you do y=TSNE(n_components=n).fit_transform(x) then y and x will be in the same order so y[7] will be the embedding of x[7]. You can trust scikit-learn that this will be the case.
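A quick sanity check of the row ordering, as a sketch (data and shapes are hypothetical):

```python
from sklearn.manifold import TSNE
import numpy as np

x = np.random.rand(100, 50)            # high-dimensional input
y = TSNE(n_components=2).fit_transform(x)

# Row i of the embedding corresponds to row i of the input,
# so y[7] is the 2-D embedding of x[7].
print(x.shape, y.shape)                # (100, 50) (100, 2)
```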
Quite simple, If I perform t-SNE in Python for high-dimensional data then I get 2 or 3 coordinates that reflect each new point. But how do I map these to the original IDs? One way that I can think of is if the indices are kept fixed the entire time, then I can do: Pick a point in t-SNE See what row it was in t-SNE (e.g. index 7) Go to original data and pick out row/index 7. However, I don't know how to check if this actually works. My data is super high-dimensional and it is very hard to make sense of it with a normal "sanity check". Thanks a lot! Best,
0
1
660
0
56,987,465
0
0
0
0
1
false
4
2018-06-22T21:16:00.000
1
3
0
Ignoring visible gpu device with compute capability 3.0. The minimum required Cuda capability is 3.5
50,995,707
0.066568
python,docker,tensorflow
I just spent a day trying to build this thing from source, and what finally worked for me is quite surprising: the pre-built wheel for TF 1.5.0 does not complain about this anymore, while the pre-built wheel for TF 1.14.0 does complain. It seems you have used the same version, so it's quite interesting, but I thought I would share, so if anyone struggles with this, there seems to be an easy way out. Configs: Visual Studio version: 2017; Cuda compute capability: 3.0; GPU: two GeForce 755M; OS: Windows 10; Python: 3.6.8; Cuda Toolkit: 9.0; CuDNN: 7.0 (the earliest available version is needed, but it will complain anyway)
I am running Tensorflow 1.5.0 in a docker container because I need to use a version that doesn't use the AVX bytecodes, since the hardware I am running on is too old to support them. I finally got tensorflow-gpu to import correctly (after downgrading the docker image to tf 1.5.0), but now when I run any code to detect the GPU it says the GPU is not there. I looked at the docker log and Jupyter is spitting out this message: Ignoring visible gpu device (device: 0, name: GeForce GTX 760, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5. The tensorflow website says that GPUs with compute capability of 3.0 are supported, so why does it say it needs compute capability 3.5? Is there any way to get a docker image for tensorflow and jupyter that uses tf 1.5.0 but supports GPUs with compute capability 3.0?
0
1
9,306
0
51,002,024
0
0
0
0
1
true
0
2018-06-23T12:54:00.000
0
1
0
Unable to import cv2 in 64 bit version of python 3.6.5
51,001,356
1.2
python,opencv
You may uninstall the previous OpenCV installation via pip uninstall opencv-python, then do pip install opencv-python again.
I had installed opencv in python 3.6 32 bit version using the command 'pip install opencv-python', which I successfully used. Later when I upgraded my version to 64 bit as to use tensorflow as well, and ran the same command 'pip install opencv-python', opencv was already present, yet when I tried to import cv2 module, it showed the error that any module named cv2 is not found. Now I don't understand what to do next. Hope someone can help.
0
1
433
0
51,012,003
0
1
0
0
1
true
0
2018-06-24T00:45:00.000
1
1
0
importing library Jupyter Notebook vs Canopy
51,006,130
1.2
python,jupyter,canopy
Each Python environment is independent. Installing a package into an anaconda Python environment does not install it into a Canopy Python environment (nor into a different anaconda Python environment). This is a feature, not a bug; it allows different Python environments to be configured differently, even incompatibly. To use OpenCV in Canopy User Python environment, first install it using the Canopy Package Manager.
I just installed Canopy because I had some issues running code in Jupyter Notebook. I have an Anaconda distribution installed. I installed OpenCV through anaconda and can easily import cv2 in Jupyter Notebook. However, when I import cv2 in Canopy IDE it says "No module named cv2". How can I safely fix this?
0
1
327
0
51,192,080
0
0
0
0
1
true
0
2018-06-25T06:08:00.000
0
1
0
numpy concatenate multiple arrays arrays
51,017,203
1.2
python,numpy,concatenation
So, the main problem here was that one of the arrays had shape (0,) instead of (0,227,227,3). np.concatenate(alist, axis=0) works.
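A minimal sketch of the join, assuming every array shares the trailing (227,227,3) shape:

```python
import numpy as np

# Arrays with different leading dimensions but identical trailing dims.
a = np.zeros((10, 227, 227, 3))
b = np.zeros((4, 227, 227, 3))

# Axis 0 is the sample axis; axes 1 and 2 are image height and width.
joined = np.concatenate([a, b], axis=0)
print(joined.shape)   # (14, 227, 227, 3)
```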
I have many numpy arrays of shape (Ni,227,227,3), where Ni of each array is different. I want to join them and make array of shape (N1+N2+..+Nk,227,227,3) where k is the number of arrays. I tried numpy.concatenate and numpy.append but they ask for same dimension in axis 0. I am also confused on what is axis 1 and axis 2 in my arrays.
0
1
1,321
0
51,018,770
0
0
0
0
2
false
0
2018-06-25T07:56:00.000
0
4
0
How to add rows to pandas dataframe with reasonable performance
51,018,628
0
python,pandas,dataframe
The fastest way would be to load the dataframe directly via pd.read_csv(). Try separating the logic: first clean the unstructured data into structured data, then use pd.read_csv to load the dataframe. If you can share a sample unstructured line and the logic for extracting the structured data, that might give more insight.
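A sketch of that two-phase approach; the parse_line function and the file names are hypothetical stand-ins for the asker's extraction logic:

```python
import pandas as pd

def parse_line(line):
    # Hypothetical: extract the structured fields from one raw line.
    return line.strip().split("|")

# Phase 1: write the cleaned rows to a flat CSV.
with open("raw.txt") as src, open("clean.csv", "w") as dst:
    for line in src:
        dst.write(",".join(parse_line(line)) + "\n")

# Phase 2: bulk-load with pandas' fast CSV parser.
df = pd.read_csv("clean.csv", header=None)
```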
I have an empty data frame with about 120 columns that I want to fill using data I have in a file. I'm iterating over a file that has about 1.8 million lines. (The lines are unstructured; I can't load them into a dataframe directly.) For each line in the file I do the following: extract the data I need from the current line; copy the last row in the data frame and append it to the end, df = df.append(df.iloc[-1]) (the copy is critical, as most of the data in the previous row won't be changed); change several values in the last row according to the data I've extracted, df.iloc[-1, df.columns.get_loc('column_name')] = some_extracted_value. This is very slow, and I assume the fault is in the append. What is the correct approach to speed things up? Preallocate the dataframe? EDIT: After reading the answers I did the following: I preallocated the dataframe (saved about 10% of the time); I replaced df = df.append(df.iloc[-1]) with df.iloc[i] = df.iloc[i-1], where i is the current iteration in the loop (saved about 10% of the time). I did profiling; even though I removed the append, the main issue is copying the previous line, meaning df.iloc[i] = df.iloc[i-1] takes about 95% of the time.
0
1
88
0
51,018,824
0
0
0
0
2
false
0
2018-06-25T07:56:00.000
0
4
0
How to add rows to pandas dataframe with reasonable performance
51,018,628
0
python,pandas,dataframe
Where you use append you end up copying the dataframe, which is inefficient. Try the whole thing again, but avoid this line: df = df.append(df.iloc[-1]). You could do something like this to copy the last row to a new row (only do this if the last row contains information that you want in the new row): df.iloc[...calculate the next available index...] = df.iloc[-1]. Then edit the last row accordingly, as you have done: df.iloc[-1, df.columns.get_loc('column_name')] = some_extracted_value
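A preallocation sketch along those lines; the column names and the extraction step are hypothetical:

```python
import pandas as pd

n_rows = 1800000
# Preallocate the full frame once instead of growing it with append.
df = pd.DataFrame(index=range(n_rows), columns=["col_a", "col_b"])

for i in range(n_rows):
    if i > 0:
        df.iloc[i] = df.iloc[i - 1]              # carry forward previous row
    # Overwrite just the extracted value (iat takes integer positions).
    df.iat[i, df.columns.get_loc("col_a")] = i   # stand-in for the real value
```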
I have an empty data frame with about 120 columns, I want to fill it using data I have in a file. I'm iterating over a file that has about 1.8 million lines. (The lines are unstructured, I can't load them to a dataframe directly) For each line in the file I do the following: Extract the data I need from the current line Copy the last row in the data frame and append it to the end df = df.append(df.iloc[-1]). The copy is critical, most of the data in the previous row won't be changed. Change several values in the last row according to the data I've extracted df.iloc[-1, df.columns.get_loc('column_name')] = some_extracted_value This is very slow, I assume the fault is in the append. What is the correct approach to speed things up ? preallocate the dataframe ? EDIT: After reading the answers I did the following: I preallocated the dataframe (saved like 10% of the time) I replaced this : df = df.append(df.iloc[-1]) with this : df.iloc[i] = df.iloc[i-1] (i is the current iteration in the loop).(save like 10% of the time). Did profiling, even though I removed the append the main issue is copying the previous line, meaning : df.iloc[i] = df.iloc[i-1] takes about 95% of the time.
0
1
88
0
56,691,091
0
0
0
0
1
false
8
2018-06-25T11:50:00.000
14
3
0
subsample, colsample_bytree, colsample_bylevel in XGBClassifier() Python 3.x
51,022,822
1
python-3.x,xgboost
The idea of "subsample", "colsample_bytree", and "colsample_bylevel" comes from Random Forests. In a random forest, you build an ensemble of many trees and then group them together when making a prediction. The "random" part happens through random sampling of the training samples for each tree (bootstrapping), and building each tree (actually each of its nodes) considering only a random subset of the attributes. In other words, for each tree in a random forest you: Select a random sample from the dataset to train this tree; For each node of this tree, use a random subset of the features. This avoids overfitting and decorrelates the trees. Similarly to random forests, XGB is an ensemble of weak models that when put together give robust and accurate results. The weak models can be decision trees, which can be randomized in the same way as in random forests. In this case: "subsample" is the fraction of the training samples (randomly selected) that will be used to train each tree. "colsample_bytree" is the fraction of features (randomly selected) that will be used to train each tree. "colsample_bylevel" is the fraction of features (randomly selected) that will be used at each depth level of each tree.
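A small sketch of setting these knobs; the values are illustrative, not recommendations:

```python
from xgboost import XGBClassifier

clf = XGBClassifier(
    subsample=0.8,          # 80% of training rows sampled per tree
    colsample_bytree=0.8,   # 80% of features sampled per tree
    colsample_bylevel=0.5,  # 50% of the tree's features sampled per depth level
)
# Then fit as usual: clf.fit(X_train, y_train)
```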
I've spent a good deal of time trying to find out what these "subsample", "colsample_by_tree", and "colsample_bylevel" actually did in XGBClassifier() but I can't exactly find out what they do. Can someone please explain briefly what it is they do? Thanks!
0
1
13,307
0
51,035,810
0
0
0
1
1
false
1
2018-06-25T12:36:00.000
0
1
0
Python pandas dataframe transaction
51,023,642
0
python,pandas,dataframe,transactions,sqlalchemy
After further investigation I realized that it is possible to do this only with sqlite3, because to_sql supports both a sqlalchemy engine and a plain connection object as the conn parameter, but as a connection it is supported only for a sqlite3 database. In other words, you have no influence on the connection that will be created by the to_sql function of the dataframe.
Please suggest a way to execute SQL statement and pandas dataframe .to_sql() in one transaction I have the dataframe and want to delete some rows on the database side before insertion So basically I need to delete and then insert in one transaction using .to_sql of dataframe I use sqlalchemy engine with pandas.df.to_sql()
0
1
1,173
0
51,275,886
0
0
0
0
1
false
0
2018-06-25T13:53:00.000
0
1
0
Maximising prediction accuracy of the majority class in an imbalanced dataset
51,025,178
0
python,optimization,classification,data-science
You are thinking about this the wrong way. If all you cared about was the majority class, you could just predict everything as belonging to the majority class. You'd get 100% of them right. You would have lots of false positives, but you don't care about those, right? Ah, if you do care about the false positives, then that means you actually care about the minority class after all. The more things in the minority class you predict correctly, the fewer false positives you have. These are two sides of the same coin.
When talking about imbalanced datasets, most articles would refer to maximising the prediction of the minority class (e.g. for fraud detection). I have an imbalanced dataset (ratio approximately 1:20). where I am interested to achieve the highest prediction accuracy for the majority class. My work is in Python. Possible solutions I have researched are: Oversampling of the minority class Changing the loss/cost matrix for some classification models What are the pros and cons of using each method? Are there any other methods I could try?
0
1
129
0
51,027,093
0
0
0
0
1
false
1
2018-06-25T15:25:00.000
0
2
0
Install Keras/Tensorflow on Mac with cpu python2.7
51,026,983
0
python,macos,python-2.7,tensorflow,cpu
I think what you read meant that tensorflow programs run much faster if your computer has a GPU. You need an Nvidia GPU in your computer to install tensorflow with GPU support on your Mac, and as far as I know, after version 1.2 tensorflow no longer provides GPU support for macOS.
I recently found an article that indicates that the conventional methods for downloading python machine learning modules such as tensorflow and keras are not optimized for computers with a cpu. How can I configure tensorflow and keras to make it most compatible with my processor on MacOSX in python 2.7? If it helps, I use pycharm to download most of my libraries and for my coding interface.
0
1
784
0
51,041,372
0
0
0
0
1
true
0
2018-06-26T07:17:00.000
0
1
0
How to restart dlib's correlation tracker
51,037,016
1.2
python,tracking,dlib
Call the correlation_tracker's start_track() member function.
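A sketch of re-initializing the same tracker on a new video; the frame loading and the initial bounding box are hypothetical:

```python
import dlib

tracker = dlib.correlation_tracker()

def restart(first_frame, box):
    # box is a dlib.rectangle around the target in the new video's
    # first frame. Calling start_track again re-initializes the same
    # tracker object, so no new tracker is needed per video.
    tracker.start_track(first_frame, box)

# On later frames: tracker.update(frame); pos = tracker.get_position()
```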
I'm using dlib's correlation tracker and would like to restart it on some cue. When I pass None as the image it crashes. How can I tell the tracker a new video is starting? I'm using multiple threads and would not like to open a new tracker every time. Thank you!
0
1
213
0
51,039,032
0
0
0
0
2
false
5
2018-06-26T07:35:00.000
3
2
0
Linear Regression vs Random Forest performance accuracy
51,037,363
0.291313
python,data-science
There surely are situations where Linear Regression outperforms Random Forests, but I think the more important thing to consider is the complexity of the model. Linear models have very few parameters; Random Forests have a lot more. That means that Random Forests will overfit more easily than a Linear Regression.
If the dataset contains features some of which are Categorical Variables and some of the others are continuous variable Decision Tree is better than Linear Regression,since Trees can accurately divide the data based on Categorical Variables. Is there any situation where Linear regression outperforms Random Forest?
0
1
9,961
0
51,062,800
0
0
0
0
2
false
5
2018-06-26T07:35:00.000
2
2
0
Linear Regression vs Random Forest performance accuracy
51,037,363
0.197375
python,data-science
Key advantages of linear models over tree-based ones are: (1) they can extrapolate (e.g. if labels are between 1 and 5 in the train set, a tree-based model will never predict 10, but a linear one will); (2) they can be used for anomaly detection because of extrapolation; (3) interpretability (yes, tree-based models have feature importance, but it's only a proxy; weights in a linear model are better); (4) they need less data to get good results; (5) they have strong online learning implementations (Vowpal Wabbit), which is crucial for working with giant datasets with a lot of features (e.g. texts).
If the dataset contains features some of which are Categorical Variables and some of the others are continuous variable Decision Tree is better than Linear Regression,since Trees can accurately divide the data based on Categorical Variables. Is there any situation where Linear regression outperforms Random Forest?
0
1
9,961
0
51,051,880
0
0
0
0
1
true
0
2018-06-26T17:21:00.000
0
1
0
how does Keras flow_from_directory affect computer storage?
51,048,421
1.2
python-3.x,tensorflow,machine-learning,neural-network,keras
By default, ImageDataGenerator does data augmentation on the fly and does not store the augmented images anywhere. As you mention, doing so would require too much space. So you should only worry about having enough RAM to fit a certain number of augmented batches, not the whole dataset.
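A minimal sketch of the on-the-fly behavior; the directory path and sizes are hypothetical:

```python
from keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(rotation_range=20, horizontal_flip=True)

# Augmented batches are produced lazily; nothing is written to disk
# unless save_to_dir is passed to flow_from_directory.
flow = gen.flow_from_directory("data/train", target_size=(224, 224),
                               batch_size=32)
x_batch, y_batch = next(flow)   # one augmented batch held in RAM
```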
I'm trying to figure out the minimum amount of room I will need to train neural networks on my machine. Often times (image) data sets are relatively small in their raw forms, but when we transform them (in keras w/ flow_from_dir) we augment the images and kind of multiply the size of the data set to our desire. My question is: how does flow_from_directory work with storing the augmented images? If I don't specify that the images need to be stored (parameter of the class) does keras augment the image, train with it and discard it or does it save it for a period of time, weighing down my computer until training is over? Also, would those images be in persistent memory or RAM? Thank you in advance.
0
1
369
0
51,058,681
0
0
0
0
1
false
2
2018-06-27T08:24:00.000
0
1
0
Evaluation of Forecasting performance Metric on original or transformed Dependent Variable
51,057,993
0
python,logging,machine-learning,transformation,metrics
I would say that in your circumstance it is necessary to scale back to true prices. This is not an absolute statement, but really depends on the setup of your problem: if you have a true price that is "1", then its log will be "0" and, whatever you predict for that single point, you'll get undefined / infinite MAPE. So I'd say yes, at least scale it back to exp before doing it. Also I don't understand the difference between "1" and "2": they seem identical to me, in "1" you're just taking the log of the price for the test set and then taking the exp again, in "2" you're just avoiding doing the two operations... As for "3", no, definitely they aren't independent on all transformations - in particular not log. MAPE is only independent to rescaling data by a constant factor, MAE by shifting it by a constant addend. On this point, beware that no measure gives a perfect truth and you might get very bad results just applying them. For example, using MAPE, if you have something with price of 1 cent and you're estimating it at 1$ you'll give it the same (huge) error as if it had price 1000$ and you're estimating it at 100k. On the other hand, since you're taking logs in training, that's basically the same rule you're using to train your model, so it might not be catastrophic. Just beware if you have true prices that are very close to, or worse exactly, 0. (MAE is probably worse in this case, because it will basically give all the weight to the few very expensive items in your database, but I can't say for sure from here)
I am building a machine learning model to forecast future prices in scikit-learn. The dependent variable price is not normally distributed, thus, I will perform log transformation on only dependent variable price using np.log(price). After this, I will split complete data-set into train and test sets. Thus y_train and y_test both are now log transformed prices. After machine learning model fitting, I have to calculate forecasting performance metrics like MAPE error for the fitted model. Should the data (price) be transformed back to its original scale before calculating MAPE using np.exp() for both model.predict method and y_test set ? Or we should first split the data into train and test, apply log transformation on training set y_train only, after this apply inverse transform on model.prediction set. Thus, y_test set (original) and np.exp(model.predict()) would then be used to calculate MAPE Or the values of MAPE or MAE metric is independent of scaling of response variable y and MAPE can be reported using transformed log values of dependent variable price?
0
1
514
0
55,392,343
0
1
0
0
1
false
1
2018-06-27T13:40:00.000
0
1
0
TimeoutError import error when using matlab engine with python 3.5
51,064,328
0
python,matlab,matlab-engine
I get a similar error when running. But after trying several times, I found that for the same *.py script, the statements import matlab.engine and eng = matlab.engine.start_matlab() should be executed only once. I commented them out, and by doing this I can re-run the *.py script again; otherwise, it posts the error ImportError: cannot import name 'TimeoutError'. I think using import matlab.engine to start a new MATLAB® process is like opening a door: before we close the door, everyone can enter or leave the room again and again, and there is no need to open the door again because it is not closed yet. "Restarting the kernel" is like closing the door automatically (Python stops the engine and its MATLAB process).
I am trying to run a function written in matlab in a python script using matlab.engine. The first time I run the script everything works fine, but when I try to run the script again I get the error "ImportError: cannot import name 'TimeoutError'" on importing the matlab engine. Restarting the kernel allows me to run the script again. I am also using import matlab.engine and not from matlab.engine import to avoid circular importing. Any suggestions on how I can solve the issue? I am using Ubuntu 16.04 and working with spyder. Many Thanks!
0
1
429
0
51,066,356
0
0
1
0
1
false
1
2018-06-27T15:12:00.000
1
2
0
the best way to conduct fft using GPU accelaration with cuda
51,066,245
0.099668
python,cuda,cufft
"I want to use pycuda to accelerate the fft": you can't. PyCUDA has no built-in FFT support of any kind.
In python, what is the best to run fft using cuda gpu computation? I am using pyfftw to accelerate the fftn, which is about 5x faster than numpy.fftn. I want to use pycuda to accelerate the fft. I know there is a library called pyculib, but I always failed to install it using conda install pyculib. Is there any suggestions?
0
1
1,084
0
51,071,525
0
1
0
0
1
false
0
2018-06-27T21:11:00.000
0
4
0
Separate a list based on values in a second list
51,071,467
0
python
list(itertools.compress(X, Y)) will get you the list of good lists. list(itertools.compress(X, [not a for a in Y])) will get you the list of bad lists.
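A tiny worked example of the two calls:

```python
from itertools import compress

X = [[1, 2], [3, 4], [5, 6], [7, 8]]
Y = [1, 0, 1, 0]

good = list(compress(X, Y))                   # [[1, 2], [5, 6]]
bad = list(compress(X, [not a for a in Y]))   # [[3, 4], [7, 8]]
```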
X: a list of lists, where each list element corresponds to a label in Y Y: a binary list of labels (values are either 1 or 0) I want to extract the elements in X according to the value at the corresponding index in Y, as follows: good = values of X where the label/value in Y is 1 bad = values of X where the label/value in Y is 0 I am still fairly new to sub-setting in Python and not really sure of a good way to do this.
0
1
123
0
51,078,286
0
0
0
0
1
false
0
2018-06-28T07:01:00.000
0
1
0
how to plot different stats for every year
51,076,577
0
python,matplotlib,time,statistics
A line chart for each stat with year on the x-axis, percentage on the y-axis and a line for each user.
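A sketch using the sample data's own column names (year, x, y, z, uniq_key); the file name stats.csv is hypothetical:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("stats.csv", sep="|")   # the pipe-separated sample rows

for stat in ["x", "y", "z"]:
    plt.figure()
    for key, grp in df.groupby("uniq_key"):
        plt.plot(grp["year"], grp[stat], label=key)   # one line per person
    plt.xlabel("year"); plt.ylabel(stat); plt.legend(); plt.title(stat)
plt.show()
```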
I have a dataset where, over 5 years, for each person I have 3 stats (for, against, neutral) represented as percentages. Do you have any ideas on how to plot this over time for each person? I thought of a pie chart for each year; is that a good idea? year|x|y|z|uniq_key 2011|0.005835365238989241|0.7761263149278178|0.21803831983319283|P1 2012|0.009289549431275945|0.7361277645833669|0.2545826859853564|P1 2013|0.008078032727124805|0.7392018809692821|0.2527200863035934|P1 2011|0.0069655450457009405|0.7275518410866034|0.26548261386769734|P2 2012|0.008719188605542267|0.7855644106925562|0.20571640070190217|P2 2013|0.011938649525870312|0.8028230127919557|0.1852383376821743|P2
0
1
28
0
51,085,292
0
0
0
0
1
true
0
2018-06-28T10:33:00.000
2
1
0
Why doesn't Dask dataframe have a shape attribute?
51,080,681
1.2
python,dataframe,dask
This has been discussed in dask. First I'll point out that in the Python spec, len() is always supposed to return a concrete integer. Dask respects this, and so len(df) blocks, unlike most operations on a dataframe. There is no such constraint on .size, which is therefore lazy. The metadata of the dataframe is immediately available: the number, names and types of the columns are known without computing any of the data. Therefore, .shape would be a combination of a known value and either a lazy or a slowly-computed concrete value. This doesn't seem necessary.
Just out of curiosity, if dask enables both len() and size, why is there not shape as well?
0
1
660
0
51,082,005
0
0
0
0
2
true
2
2018-06-28T11:11:00.000
2
2
0
Is the usage of on-line data augmentation a fair comparison between CNN models
51,081,439
1.2
python,tensorflow,machine-learning,keras,convolutional-neural-network
If I understand you correctly, you are wondering whether the randomness caused by the data augmentation affects the result? The randomness of the augmentation does not affect the result (at least not to a degree that makes a difference) if you train long enough. The other options you have are (as I think about it): Augment your data deterministically, applying the same transformations to your images before inserting them into your model. Those transformations could be (a) random ones, e.g. rotate your images by a random degree between some limits, or (b) predetermined ones, e.g. rotate all your images by 1, 3 and 5 degrees. Don't augment your data at all; use your initial data to train your model. The effects of those choices are: The number of transformations you would apply is limited, and even if choice 1a is chosen it would be a predefined set. If you are willing to increase this dramatically, other issues arise, like where you are going to store all this data and how you are going to handle it during training. So on-the-fly augmentation has the advantage that neither the storage of your data nor the way you deal with it changes; the disadvantage is that a slower procedure is used (which, depending on the transformations, could make quite a difference). For the second choice to be valid you need a lot of data, and a lot (depending on the problem, of course) is sometimes not enough. Since you are (probably) using different data for testing, differences appear between your training and testing data in many aspects. For example, for human detection (an arbitrary choice) differences in poses, colors, light conditions, image clarity, image size and aspect ratio are common. How do you deal with that? You either collect a super huge collection of data or (probably) use data augmentation, right? To sum it up, it's fair because in the long run it does not make a big difference. Consider the option of early stopping for your model, for example: is it fair to compare models that have stopped their training not at the best iteration? Well, it's not completely fair, but it does not make a difference.
I am using on-line data augmentation of the images I feed into my Convolutional Neural Network, via the Keras ImageDataGenerator. The images are augmented in each batch and then the model is trained on these images. I am comparing different models, but since the images are augmented on the fly, is this really fair, since each model is getting slightly different images?
0
1
424
0
51,081,966
0
0
0
0
2
false
2
2018-06-28T11:11:00.000
1
2
0
Is the usage of on-line data augmentation a fair comparison between CNN models
51,081,439
0.099668
python,tensorflow,machine-learning,keras,convolutional-neural-network
In my opinion, you already give part of the answer within your question: images are augmented on the fly, is this really fair, since each model is getting slightly different images? For evaluation/validation I usually try to provide situations as similar as possible across the different architectures - otherwise you might induce unnecessary bias you are not able to account for. You could also reduce computational effort with offline augmentation and then directly hand over the same augmented training samples.
I am using on-line data augmentation of the images I feed into my Convolutional Neural Network, via the Keras ImageDataGenerator. The images are augmented in each batch and then the model is trained on these images. I am comparing different models, but since the images are augmented on the fly, is this really fair, since each model is getting slightly different images?
0
1
424
0
51,082,709
0
1
0
0
2
false
0
2018-06-28T12:06:00.000
0
2
0
OpenCV install on Python 3.6: ModuleNotFoundError
51,082,504
0
python,module,pip
Use pip install -U opencv-python
I am getting the following error when using import cv2: ModuleNotFoundError: No module named 'cv2' My version of Python is 3.6 64 bit. I have downloaded the whl file to install it via pip manually, and have also installed it with pip install opencv-python however I still get ModuleNotFoundError. pip outputs Requirement already satisfied: opencv-python in c:\...\python36\site-packages Help would be much appreciated.
0
1
1,891
0
51,082,582
0
1
0
0
2
false
0
2018-06-28T12:06:00.000
2
2
0
OpenCV install on Python 3.6: ModuleNotFoundError
51,082,504
0.197375
python,module,pip
Your download must have been corrupted. It happened to me too. Simply uninstall the package and use sudo apt-get install python-opencv
I am getting the following error when using import cv2: ModuleNotFoundError: No module named 'cv2' My version of Python is 3.6 64 bit. I have downloaded the whl file to install it via pip manually, and have also installed it with pip install opencv-python however I still get ModuleNotFoundError. pip outputs Requirement already satisfied: opencv-python in c:\...\python36\site-packages Help would be much appreciated.
0
1
1,891
0
51,105,559
0
0
0
0
1
false
2
2018-06-29T15:50:00.000
0
2
0
Error by running script with python3
51,105,431
0
python-3.x,pandas
Maybe you're using python2 instead of python3. Use the appropriate syntax to import pandas (import pandas as pd, not from pandas as pd) and also try pip install pandas
I have tried to run a script with pandas in python3, but it produces a terrible error I don't even understand. Note: my script is "gh.py" and has an error in line 1; I can't import pandas. File "gh.py", line 1 from pandas as pd ^ SyntaxError: invalid syntax
0
1
23
0
51,106,833
0
0
0
0
1
true
0
2018-06-29T16:57:00.000
0
1
0
Tensorflow 1.9 DNNClassifier unequal output labels optimization
51,106,402
1.2
python,tensorflow,machine-learning,artificial-intelligence
What you said is considered one approach to solving this issue, as it prevents your computed gradient from being dominated by classes 1 and 2. Depending on your data, it may be effective to add some Gaussian noise to the underrepresented samples to create similar samples. Another good idea in general to avoid overfitting is to apply dropout, if you have not already.
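A sketch of the Gaussian-noise idea for oversampling rare classes; the array names and sigma are hypothetical:

```python
import numpy as np

def jitter_oversample(X_rare, n_copies=5, sigma=0.01):
    # Create perturbed copies of the rare-class rows; sigma should be
    # small relative to the feature scale.
    noisy = [X_rare + np.random.normal(0.0, sigma, X_rare.shape)
             for _ in range(n_copies)]
    return np.concatenate([X_rare] + noisy, axis=0)
```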
I have a classification problem which requires some optimization, as my results are not quite adequate. I'm using the DNNClassifier for a huge dataset in order to classify items in 5 different classes (labels). I have over 2000 distinct items (in a hashbucket column with size 2000 and dims 6 - is this adequate?) and multiple numeric columns for said classification. My problem is the following: the amount of items belonging in each class is very variable. Class 1 is very common, class 2 is common but classes 3 4 and 5 are highly uncommon (under 2% of the dataset) but they are the most interesting ones in my test case. Even if I tweak the network size/number of neurons or the training epoch, I get close to no results in classes 3, 4 and 5, so class 1 and 2 are clearly overfitted. I saw the weight_column option in the documentation - would that be a good idea to change the learning weight of these three class to "normalize" the weight in each class ? Is there a more efficient way to get better results on rarer cases without losing the detection precision on the common classes? Many thanks!
0
1
84
0
51,121,201
0
1
0
0
1
false
5
2018-07-01T07:10:00.000
0
2
0
How to replace all string in all columns using pandas?
51,121,170
0
python,python-3.x,pandas
Try df['Title'] = df['Title'].str.replace('&amp;', '&') for one column. Note that Series.replace only matches whole cell values, so .str.replace is needed for a substring; to cover every column at once you can use df = df.replace('&amp;', '&', regex=True).
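A tiny demonstration of the whole-frame variant:

```python
import pandas as pd

df = pd.DataFrame({"Title": ["Good &amp; bad"], "Note": ["up &amp; down"]})

# regex=True makes replace operate on substrings in every column.
df = df.replace("&amp;", "&", regex=True)
print(df)   # Title: "Good & bad", Note: "up & down"
```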
In pandas, how do I replace &amp; with '&' from all columns where &amp could be in any position in a string? For example, in column Title if there is a value 'Good &amp; bad', how do I replace it with 'Good & bad'?
0
1
11,751
0
51,134,741
0
0
0
0
1
false
2
2018-07-02T10:22:00.000
1
2
0
How can i scale a thickness of a character in image using python OpenCV?
51,133,962
0.099668
python,image,opencv,computer-vision
One possible solution that I can think of is to alternate erosion and contour finding until you have only one contour left (that should be the thickest one). This could work if the difference in thickness is large enough, but I can also foresee many particular cases that would prevent a correct identification, so it depends very much on your original image.
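A rough sketch of that erode-until-one-survivor loop; the file name and thresholding details are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)  # digits white

kernel = np.ones((3, 3), np.uint8)
while True:
    # [-2] keeps this working across OpenCV 3.x and 4.x return signatures.
    contours = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if len(contours) <= 1:
        break       # the survivor should be the thickest digit
    bw = cv2.erode(bw, kernel)
```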
I created a task where I have a white background and black digits, and I need to find the digit with the greatest stroke thickness. I have made my picture black-and-white and recognized all symbols, but I don't understand how to measure thickness. I tried arcLength(contours), but it gave me the largest by size. I tried morphological operations, but as I understood, those help remove noise and other mistakes in a picture, right? I also thought of checking the distance between neighbouring points of the contours, but that seemed hard because of the inexact and unclear form of the symbols (I drew them in Paint). So, those are all the ideas I had. Can you help me with this question by telling me the names of topics in computer vision and OpenCV that could help me solve this task? I don't need an exact algorithm, only topics. And if this isn't an OpenCV task, which library is it? Should I learn some set of topics and basics before attempting the solution?
0
1
2,238
0
51,139,393
0
0
0
0
1
false
0
2018-07-02T11:26:00.000
1
1
0
Gensim Word2vec Freeze some wordvectors and Update others
51,135,118
0.197375
python,word2vec,gensim
There is! But it's an experimental feature with little documentation – you'd need to read the source to fully understand it, and directly mutate your model to make use of it. Look through the word2vec.py source for properties ending in _lockf – specifically, in the latest code, one named vectors_lockf. It's a sort of mask which either allows, weakens, or stops training of certain words. For each word, if its value is 1.0, normal fully backpropagated updates are applied. Any lower value weakens the update – so 0.0 freezes a word against updates. (The potential update is still calculated – so there's no net speedup – it's just multiplied by 0.0 before final application to particular frozen words.)
Regarding word2vec with gensim, Suppose you already trained a model on a big corpus, and you want to update it with new words from new sentences, but not update the words which already have a vector. Is it possible to freeze the vectors of some words and update only some chosen words (like the new words) when calling model.train ? Or maybe is there a trick to do it ? Thanks.
0
1
436
0
51,140,808
0
0
0
0
1
false
0
2018-07-02T17:04:00.000
0
2
0
Count Specific Values in Dataframe
51,140,765
0
python,python-3.x,pandas
You could use len([x for x in df["Sex"] if x == "Male"]). This iterates through the Sex column of your dataframe and determines whether an element is "Male" or not. If it is, it is appended to a list via the list comprehension. The length of that list is the number of males in your dataframe.
If I had a column in a dataframe, and that column contained two possible categorical variables, how do I count how many times each variable appeared? So e.g, how do I count how many of the participants in the study were male or female? I've tried value_counts, groupby, len etc, but seem to be getting it wrong. Thanks
0
1
2,617
0
71,822,982
0
0
0
0
1
false
0
2018-07-02T18:32:00.000
0
1
1
Kafka Messages are split in consumer
51,141,942
0
python-3.x,apache-kafka,streaming
Use fetch.message.max.bytes=2000000 on the consumer side.
I am new to Kafka. I am sending a request to REST server and send the response of the request to kafka server as messages. When i consume the data from consumer the message is split into multiple smaller messages. How do i avoid this. The response is a JSON row. I want each json row to be one message. Any help would be appreciated. The size of the Json is also not very big. A json with over 1500 rows is about 2 MB For eg ConsumerRecord(topic='meetup_sample', partition=0, offset=445386, >timestamp=1530554568191, timestamp_type=0, >key=b'5fa4964b035c072a81fedb93cfca8f0ecb562cf913c69f63efbbf4e799871f05', >value=b':"food-and-drink","topic_name":"Food and Drink"},{"urlkey":"newintown","topic_name":"New In Town"},{"urlkey":"beer","topic_name"', >checksum=None, serialized_key_size=64, serialized_value_size=128) ConsumerRecord(topic='meetup_sample', partition=0, offset=445387, >timestamp=1530554568192, timestamp_type=0, >key=b'1f170ceabf91335d332487ebb0890f0bd0ed69018c618ecc2c789a260d561f43', >value=b':"Beer"},{"urlkey":"game-night","topic_name":"Game Night"},{"urlkey":"happy-hours","topic_name":"Happy Hour"},{"urlkey":"water-s', >checksum=None, serialized_key_size=64, serialized_value_size=128)
0
1
556
0
52,422,681
0
0
0
0
1
false
1
2018-07-02T19:59:00.000
1
1
0
Error while importing Keras
51,142,979
0.197375
python,tensorflow,keras
pip install --upgrade pip setuptools worked for me.
I am facing error while importing Keras. Below is the error trace: Using TensorFlow backend. Traceback (most recent call last): File "recognize.py", line 8, in <module> import keras File "/home/pi/.local/lib/python2.7/site-packages/keras/__init__.py", line 3, in <module> from . import utils File "/home/pi/.local/lib/python2.7/site-packages/keras/utils/__init__.py", line 6, in <module> from . import conv_utils File "/home/pi/.local/lib/python2.7/site-packages/keras/utils/conv_utils.py", line 9, in <module> from .. import backend as K File "/home/pi/.local/lib/python2.7/site-packages/keras/backend/__init__.py", line 87, in <module> from .tensorflow_backend import * File "/home/pi/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 5, in <module> import tensorflow as tf File "/home/pi/.local/lib/python2.7/site-packages/tensorflow/__init__.py", line 24, in <module> from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "/home/pi/.local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 63, in <module> from tensorflow.python.framework.framework_lib import * # pylint: disable=redefined-builtin File "/home/pi/.local/lib/python2.7/site-packages/tensorflow/python/framework/framework_lib.py", line 25, in <module> from tensorflow.python.framework.ops import Graph File "/home/pi/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 54, in <module> from tensorflow.python.platform import app File "/home/pi/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 24, in <module> from tensorflow.python.platform import flags File "/home/pi/.local/lib/python2.7/site-packages/tensorflow/python/platform/flags.py", line 33, in <module> disclaim_key_flags() # pylint: disable=undefined-variable NameError: name 'disclaim_key_flags' is not defined This was found in RaspberryPi 3 Model B. OS: Raspbian Strech
0
1
588
0
66,550,315
0
0
0
0
1
false
27
2018-07-02T20:18:00.000
2
4
0
Difference between tensor.permute and tensor.view in PyTorch?
51,143,206
0.099668
python,multidimensional-array,deep-learning,pytorch,tensor
tensor.permute() permutes the order of the axes of a tensor. tensor.view() reshapes the tensor (analogous to numpy.reshape) by reducing/expanding the size of each dimension (if one increases, the others must decrease).
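A short sketch of the difference; note that permute reorders the axes, while view only reinterprets the same contiguous buffer:

```python
import torch

x = torch.arange(6).reshape(2, 3)   # tensor([[0, 1, 2], [3, 4, 5]])

p = x.permute(1, 0)   # true transpose: [[0, 3], [1, 4], [2, 5]]
v = x.view(3, 2)      # same memory order: [[0, 1], [2, 3], [4, 5]]

# view requires a contiguous tensor; after permute you may need .contiguous()
q = x.permute(1, 0).contiguous().view(-1)
```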
What is the difference between tensor.permute() and tensor.view()? They seem to do the same thing.
0
1
25,972
0
51,153,467
0
0
0
0
1
false
0
2018-07-03T08:31:00.000
3
1
0
What is the difference between sklearn.cross_validation and sklearn.model_estimation?
51,149,995
0.53705
python,machine-learning,scikit-learn
cross_validation is an older package previously used in scikit-learn. model_selection is the newer replacement for cross_validation (and some others too). It has some structural changes in the classes defined in it, so the same class which was previously in cross_validation is now present in model_selection but with changed behaviour (input params, output type, attributes etc). So you should always use classes from model_selection.
I want to know the difference between importing sklearn.model_estimation and sklearn.cross_validation when I run Python code for linear regression. I found out that sklearn.model_estimation calls a method called next(ShuffleSplit().split(X, y)) and sklearn.cross_validation calls a method called next(iter(ShuffleSplit(n_samples))) but I am still in darkness as to what is the difference between what these two methods actually perform. Looking for help. Thanks in advance.
0
1
819
0
51,152,822
0
0
0
1
1
true
0
2018-07-03T09:37:00.000
0
2
0
Creating star schema from csv files using Python
51,151,263
1.2
python,csv,star-schema
Reading certain blogs, it looks like handling such cases in memory in Python is not ideal, but if the post below makes sense you can use it. Fact loading: the first step in DW loading is dimensional conformance. With a little cleverness the above processing can all be done in parallel, hogging a lot of CPU time. To do this in parallel, each conformance algorithm forms part of a large OS-level pipeline. The source file must be reformatted to leave empty columns for each dimension's FK reference. Each conformance process reads in the source file and writes out the same format file with one dimension FK filled in. If all of these conformance algorithms form a simple OS pipe, they all run in parallel. It looks something like this: src2csv source | conform1 | conform2 | conform3 | load. At the end, you use the RDBMS's bulk loader (or write your own in Python, it's easy) to pick the actual fact values and the dimension FKs out of the source records that are fully populated with all dimension FKs, and load these into the fact table.
I have 6 dimension tables, all in the form of csv files. I have to form a star schema using Python. I'm not sure how to create the fact table using Python. The fact table (theoretically) has at least one column that is common with a dimension table. How can I create the fact table, keeping in mind that quantities from multiple dimension tables should correspond correctly in the fact table? I am not allowed to reveal the code or exact data, but I'll add a small example. File 1 contains the following columns: student_id, student_name. File 2 contains : student_id, department_id, department_name, sem_id. Lastly File 3 contains student_id, subject_code, subject_score. The 3 dimension tables are in the form of csv files. I now need the fact table to contain: student_id, student_name, department_id, subject_code. How can I form the fact table in that form? Thank you for your help.
0
1
1,539
0
51,211,614
0
1
0
0
1
false
0
2018-07-03T13:11:00.000
0
1
0
How is dask implemented on multiple systems?
51,155,513
0
python-2.7,parallel-processing,dask,dask-distributed
Dask dataframes are chunked, so in general you have one big dataframe made up of smaller dataframes spread across your cluster. Computations apply to each chunk individually with shuffling of results where required (such as groupby, sum and other aggregate tasks).
I am new to Dask library.I wanted to know if we implement parallel computation using dask on two systems ,then is the data frame on which we apply the computation stored on both the systems ? How actually does the parallel computation takes place,it is not clear from the documentation.
0
1
52
0
51,195,993
0
0
0
0
1
false
2
2018-07-03T17:27:00.000
0
2
0
Which newline character is in my CSV?
51,160,071
0
python,csv,ssis,delimiter,eol
Seeing that you have EmEditor, you can use EmEditor to find the eol character in two ways: Use View > Character Code Value... at the end of a line to display a dialog box showing information about the character at the current position. Go to View > Marks and turn on Newline Characters and CR and LF with Different Marks to show the eol while editing. LF is displayed with a down arrow while CRLF is a right angle. Some other things you could try checking for are: file encoding, wrong type of data for a field and an inconsistent number of columns.
We receive a .tar.gz file from a client every day and I am rewriting our import process using SSIS. One of the first steps in my process is to unzip the .tar.gz file which I achieve via a Python script. After unzipping we are left with a number of CSV files which I then import into SQL Server. As an aside, I am loading using the CozyRoc DataFlow Task Plus. Most of my CSV files load without issue but I have five files which fail. By reading the log I can see that the process is reading the Header and First line as though there is no HeaderRow Delimiter (i.e. it is trying to import the column header as ColumnHeader1ColumnValue1 I took one of these CSVs, copied the top 5 rows into Excel, used Text-To-Columns to delimit the data then saved that as a new CSV file. This version imported successfully. That makes me think that somehow the original CSV isn't using {CR}{LF} as the row delimiter but I don't know how to check. Any suggestions?
0
1
2,081
0
51,160,557
0
0
0
0
1
false
1
2018-07-03T17:43:00.000
1
1
0
CNTK evaluation for image classification
51,160,298
0.197375
python,image,classification,cntk
Reshaping your input would not affect the output of the model. If it is only predicting one class for every image, it is an issue with model training. I would suggest you try predicting on your training data to see if it only predicts one class on the training data. If that is the case, it is definitely a model training issue.
I built an image classifier using CNTK. The images are grayscale. Therefore, I entered the number of channels as 1. So, the model requires (1x64x64) data (64 being the image height and width). The problem is, when I try to predict the class of a new image, it is seen as (64x64) only. So, the code errors out due to data mismatch. Therefore, I reshaped the image using: image_data = image_data.reshape((1, image_data.shape[0], image_data.shape[1])) This generated (1x64x64) - which worked. Though the predictions are coming the same class for every image I select. I wonder if it is because of this reshaping or not. Can someone chime in? Thanks!
0
1
72
0
51,180,855
0
0
0
0
1
true
0
2018-07-04T01:22:00.000
2
2
0
How to crop a circular image to inscribed square, then crop to inscribed circle, and finally crop to inscribed square?
51,164,645
1.2
python,image,opencv,image-processing,python-imaging-library
The image you want to crop to is, geometrically, a square centered on the input image, half as large. This is because you're inscribing twice, each time the square shrinks by the square root of two, and dividing by SQRT(2) twice is the same thing as dividing by 2. So if you have an input square of side D (or a circular image of diameter D), what you need to do is crop with center (D/2, D/2) and a side of D/2.
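A sketch with PIL, assuming a square input image of side D:

```python
from PIL import Image

img = Image.open("circle.png")      # square image, diameter D
D = img.size[0]

# Two inscribe steps shrink the side by sqrt(2) each, i.e. by 2 total,
# so the final crop is the centered square of side D/2.
box = (D // 4, D // 4, 3 * D // 4, 3 * D // 4)
result = img.crop(box)
result.save("cropped.png")
```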
I would like to crop the circular image attached below according to the following: crop input circular image to the unique square inscribed in the circle. crop the square image down to the circle inscribed in the square crop the circular image from the previous step down to square that inscribes image. I am using python, opencv, and PIL. I have tried cropping with hard coding coordinates, but this obviously causes problems when applied to other images, so I would like a universal solution. I have included a rough visualization of how I would like to crop it: In addition, I have included the original image: A code snippet with brief explanation would be greatly appreciated.
0
1
577
0
51,167,856
0
0
0
0
1
true
3
2018-07-04T03:22:00.000
1
1
0
How to train HMM with audio senteces dataset for speech recognition?
51,165,305
1.2
python,tensorflow,speech-recognition,mfcc,hmmlearn
Do I need to cut my sentences into words or just use sentences to train HMM models? Theoretically you just need sentences and phonemes, but having isolated words may be useful for your model (it increases the size of your training data). Do I need a phonemes dataset for training? If yes, do I need to train it using HMM too? If not, how does my program recognize the phonemes for HMM prediction input? You need phonemes; otherwise it will be too hard for your model to find the right phoneme segmentation if it does not have any examples of isolated phonemes. You should first train your HMM states on the isolated phonemes and then add the rest of the data. If you have enough data, your model may be able to learn without the isolated phoneme examples, but I wouldn't bet on this. What steps must I do first? Build your phoneme examples and use them to train a simple HMM model where you don't model the transitions between phonemes. Once your hidden states have some information about phonemes, you may continue the training on isolated words and sentences.
I have read some journals and paper of HMM and MFCC but i still got confused on how it works step by step with my dataset (audio of sentences dataset). My data set Example (Audio Form) : hello good morning good luck for you exam etc about 343 audio data and 20 speaker (6800 audio data) All i know : My sentences datasets is used to get the transition probabilty Hmm states is the phonemes 39 MFCC features is used to train the HMM models My Questions : Do i need to cut my sentences into words or just use sentences for train HMM models? Do I need phonemes dataset for train ? if yes do i need to train it use HMM too ? if not how my program recognize the phonemes for HMM predict input? What steps i must do first ? Note : Im working with python and i used hmmlearn and python_speech_features as my library.
0
1
795
0
51,165,703
0
0
0
0
1
true
0
2018-07-04T04:04:00.000
1
1
0
Data analytics using Python
51,165,589
1.2
python,data-analysis
I feel you can leave the fact tables as is and combine the rest of the data; with that you can reduce the amount of data you're dealing with and keep the star schema intact too. Thanks, Ram
I have multiple csv files in the form of a star schema. To perform analytics using Python, is it better to combine all these csv files into one csv file, or to extract data from each csv file and then do analytics? People online have almost always combined all files into one and have then performed analytics. However, combining all csv files would eliminate my star schema. I currently have approximately 25,000 rows and 10 columns in each csv file. The size of each csv file is around 7 MB. Thank you in advance for your help.
0
1
143
0
51,173,697
0
1
0
0
1
true
1
2018-07-04T12:26:00.000
1
1
0
How do I install and run pytorch in MSVS2017 (to avoid "module not found" error on "import torch" statement)?
51,173,695
1.2
python,installation,anaconda,pytorch
Probably, at the date of your MSVS2017 installation (esp. if prior to April 2018), there were no official .whl files for Windows pytorch (this has since changed). Also, given the default installation pathway, permissions on Windows (or file lock access) may be a problem (for example, when attempting to install to the "c:\ProgramData" folder). The solution is to 1) ensure all pytorch prerequisites are installed first (for example, if during your failed pytorch installation you get "_____ requires _____, which is not installed", e.g. cython, then install cython), 2) avoid permission errors by using the --user switch, and 3) install directly from the online repository. So, at the environment command line (top right corner in the "Python Environments" tool) provide --user http://download.pytorch.org/whl/cpu/torch-0.4.0-cp36-cp36m-win_amd64.whl. This operation will create and execute the command: pip install --user http://download.pytorch.org/whl/cpu/torch-0.4.0-cp36-cp36m-win_amd64.whl. Incidentally, you can install all packages at this environment command line simply by typing the package name (e.g., cython, torchvision, scipy, etc.).
I'm trying to use pytorch in MSVS2017. I started a pytorch project, have anaconda environment set using python3.6, but when I run the debugger, I get a "module not found" error on the first import statement "import torch". I've tried various methods for installing pytorch in a way that allows MSVS2017 to use it, including command line and Anaconda command line installations (using tips from other closely related StackOverflow questions), but I cannot clear the error. This is a native MSVS2017 project type that came with their AI Tools module. What am I doing wrong?
0
1
92
0
51,180,929
0
0
0
0
1
false
0
2018-07-04T21:19:00.000
0
1
0
operation on blocks of a matrix efficiently in python
51,180,854
0
python,vectorization
You can iterate over the array and just call numpy.average on the 5x5 blocks.
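As an alternative to looping, a vectorized sketch using reshape, assuming the block size divides the array shape evenly (it does here: 100/5 and 200/5):

```python
import numpy as np

a = np.random.rand(100, 200)

# Split each axis into (blocks, block_size), then average over the
# two block_size axes, yielding one mean per 5x5 block.
means = a.reshape(20, 5, 40, 5).mean(axis=(1, 3))
print(means.shape)   # (20, 40), i.e. 800 block means
```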
Let's say I have 100x200 numpy array of random numbers and wish to average blocks of 5x5 in the this array, that is I need the operation to be done on all 800 distinct blocks that are 5x5. I wonder if there is an efficient way to do this without nested loop and possibly without any loop.
0
1
26
0
51,184,234
0
1
0
0
1
false
0
2018-07-05T05:46:00.000
0
3
0
Import chainer in python throws error
51,184,116
0
python,anaconda,chainer
Try restarting your text editor and trying it, sometimes it needs to be restarted for changes to take effect.
I get the error: module 'matplotlib.colors' has no attribute 'to_rgba' when I import chainer in an ipynb. I am using python 2, anaconda 4.1.1, chainer 4, and matplotlib 1.5.1. Could anyone assess the problem?
0
1
322
0
51,226,434
0
0
0
0
1
false
1
2018-07-05T07:15:00.000
0
1
0
Fast way to determine the optimal number of topics for a large corpus using LDA
51,185,358
0
python,r,lda,topic-modeling
Start with some guess in the middle, then decrease and increase the number of topics by, say, 50 or 100 instead of 1, and check in which direction the coherence score improves. I am sure it will converge.
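The question is about R, but a rough Python/gensim sketch illustrates the same coarse-grid strategy (corpus, dictionary, and tokenized texts are assumed to exist already):
from gensim.models import LdaModel, CoherenceModel
for k in range(50, 501, 50):                      # step by 50, not by 1
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k)
    cm = CoherenceModel(model=lda, texts=texts,
                        dictionary=dictionary, coherence='c_v')
    print(k, cm.get_coherence())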
I have a corpus consisting of around 160,000 documents. I want to do topic modeling on it using LDA in R (specifically the function lda.collapsed.gibbs.sampler in the lda package), and I want to determine the optimal number of topics. The common procedure seems to be to take a vector of topic numbers, e.g., from 1 to 100, run the model 100 times, and pick the one with the largest harmonic mean or smallest perplexity. However, given the large number of documents, the optimal number of topics can easily reach several hundred or even thousands, and I find that as the number of topics increases, the computation time grows significantly. Even if I use parallel computing, it will take several days or weeks. I wonder: is there a better (more time-efficient) way to choose the optimal number of topics? Or is there any suggestion for reducing the computation time? Any suggestion is welcome.
0
1
826
0
63,504,519
0
0
0
0
1
false
2
2018-07-05T21:37:00.000
2
3
0
How to create sequential number column in pyspark dataframe?
51,200,217
0.132549
python,dataframe,pyspark,sequential-number
Three simple steps:
from pyspark.sql.window import Window
from pyspark.sql.functions import monotonically_increasing_id, row_number
df = df.withColumn("row_idx", row_number().over(Window.orderBy(monotonically_increasing_id())))
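Since row_number() starts at 1, a small extension of the same idea would start the sequence at 5 as the question asks (the offset is an assumption about the desired output, not part of the original answer):
from pyspark.sql.window import Window
from pyspark.sql.functions import monotonically_increasing_id, row_number
w = Window.orderBy(monotonically_increasing_id())
df = df.withColumn("A", row_number().over(w) + 4)   # 5, 6, 7, ...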
I would like to create a column with sequential numbers in a pyspark dataframe, starting from a specified number. For instance, I want to add column A to my dataframe df, running from 5 to the length of my dataframe and incrementing by one: 5, 6, 7, ..., length(df). Is there a simple solution using pyspark methods?
0
1
11,579
0
51,209,553
0
0
0
0
1
true
4
2018-07-05T23:32:00.000
4
2
1
Python 3.6 airflow with a Operator that requires 2.7
51,201,188
1.2
python,tensorflow,google-cloud-dataflow,airflow
There isn't a way to specify the python version dynamically on a worker. However, if you are using the Celery executor, you can run multiple workers, either on different servers/VMs or in different virtual environments. You can have one worker running python 3 and one running 2.7, with each listening to a different queue. This can be done in three different ways: 1) when starting the worker, add a -q [queue-name] flag; 2) set the env var AIRFLOW__CELERY__DEFAULT_QUEUE; or 3) update default_queue under [celery] in airflow.cfg. Then in your task definitions specify a queue parameter, changing the queue depending on which python version the task needs to run under. I'm not familiar with the MLEngineOperator, but you can specify a python_version in the PythonVirtualenvOperator, which should run the callable in a virtualenv of that version. Alternatively, you can use the BashOperator: write the code to run in a separate file and invoke it using the absolute path to the version of python you want. Regardless of how the task is run, you just need to ensure the DAG itself is compatible with the python version it runs under, i.e., if you are going to start airflow workers under different python versions, the DAG file itself needs to be python 2 & 3 compatible. The additional file dependencies the DAG uses, however, can have version incompatibilities.
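A sketch of the queue-routing approach, assuming workers were started with "airflow worker -q py27" and "airflow worker -q py36" in matching virtualenvs, and that dag and the callables are defined elsewhere (the names here are hypothetical):
from airflow.operators.python_operator import PythonOperator
dataflow_task = PythonOperator(
    task_id='run_dataflow_job',
    python_callable=launch_dataflow,   # hypothetical python 2.7 callable
    queue='py27',                      # picked up by the python 2.7 worker
    dag=dag)
training_task = PythonOperator(
    task_id='train_model',
    python_callable=train_model,       # hypothetical python 3 callable
    queue='py36',
    dag=dag)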
I'm currently running an airflow (1.9.0) instance on python 3.6.5. I have a manual workflow that I'd like to move to a DAG. This manual workflow now requires code written in python 2 and 3. Let's simplify my DAG to 3 steps: 1) a Dataflow job that processes data and sets it up for machine learning training; 2) a Tensorflow ML training job; 3) other PythonOperators that I wrote using python 3 code. The dataflow job is written in python 2.7 (required by google) and the tensorflow model code is in python 3. Looking at "MLEngineTrainingOperator" in airflow 1.9.0, there is a python_version parameter which sets "the version of Python used in training". Questions: Can I dynamically specify a specific python version in a worker environment? Do I have to install airflow on python 2.7 just to make step 1) run? Can I have tensorflow model code in python 3 that just gets packaged up and submitted via MLEngineTraining running on python 2? Do I have to rewrite my step-3 operators in python 2?
0
1
4,405