Dataset columns (name, dtype, value range or string length):

  GUI and Desktop Applications        int64          0 – 1
  A_Id                                int64          5.3k – 72.5M
  Networking and APIs                 int64          0 – 1
  Python Basics and Environment       int64          0 – 1
  Other                               int64          0 – 1
  Database and SQL                    int64          0 – 1
  Available Count                     int64          1 – 13
  is_accepted                         bool           2 classes
  Q_Score                             int64          0 – 1.72k
  CreationDate                        stringlengths  23 – 23
  Users Score                         int64          -11 – 327
  AnswerCount                         int64          1 – 31
  System Administration and DevOps    int64          0 – 1
  Title                               stringlengths  15 – 149
  Q_Id                                int64          5.14k – 60M
  Score                               float64        -1 – 1.2
  Tags                                stringlengths  6 – 90
  Answer                              stringlengths  18 – 5.54k
  Question                            stringlengths  49 – 9.42k
  Web Development                     int64          0 – 1
  Data Science and Machine Learning   int64          1 – 1
  ViewCount                           int64          7 – 3.27M

Each record below lists its field values one per line, in this column order.
0
58,178,725
0
1
0
0
2
false
1
2019-10-01T05:24:00.000
0
3
0
can't install pandas (ERROR: Cannot uninstall 'numpy')
58,178,508
0
python,pandas,macos
I found an alternative method to install pandas, by installing minicondas, and running conda install pandas.
I'm on macOS 10.15 Beta, running a .py script that requires pandas, which is not installed. When I run sudo python -m pip install --upgrade pandas I receive: ERROR: Cannot uninstall 'numpy'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. If I run sudo pip install pandas I receive the same error. Help appreciated.
0
1
1,097
0
59,296,658
0
1
0
0
2
true
1
2019-10-01T05:24:00.000
0
3
0
can't install pandas (ERROR: Cannot uninstall 'numpy')
58,178,508
1.2
python,pandas,macos
I tried the solutions presented in the other answers, but what worked for me was installing pandas using conda.
I'm on macOS 10.15 Beta, running a .py script that requires pandas, which is not installed. When I run sudo python -m pip install --upgrade pandas I receive: ERROR: Cannot uninstall 'numpy'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. If I run sudo pip install pandas I receive the same error. Help appreciated.
0
1
1,097
0
58,179,988
0
0
0
0
1
false
1
2019-10-01T07:26:00.000
2
4
0
String problem / Select all values > 8000 in pandas dataframe
58,179,925
0.099668
python,pandas
You can check df.dtypes to see the type of each column. Then, if a column's type is not what you want, you can change it with df['GM'].astype(float), and new_df = df.loc[df['GM'].astype(float) > 8000] should work as you expect.
I want to select all values bigger than 8000 within a pandas dataframe. new_df = df.loc[df['GM'] > 8000] However, it is not working. I think the problem is, that the value comes from an Excel file and the number is interpreted as string e.g. "1.111,52". Do you know how I can convert such a string to float / int in order to compare it properly?
0
1
93
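An illustrative sketch of the dtype check and conversion described in the answer above. The column name GM comes from the question; treating "1.111,52" as European-formatted text (dot as thousands separator, comma as decimal) is an assumption based on the question's example.

```python
import pandas as pd

df = pd.DataFrame({"GM": ["1.111,52", "9.250,00", "750,10"]})  # strings as read from Excel
print(df.dtypes)  # GM is object, so "> 8000" compares strings, not numbers

# strip thousands separators, swap the decimal comma, then cast to float
df["GM"] = (df["GM"].str.replace(".", "", regex=False)
                    .str.replace(",", ".", regex=False)
                    .astype(float))

new_df = df.loc[df["GM"] > 8000]
print(new_df)  # keeps only the 9250.0 row
```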
0
58,198,069
0
0
0
0
1
false
3
2019-10-02T08:15:00.000
0
2
0
"Process finished with exit code -2147483645" Pycharm
58,197,666
0
python,pycharm,exit-code
Mmmm... I dont know about the error. But given the fact that it starts and works well for 368 episodes... I would aim that is some lack of memory related problem. I would run it several times, if it crash after a similar number of episodes I'd try with more memory. Hope this helps even just a little bit.
I ran Python 3.6.6 Deep Learning with Pycharm 2019.1.3. The process was set at maximum 651 episode and it stopped at episode 368 with this message "Process finished with exit code -2147483645". I searched through Google but there's not even a result. Anyone knows about the code? Please help!
0
1
2,679
0
58,202,111
0
0
0
0
1
true
2
2019-10-02T08:20:00.000
1
1
0
Which version of TensorFlow.js is compatible with models trained in TensorFlow 1.12.0 (Python)?
58,197,749
1.2
python,tensorflow,tensorflow.js
You have not mentioned in which format you are saving your model in TensorFlow 1.12. I would recommend to make use of saved model format to save your model. If you use saved models, you can use the latest versions of tf.js and tf.js converters. Same is the case for keras h5 model as well. However, if you save it in form of pb files, you will have to use tf.js version of 0.15 and tf.js converter of 0.8.6
I need to convert models trained in TensorFlow 1.12.0 Python into that of TensorFlow.js. What version of tf.js and tf.js converter is compatible with it?
0
1
259
0
58,201,684
0
0
0
0
1
true
0
2019-10-02T12:02:00.000
0
2
0
Single-label multiclass classification random forest python
58,201,116
1.2
python,machine-learning,scikit-learn,random-forest,multiclass-classification
One-hot encoding applies to categorical variables that have no ordering among their values. For the price variable, which is ordinal, I suggest you use OrdinalEncoder. Scikit-learn is a good package for this kind of machine learning preprocessing: sklearn.preprocessing.OneHotEncoder or sklearn.preprocessing.OrdinalEncoder.
I am pretty new to machine learning and I am currently dealing with a dataset in the format of a csv file comprised of categorical data. As a means of preprocessing, I One Hot Encoded all the variables in my dataset. At the moment I am trying to apply a random forest algorithm to classify the entries into one of the 4 classes. My problem is that I do not understand exactly what happens to these One Hot Encoded variables. How do I feed them to the algorithm? Is it able to make the difference between buying_price_high, buying_price_low (One Hot Encoded from buying_price)? I One Hot Encoded the response variable as well.
0
1
170
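A small sketch of the two scikit-learn encoders the answer names; the buying_price values and their low/med/high ordering are made-up stand-ins for the question's data.

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

df = pd.DataFrame({"buying_price": ["low", "high", "med", "low"]})

# one-hot: one 0/1 column per category, no ordering implied
onehot = OneHotEncoder().fit_transform(df[["buying_price"]]).toarray()

# ordinal: a single integer column that respects the stated order
ordinal = OrdinalEncoder(categories=[["low", "med", "high"]]).fit_transform(df[["buying_price"]])

print(onehot)
print(ordinal)  # low=0, med=1, high=2
```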
0
59,021,516
0
1
0
0
1
false
10
2019-10-02T13:02:00.000
4
5
0
Can't import tensorflow.keras in VS Code
58,202,095
0.158649
python,tensorflow,keras,visual-studio-code
I too faced the same issue. I solved it by installing keras as a new package and then I changed all packages name removing the prefix tensorflow.. So in your case after installing keras you should replace tensorflow.keras.layers with keras.layers
I'm running into problems using tensorflow 2 in VS Code. The code executes without a problem, the errors are just related to pylint in VS Code. For example this import from tensorflow.keras.layers import Dense gives a warning "Unable to import 'tensorflow.keras.layers'pylint(import-error)". Importing tensorflow and using tf.keras.layers.Dense does not produce an error. I'm just using a global python environment (3.7.2) on Windows 10, tensorflow is installed via Pip.
0
1
11,921
0
58,225,796
0
0
0
0
1
false
0
2019-10-03T16:36:00.000
0
1
0
How to fix 'Cannot handle this data type' while trying to convert a numpy array into an image using PIL
58,223,453
0
python-3.x,numpy,python-imaging-library
This depends a little bit on what you want to do. You have two channels with n-samples ((nsamples, 2) ndarray); do you want each channel to be a column of the image where the color varies depending on what the value is? That is why you were getting a very narrow image when you just plot myrecording. You do not really have the data to create a full 2D image, unless you reshape the time series data to be something more like a square (so it actually looks like an image), but then you sort of lose the time dependence nature that I think you are going for.
I am trying to visualize music into an image by using sounddevice to input the sound and then converting it to a numpy array. The array is 2D and so I convert it to 3D (otherwise I only get a single thin vertical line in the image). However when I use PIL to show the image it says 'Cannot handle this datatype' The code is mentioned below: import sounddevice as sd from scipy.io.wavfile import write import soundfile as sf import numpy from numpy import zeros, newaxis from PIL import Image fs = 44100 # Sample rate seconds = 3 # Duration of recording myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=2) sd.wait() # Wait until recording is finished print(myrecording) print(numpy.shape(myrecording)) write('output.wav', fs, myrecording) # Save as WAV file filename = 'output.wav' A=myrecording[:,:,newaxis] print(A) im = Image.fromarray((A * 255).astype(numpy.uint8)) im.show() I expect to get an image which shows colours corresponding to the sound being inputted in
0
1
62
0
58,337,978
0
0
0
0
1
true
0
2019-10-03T19:17:00.000
0
1
0
Can we detect multiple objects in image using caltech101 dataset containing label wise images?
58,225,543
1.2
python,keras,deep-learning,object-detection,tensorflow-datasets
The dataset can be used for detecting multiple objects but with below steps to be followed: The dataset has to be annotated with bounding boxes on the object present in the image After the annotations are done, you can use any of the Object detectors to do transfer learning and train on the annotated caltech 101 dataset Note: - Without annotations, with just the caltech 101 dataset, detecting multiple objects in a single image is not possible
I have a caltech101 dataset for object detection. Can we detect multiple objects in single image using model trained on caltech101 dataset? This dataset contains only folders (label-wise) and in each folder, some images label wise. I have trained model on caltech101 dataset using keras and it predicts single object in image. Results are satisfactory but is it possible to detect multiple objects in single image? As I know some how regarding this. for detecting multiple objects in single image, we should have dataset containing images and bounding boxes with name of objects in images. Thanks in advance
0
1
233
0
60,456,032
0
0
0
0
1
false
2
2019-10-03T19:53:00.000
0
2
0
Edit python script used as Data entry in Power BI
58,226,050
0
python,powerbi
You can edit the python scripts doing the following steps: Open Query Editor At 'Applied steps', the first one, source, contains a small gear symbol just on the right side, click on it. You can change the script direct into Power Query.
I have a python script and used it to create a dataframe in Power BI. Now I want to edit that dataframe in Power BI but don´t enter from scratch as new data because I want to keep all the charts inside my Power BI model. For example in my old dataframe i specified some dates inside my script so the information was limited to those dates. Now i want to change the dates to new ones but dont want to lose all the model. df = df
0
1
3,747
0
58,232,793
0
0
0
0
1
false
0
2019-10-04T04:37:00.000
0
1
0
Using spam classification in a different application?
58,229,976
0
python,nlp,classification,text-classification
I'm skeptical. The reason simple Bayesian filtering works for spam is that spam messages typically use a quite different vocabulary than legitimate messages. Anecdotally, people who sell pharmaceuticals use the same words and phrases in their legitimate business correspondence as in some types of spam; so they get bad filtering results on pharma spam, while the spam filter quickly learns to correctly discard dating, Nigerian fraud, stock scams etc. (Pharma spam might still contain various hyperbolic phrases etc which set them apart even from non-spam marketing messaging, though.) Business bullshit lingo tends to look the same whether the underlying plan is sound or not. You may be able to filter out the worst gibberish, but word-token level analysis is simply not a good indicator of whether actual sound thought went into composing those words into a particular arrangement.
I want to use the concept of spam classification and apply it to a business problem where we identify if a vision statement for a company is good or not. Here's a rough outline of what I've come up with for the project. Does this seem feasible? Prepare dataset by collecting vision statements from top leading companies (i.e. Fortune 5000) Let features = most frequent words (excluding non-alphanumerics, to, the, etc) Create feature vector (dictionary) x of all words listed above Use supervised learning algorithm (logistic regression) to train and test data Let y = good vision statement and return the value 1; y = 0 if not good
0
1
29
0
58,237,280
0
1
0
0
1
false
0
2019-10-04T13:04:00.000
0
1
0
OSError: Failed to interpret file 'name.data' as a pickle
58,237,039
0
python-3.x,compiler-errors
I had the same problem. In my case it was the newer version of numpy that caused the problem. Try installing numpy version 1.12.0 pip install numpy==1.12.0
I am trying to loadthis file. But, Python3 said "OSError: Failed to interpret file 'D:/USER/Downloads/wine.data' as a pickle". How to load this file? The code I used following this data = np.load("D:/USER/Downloads/wine.data")
0
1
150
0
58,237,688
0
1
0
0
1
false
1
2019-10-04T13:40:00.000
0
1
0
Data type to save expanding data for data logging in Python
58,237,574
0
python,types,data-acquisition
Python doesn't have arrays as you think of them in most languages. It has "lists", which use the standard array syntax myList[0], but unlike arrays, lists can change size as needed. Using myList.append(newItem) you can add more data to the list without any trouble on your part. Since you asked for the proper vocabulary, a useful concept for you would be "linked lists", which are a way of implementing array-like things with varying lengths in other languages.
I am writing a serial data logger in Python and am wondering which data type would be best suited for this. Every few milliseconds a new value is read from the serial interface and is saved into my variable along with the current time. I don't know how long the logger is going to run, so I can't preallocate for a known size. Intuitively I would use an numpy array for this, but appending / concatenating elements creates a new array each time from what I've read. So what would be the appropriate data type to use for this? Also, what would be the proper vocabulary to describe this problem?
0
1
139
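A minimal sketch of the list-append pattern the answer describes; read_serial_value is a hypothetical stand-in for the actual serial read.

```python
import random
import time
import numpy as np

def read_serial_value():
    # hypothetical stand-in for a read from the serial interface
    return random.random()

samples = []                       # a Python list grows in amortized O(1) per append
for _ in range(1000):              # stand-in for "until the logger stops"
    samples.append((time.time(), read_serial_value()))

data = np.array(samples)           # convert once at the end if NumPy is needed
print(data.shape)                  # (1000, 2): timestamp, value
```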
0
58,316,577
0
0
0
0
1
false
0
2019-10-05T05:32:00.000
1
2
0
How to know if dynamic tensor returned by "tf.boolean_mask" is empty or not?
58,245,630
0.099668
python,tensorflow
There are multiple ways to solve this, basically you are trying to identify a null tensor. Possible solutions can be: is_empty = tf.equal(tf.size(boolean_tensor), 0). If not empty it will give false Count non zeros number using tf.count_nonzero(boolean_tensor) By simply printing the tensor and checking the vaules
tf.boolean_mask(tensor, mask) => returns (?, 4) How do I check if the returned tensor by boolean_mask is empty or not?
0
1
422
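A runnable TensorFlow 2.x sketch of the tf.size check suggested in the answer; the small constant tensor and the all-False mask are assumed example inputs.

```python
import tensorflow as tf

tensor = tf.constant([[1., 2., 3., 4.], [5., 6., 7., 8.]])
mask = tf.constant([False, False])          # nothing selected
masked = tf.boolean_mask(tensor, mask)      # shape (0, 4): dynamic, possibly empty

is_empty = tf.equal(tf.size(masked), 0)     # True when the masked tensor has no elements
print(is_empty)
print(tf.math.count_nonzero(masked))        # 0 for an empty tensor
```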
0
58,246,743
0
0
0
0
1
true
0
2019-10-05T08:41:00.000
1
1
0
What is the difference between single bracket df["column"] and double bracket df[["column"]]
58,246,700
1.2
python,pandas
to be more specific, df['column'] returns only one column, but when you use df[['column']] you can call more than one column. for example df[['column1','column2']] returns column1 and column2 from df
One is a pandas.core.series.Series and another is a dataframe pandas.core.frame.DataFrame. I have seen codes using them both. Is there a guideline on when to use which?
0
1
70
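A tiny example illustrating the Series-versus-DataFrame distinction from the question and the multi-column selection the answer points out.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

print(type(df["a"]))          # pandas.core.series.Series   (one column, 1-D)
print(type(df[["a"]]))        # pandas.core.frame.DataFrame (still 2-D)
print(df[["a", "b"]].shape)   # (2, 2): the list form can select several columns at once
```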
0
59,757,240
0
0
0
0
1
false
0
2019-10-05T12:38:00.000
0
1
0
How to inverse transform models output?
58,248,367
0
python-3.x,scikit-learn,neural-network,deep-learning,sklearn-pandas
Use sc.inverse_transform(predicted)
So I have a trained model, that was trained on a standardized dataset. When I try to use the model for testing on new data, that isn't in a dataset and that isn't standardized, it returns ridiculous values, because I can standardize the inputs, but I can't inverse transform the output as I did during training. What should I do?
0
1
277
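A minimal sketch of the sc.inverse_transform call from the answer, assuming sc is the same scaler (here a StandardScaler) that was fitted on the training targets; the numbers are placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

y_train = np.array([[100.], [150.], [200.]])   # made-up training targets
sc = StandardScaler().fit(y_train)             # the scaler used during training

pred_scaled = np.array([[0.5]])                # model output in standardized space
pred = sc.inverse_transform(pred_scaled)       # back to the original units
print(pred)
```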
0
58,253,014
0
1
0
0
1
false
0
2019-10-05T23:06:00.000
0
1
0
File progress for long duration
58,253,000
0
python,dataframe,for-loop,data-analysis
Do you have sample code of what you are doing ? Are you reading your file every time you fetch a latitude and longitude If this is the case this is why it takes so long. First load the file in your memory as a pandas object for example and then you should be able to look for your data much faster.
I am trying to fetch location based on lat and long. I have a data of 600K in my csv and I am trying to run my for loop on it . My notebook is taking very long time to process the data. ( 40min to complete 2percent) I have decent laptop Core i7-8550U quad-core 1.8GHz and 16GB DDR4 RAM . I am not sure how to optain the result for my data quickly .Pls help
0
1
10
0
58,255,664
0
0
0
0
1
false
0
2019-10-06T08:35:00.000
0
1
0
How can i recognize two picture are same?
58,255,567
0
python-3.x,keras,conv-neural-network
You can try correlation on both images; if they are exactly the same, you should get 1.
for example i have a image dataset. I used these images for train my model, and I am using another image for test. How can i recognize the test image is on the dataset? How do I determine what the similarity percentage is?
0
1
32
0
58,305,335
0
0
0
1
1
false
1
2019-10-06T09:13:00.000
0
3
0
Databricks: merge dataframe into sql datawarehouse table
58,255,818
0
python,databricks
you can save the output in a file and then use the stored procedure activity from azure data factory for the upsert. Just a small procedure which will upsert the values from the file. I am assuming that you are using the Azure data factory here.
Are there any method where I can upsert into a SQL datawarehouse table ? Suppose I have a Azure SQL datawarehouse table : col1 col2 col3 2019 09 10 2019 10 15 I have a dataframe col1 col2 col3 2019 10 20 2019 11 30 Then merge into the original table of Azure data warehouse table col1 col2 col3 2019 09 10 2019 10 20 2019 11 30 Thanks for everyone idea
0
1
782
0
58,258,675
0
0
0
0
1
false
1
2019-10-06T14:38:00.000
1
3
0
What is the meaning of "trainable_weights" in Keras?
58,258,312
0.066568
python,keras,deep-learning,conv-neural-network,transfer-learning
Trainable weights are the weights that will be learnt during the training process. If you do trainable=False then those weights are kept as it is and are not changed because they are not learnt. You might see some "strange numbers" because either you are using a pre-trained network that has its weights already learnt or you might be using random initialization when defining the model. When using transfer learning with pre-trained models a common practice is to freeze the weights of base model (pre-trained) and only train the extra layers that you add at the end.
If I freeze my base_model with trainable=false, I get strange numbers with trainable_weights. Before freezing my model has 162 trainable_weights. After freezing, the model only has 2. I tied 2 layers to the pre-trained network. Does trainable_weights show me the layers to train? I find the number weird, when I see 2,253,335 Trainable params.
0
1
3,554
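A small sketch (not the poster's actual network) of how freezing a base model changes trainable_weights; with a single Dense layer added on top, two trainable weights (kernel and bias) remain, which is one way to end up with the "2" the question observes.

```python
import tensorflow as tf

base = tf.keras.Sequential([                 # stand-in for a pre-trained base model
    tf.keras.layers.Dense(8),
    tf.keras.layers.Dense(8),
])
base.trainable = False                       # freeze every weight in the base

head = tf.keras.Sequential([base, tf.keras.layers.Dense(2)])
head(tf.zeros((1, 4)))                       # build all layers with a dummy batch

print(len(head.trainable_weights))       # 2: kernel + bias of the new Dense only
print(len(head.non_trainable_weights))   # 4: the frozen base kernels and biases
```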
0
58,277,773
0
0
0
0
1
true
0
2019-10-07T21:54:00.000
1
1
0
What is the threshold for sparse matrices? Is it a matrix that contain less than 50% 0's?
58,277,617
1.2
python,matrix,sparse-matrix
You can't locate a definition because there isn't one. "Sparse" is whatever relation makes a different algorithm more efficient. It may be a particular proportion of elements; it may be a function of the matrix side (e.g. n element in a nxn matrix); it may require zero rows or diagonals. It depends critically on how you plan to alter your handling of a "sparse" matrix. When we learned the basics of sparse representations, we used a heuristic of 10% non-zero elements. However, that was a particular family of OS data storage and retrieval.
Does a "sparse matrix" mean that it contains *more than 50% 0's? I can't seem to locate that information. edit - more
0
1
480
0
58,278,374
0
0
0
0
1
true
0
2019-10-07T22:38:00.000
1
1
0
Tensorflow 2.0 fit() is not recognizing batch_size
58,277,991
1.2
python,tensorflow,keras
model.compile() only configures the model for training and does not allocate any memory. Your bug is self-explanatory: you feed a large numpy array directly into the model. I would suggest coding a new data generator or a keras.utils.Sequence to feed your input data. If so, you do not need to specify batch_size in the fit method again, because your own generator or Sequence will produce the batches.
So I'm initializing a model as: model = tf.keras.utils.multi_gpu_model(model, gpus=NUM_GPUS) and when I do model.compile() it runs perfectly fine. But when I do history = model.fit(tf.cast(X_train, tf.float32), tf.cast(Y_train, tf.float32), validation_split=0.25, batch_size = 16, verbose=1, epochs=100), it gives me error: OOM when allocating tensor with shape[4760,256,256,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Cast] name: Cast/ This code worked perfectly fine previously but not anymore with Tensorflow 2.0. I have 4760 samples in my training set. I don't know why it's taking the entire set instead of the batch size.
0
1
451
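A minimal keras.utils.Sequence along the lines the answer suggests; X_train, Y_train and model come from the question, and validation_split would need to become an explicit validation Sequence since it does not apply to generators.

```python
import numpy as np
import tensorflow as tf

class NumpySequence(tf.keras.utils.Sequence):
    """Feeds arrays to fit() one batch at a time instead of as one giant tensor."""
    def __init__(self, x, y, batch_size=16):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        s = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[s].astype("float32"), self.y[s].astype("float32")

# hypothetical usage with the question's variables:
# history = model.fit(NumpySequence(X_train, Y_train, batch_size=16),
#                     validation_data=NumpySequence(X_val, Y_val), epochs=100)
```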
0
58,915,896
0
0
0
0
1
false
0
2019-10-07T22:56:00.000
1
1
0
How to execute python script (face detection on very large dataset) on Nvidia GPU
58,278,145
0.197375
python,gpu,face-detection,numba
Your first problem is actually getting your Python code to run on all CPU cores! Python is not fast, and this is pretty much by design. More accurately, the design of Python emphasizes other qualities. Multi-threading is fairly hard in general, and Python can't make it easy due to those design constraints. A pity, because modern CPU's are highly parallel. In your case, there's a lucky escape. Your problem is also highly parallel. You can just divide those 500,000 video's over CPU cores. Each core runs a copy of the Python script over its own input. Even a quad-core would process 4x125.000 files using that strategy. As for the GPU, that's not going to help much with Python code. Python simply doesn't know how to send data to the GPU, send commands to the CPU, or get results back. Some Pythons extensions can use the GPU, such as Tensorflow. But they use the GPU for their own internal purposes, not to run Python code.
I have a python script that loops through a dataset of videos and applies a face and lip detector function to each video. The function returns a 3D numpy array of pixel data centered on the human lips in each frame. The dataset is quite large (70GB total, ~500,000 videos each about 1 second in duration) and executing on a normal CPU would take days. I have a Nvidia 2080 Ti that I would like to use to execute code. Is it possible to include some code that executes my entire script on the available GPU? Or am I oversimplifying a complex problem? So far I have been trying to implement using numba and pycuda and havent made any progress as the examples provided don't really fit my problem well.
0
1
81
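A sketch of the per-core parallelism the answer describes; process_video is a hypothetical placeholder for the poster's face/lip detector and the glob pattern is an assumed file layout.

```python
import glob
from multiprocessing import Pool

def process_video(path):
    # hypothetical placeholder: run the face/lip detector on one file
    # and save the resulting numpy array next to it
    print("processed", path)

if __name__ == "__main__":
    videos = glob.glob("videos/**/*.mp4", recursive=True)   # assumed dataset layout
    with Pool() as pool:                  # defaults to one worker per CPU core
        pool.map(process_video, videos, chunksize=8)
```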
0
58,283,603
0
0
0
0
1
false
2
2019-10-08T00:10:00.000
1
1
0
What is the difference between spline filtering and spline interpolation?
58,278,604
0.197375
python,interpolation,spline
I'm guessing a bit here. In order to calculate a 2nd order spline, you need the 1st derivative of the data. To calculate a 3rd order spline, you need the second derivative. I've not implemented an interpolation motor beyond 3rd order, but I suppose the 4th and 5th order splines will require at least the 3rd and 4th derivatives. Rather than recalculating these derivatives every time you want to perform an interpolation, it is best to calculate them just once. My guess is that spline_filter is doing this pre-calculation of the derivatives which then get used later for the interpolation calculations.
I'm having trouble connecting the mathematical concept of spline interpolation with the application of a spline filter in python. My very basic understanding of spline interpolation is that it's fitting the data in a piece-wise fashion, and the piece-wise polynomials fitted are called splines. But its applications in image processing involve pre-filtering the image and then performing interpolation, which I'm having trouble understanding. To give an example, I want to interpolate an image using scipy.ndimage.map_coordinates(input, coordinates, prefilter=True), and the keyword prefilter according to the documentation: Determines if the input array is prefiltered with spline_filter before interpolation And the documentation for scipy.ndimage.interpolation.spline_filter simply says the input is filtered by a spline filter. So what exactly is a spline filter and how does it alter the input data to allow spline interpolation?
0
1
389
0
58,279,203
0
0
0
0
1
false
0
2019-10-08T01:19:00.000
0
1
0
unicode vs character: What is '\x10'
58,279,018
0
python,pandas,unicode,luigi
Because it's the likely answer, even if the details aren't provide in your question: It's highly likely something in your pipeline is intentionally producing fields with length prefixed text, rather than the raw unstructured text. \x103189069486778499 is a binary byte with the value 16 (0x10), followed by precisely 16 characters. The 0. before it may be from a previous output, or some other part of whatever custom data serialization format it's using. This design is usually intended to make parsing more efficient; if you use a delimiter character between fields (e.g. a comma, like CSV), you're stuck coming up with ways to escape or quote the delimiter when it occurs in your actual data, and parsers have to scan character by character, statefully, to figure out where a field begins and ends. With length prefixed text, the parser finds a field length and knows exactly how many characters to read to slurp the field, or how many to skip to find the next field, no quoting or escaping required, no matter what the field contains. As for what's doing this: You're going to have to check the commands in your pipeline. Your question provides no meaningful way to determine the cause of this problem.
I'm trying to understand why when we were using pandas to_csv(), a number 3189069486778499 has been output as "0.\x103189069486778499". And this is the only case happened within a huge amount of data. When using to_csv(), we have already used encoding='utf8', normally that would solve some unicode problems... So, I'm trying to understand what is "\x10", so that I may know why... Since the whole process was running in luigi pipeline, sometimes luigi will generate weird output. I tried the same thing in IPython, same version of pandas and everything works fine....
0
1
2,069
0
58,283,102
0
0
0
0
1
true
0
2019-10-08T07:27:00.000
2
1
0
Word embeddings with multiple categorial features for a single word
58,281,876
1.2
python,python-3.x,pytorch,word-embedding
I am not sure what do you mean by word2vec algorithm with LSTM because the original word2vec algorithm does not use LSTMs and uses directly embeddings to predict surrounding words. Anyway, it seems you have multiple categorical variables to embed. In the example, it is word ID, color ID, and font size (if you round it to integer values). You have two option: You can create new IDs for all possible combinations of your features and use nn.Embedding for them. There is however a risk that most of the IDs will appear too sparsely in the data to learn reliable embeddings. Have separate embedding for each of the features. Then, you will need to combine the embeddings for the features together. You have basically three options how to do it: Just concatenate the embeddings and let the following layers of the network to resolve the combination. Choose the same embedding dimension for all features and average them. (I would start with this one probably.) Add a nn.Dense layer (or two, the first one with ReLU activation and the second without activation) that will explicitly combine the embeddings for your features. If you need to include continuous features that cannot be discretized, you can always take the continuous features, apply a layer or two on top of them and combine them with the embeddings of the discrete features.
I'm looking for a method to implement word embedding network with LSTM layers in Pytorch such that the input to the nn.Embedding layer has a different form than vectors of words IDs. Each word in my case has a corresponding vector and the sentence in my corpus is consequently a vector of vectors. So, for example, I may have the word "King" with vector [500, 3, 18] where 500 is the Word ID, 3 is the word color, and 18 is the font size, etc. The embedding layer role here is to do some automatic feature reduction/extraction. How can I feed the embedding layer with such form data? Or do you have any better suggestions?
0
1
1,159
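A sketch of the "separate embedding per feature, then concatenate" option from the answer; the vocabulary/color sizes and the [500, 3, 18]-style example values follow the question, everything else is assumed.

```python
import torch
import torch.nn as nn

vocab_size, n_colors = 1000, 16                  # assumed sizes
word_emb  = nn.Embedding(vocab_size, 32)
color_emb = nn.Embedding(n_colors, 8)

word_ids  = torch.tensor([[500, 12]])            # (batch=1, seq=2)
color_ids = torch.tensor([[3, 0]])
font_size = torch.tensor([[[18.0], [11.0]]])     # continuous feature, (1, 2, 1)

# concatenate per-feature embeddings plus the continuous feature: (1, 2, 32+8+1)
x = torch.cat([word_emb(word_ids), color_emb(color_ids), font_size], dim=-1)

lstm = nn.LSTM(input_size=41, hidden_size=64, batch_first=True)
out, _ = lstm(x)
print(out.shape)   # torch.Size([1, 2, 64])
```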
0
58,302,623
0
0
0
0
1
false
2
2019-10-09T10:32:00.000
0
3
0
why numpy array has size of 112 byte and when I do flatten it, it has 96 byte of memory?
58,302,190
0
python,numpy,reshape
As Derte mentioned, sys.getsizeof doesn't say the size of the array. The 96 you got is holding information about the array (if it's 1-Dimensional) and the 112 if it's multi dimensional. Any additional element will increase the size with 8 bytes assuming you are using a dtype=int64.
I have a numpy array and I flatten it by np.ravel() and I am confused when i tried to learn the size of the both array array =np.arange(15).reshape(3,5) sys.getsizeof(array) 112 sys.getsizeof(array.ravel()) 96 array.size 15 array.ravel().size 15 array = np.arange(30).reshape(5,6) sys.getsizeof(array) 112 sys.getsizeof(array.ravel()) 96 array.size 30 As seen above two different arrays have the same memory size but each has different amount of element. Why does it happen?
0
1
344
0
58,315,161
0
0
0
0
1
true
1
2019-10-10T03:36:00.000
0
2
0
DataFrame issue (parenthesis)
58,315,096
1.2
python,pandas,dataframe
You understand it well, in general parenthesis call class method, and without you call an attribute. In your exemple you don't have an error, because df.head is bound to NDFrame.head who is a method as well. If df.head was only a method, calling it without parenthesis will raise an AttributeError.
May I ask what's the difference between df.head() and df.head in python's syntax nature? Could I interpret as the former one is for calling a method and the later one is just trying to obtain the DataFrame's attribute, which is the head? I am so confused why sometimes there is a parenthesis at the end but sometimes not... Thank you so much.
0
1
44
0
58,334,583
0
0
0
0
1
false
0
2019-10-10T23:11:00.000
1
1
0
Python script closes after a while
58,332,232
0.197375
python,python-3.x,machine-learning,artificial-intelligence
The way I was looping my function needed to be a for loop instead of directly calling the function as a loop method. And my error was a stack overflow
I'm using Keras for the layers, optimizer, and model and my model is Sequential I've got two DQN networks and I'm making them duel each other in a simulated environment however after about 35 episodes (different each time) the script just stops without any errors. I've isolated my issue to be somewhere around when the agent runs the prediction model for the current state to get the action. The process is called but never completed and the script just stops without any error. How can I debug this issue?
0
1
46
0
58,345,624
0
0
0
0
2
false
0
2019-10-11T00:23:00.000
0
2
0
How does TF know what object you are finetuning for
58,332,687
0
python,tensorflow,deep-learning,conv-neural-network,object-detection
The model works with the category labels (numbers) you give it. The string "boat" is only a translation for human convenience in reading the output. If you have a model that has learned to identify a set of 40 images as class 9, then giving it a very similar image that you insist is class 1 will confuse it. Doing so prompts the model to elevate the importance of differences between the 9 boats and the new 1 boats. If there are no significant differences, then the change in weights will find unintended features that you don't care about. The result is a model that is much less effective.
I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs, (detections are random does not actually seem to be detecting rather just randomly placing an inference). I performed 20,000 steps and had a loss of 2.3. I was wondering how TF knows that what I am training it on with my custom label map ID:1 Name: 'boat' Is the same as what it regards as a boat ( with an ID of 9) in the mscoco label map. Or whether, by using an ID of 1, I am training the models' idea of what a person looks like to be a boat? Thank you in advance for any advice.
0
1
51
0
58,513,059
0
0
0
0
2
false
0
2019-10-11T00:23:00.000
0
2
0
How does TF know what object you are finetuning for
58,332,687
0
python,tensorflow,deep-learning,conv-neural-network,object-detection
so I managed to figure out the issue. We created the annotation tool from scratch and the issue that was causing underfitting whenever we trained regardless of the number of steps or various fixes I tried to implement was that When creating bounding boxes there was no check to identify whether the xmin and ymin coordinates were less than the xmax and ymax I did not realize this would be such a large issue but after creating a very simple check to ensure the coordinates are correct training ran smoothly.
I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but keep on getting an underfitted model when I freeze the graphs, (detections are random does not actually seem to be detecting rather just randomly placing an inference). I performed 20,000 steps and had a loss of 2.3. I was wondering how TF knows that what I am training it on with my custom label map ID:1 Name: 'boat' Is the same as what it regards as a boat ( with an ID of 9) in the mscoco label map. Or whether, by using an ID of 1, I am training the models' idea of what a person looks like to be a boat? Thank you in advance for any advice.
0
1
51
0
59,558,292
0
0
0
0
1
false
9
2019-10-11T08:46:00.000
1
2
0
How to save fasttext model in vec format?
58,337,469
0.099668
python,word-embedding,fasttext
You should add the number of words and the embedding dimension as the first line of your vec file, then use the pretrainedVectors parameter.
I trained my unsupervised model using fasttext.train_unsupervised() function in python. I want to save it as vec file since I will use this file for pretrainedVectors parameter in fasttext.train_supervised() function. pretrainedVectors only accepts vec file but I am having troubles to creating this vec file. Can someone help me? Ps. I am able to save it in bin format. It would be also helpful if you suggest me a way to convert bin file to vec file.
0
1
6,401
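A sketch of writing the header line and vectors to a .vec file, assuming the fasttext Python package's get_words, get_dimension and get_word_vector methods; the file paths are placeholders.

```python
import fasttext

model = fasttext.load_model("model.bin")         # the .bin you can already save
words = model.get_words()

with open("model.vec", "w", encoding="utf-8") as out:
    out.write(f"{len(words)} {model.get_dimension()}\n")   # required header line
    for w in words:
        vector = " ".join(f"{v:.5f}" for v in model.get_word_vector(w))
        out.write(f"{w} {vector}\n")
```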
0
61,201,527
0
0
0
0
1
false
0
2019-10-11T10:11:00.000
1
1
0
Numpy can't be imported in Spyder
58,339,023
0.197375
python,numpy,spyder
Problem did not occur again after re-installing Anaconda. Thanks @CarlosCordoba.
When trying to import numpy in spyder i get the following error message: ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy c-extensions failed. - Try uninstalling and reinstalling numpy. - If you have already done that, then: 1. Check that you expected to use Python3.7 from "/home/sltzgs/anaconda3/bin/python",and that you have no directories in your PATH or PYTHONPATH that can interfere with the Python and numpy version "1.17.2" you're trying to use. 2. If (1) looks fine, you can open a new issue at https://github.com/numpy/numpy/issues. Please include details on: - how you installed Python - how you installed numpy - your operating system - whether or not you have multiple versions of Python installed - if you built from source, your compiler versions and ideally a build log - If you're working with a numpy git repository, trygit clean -xdf(removes all files not under version control) and rebuild numpy. Note: this error has many possible causes, so please don't comment on an existing issue about this - open a new one instead. Original error was: No module named 'numpy.core._multiarray_umath' However, importing in a jupyter notebook works fine. How is that possible? I have uninstalled/installed numpy a few times by now and also made sure, that the sys.paths are identical. Any help appreciated. In case any additional information is required I would happily help out. Thanks. Some essentials: python 3.7 Spyder 3.3.6 numpy 1.17.2
0
1
1,431
0
58,340,103
0
0
0
0
1
true
0
2019-10-11T10:47:00.000
0
1
0
Combining relative camera rotations & translations with known pose
58,339,628
1.2
python,opencv,camera-calibration
If you know the relative poses between all the cameras via a chain (relative poses between cameras a, b and b, c), you can combine the rotations and translations from camera a to c via b by R_ac = R_ab R_bc and t_ac = t_ab + R_ab t_bc. In other words, the new rotation from a to c rotates first from a to b and then from b to c. Translation is calculated in the same way; in addition, the second translation vector has to be rotated by R_ab. Some amount of error is expected, depending on how accurate your pairwise calibration is. Errors in camera pose accumulate over the chain. If your camera poses make a full circle, you generally don't get exactly the same pose for the starting/ending camera.
I have used OpenCVs stereocalibrate function to get the relative rotation and translation from one camera to another. What I'd like to do is change the origin of the world space and update the extrinsics of both cameras accordingly. I can easily do this with cameras that have a shared view with SolvePnP but I'd like to do this with cameras in which each is defined by it's pose relative to an adjacent camera where all of their fields don't overlap - like daisy chaining their relative poses. I've determined the pose of the cameras relative to where I'd like the world origin and orientation to be using SolvePnP so that I know what the final extrinsics 'should be'. I've then tried combining the rotation matrices and translation vectors from the stereocalibration with the SolvePnP from the primary camera to get the same value both with ComposeRT and manually but to no avail. Edit: So it turns out that for whatever reason the StereoCalibration and SolvePnP functions produce mirrored versions of the poses as the StereoCalibration appears to produce poses with a 180 degree rotation around the Y-axis. So by applying that rotation to the produced relative rotation matrix and translation vector everything works!
0
1
554
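A small NumPy check of the pose-chaining formulas above, with made-up relative poses.

```python
import numpy as np

# made-up relative poses a->b and b->c (rotation matrices and translation vectors)
R_ab = np.array([[0., -1., 0.],
                 [1.,  0., 0.],
                 [0.,  0., 1.]])       # 90 degrees about Z
t_ab = np.array([1., 0., 0.])
R_bc = np.eye(3)
t_bc = np.array([0., 2., 0.])

R_ac = R_ab @ R_bc                      # compose rotations
t_ac = t_ab + R_ab @ t_bc               # rotate the second translation, then add
print(R_ac)
print(t_ac)
```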
0
58,342,047
0
0
0
0
2
true
3
2019-10-11T12:36:00.000
3
3
0
Error when running tensorflow in virtualenv: module 'tensorflow' has no attribute 'truncated_normal'
58,341,433
1.2
python,tensorflow,keras
Keras 2.2.4 does not support TensorFlow 2.0 (it was released much before TF 2.0), so you can either downgrade TensorFlow to version 1.x, or upgrade Keras to version 2.3, which does support TensorFlow 2.0.
I have the following error when running a CNN made in keras File "venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 4185, in truncated_normal return tf.truncated_normal(shape, mean, stddev, dtype=dtype, seed=seed) AttributeError: module 'tensorflow' has no attribute 'truncated_normal' I have already installed and reinstalled Tensorflow 2.0 several times. What could be happening?
0
1
8,138
0
67,832,940
0
0
0
0
2
false
3
2019-10-11T12:36:00.000
5
3
0
Error when running tensorflow in virtualenv: module 'tensorflow' has no attribute 'truncated_normal'
58,341,433
0.321513
python,tensorflow,keras
In Tensorflow v2.0 and above, "tf.truncated_normal" replaced with "tf.random.truncated_normal"
I have the following error when running a CNN made in keras File "venv/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 4185, in truncated_normal return tf.truncated_normal(shape, mean, stddev, dtype=dtype, seed=seed) AttributeError: module 'tensorflow' has no attribute 'truncated_normal' I have already installed and reinstalled Tensorflow 2.0 several times. What could be happening?
0
1
8,138
0
60,562,288
0
0
0
0
1
false
0
2019-10-11T13:46:00.000
1
1
0
I have a network with 3 features and 4 vector outputs. How is MSE and accuracy metric calculated?
58,342,612
0.197375
python-3.x,tensorflow,neural-network,conv-neural-network,recurrent-neural-network
It's not advised to calculate accuracy for continuous values. For such values you would want a measure of how close the predicted values are to the true values. This task of predicting continuous values is known as regression, and generally the R-squared value is used to measure the performance of the model. (1) If the predicted output consists of continuous values, then mean squared error is the right option. For example, with predicted output vector [2, 4, 8] and actual output vector [2, 3.5, 6], the mean squared error is ((2-2)^2 + (4-3.5)^2 + (8-6)^2) / 3; mean absolute error is another option. (2) If the output consists of classes, then accuracy is the right metric to decide on model performance, e.g. predicted output vector [0, 1, 1] versus actual output vector [1, 0, 1]. The evaluation can then be done with the following: classification accuracy, logarithmic loss, confusion matrix, area under curve, F1 score.
I understand how it works when you have one column output but could not understand how it is done for 4 column outputs.
0
1
49
0
68,995,590
0
0
0
0
1
false
0
2019-10-12T02:55:00.000
2
1
0
CUDA goes out of memory during inference and gives InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory
58,350,456
0.379949
python,tensorflow
I am using Tensorflow 2.3.0 on a remote server. My code was working fine, but suddenly the server got disconnected from the network and my training stopped. When I re-ran the code I got the same issue you got, so I guess this problem is related to the GPU being busy with something that no longer exists. Clearing the session, as the comment said, is enough to solve the problem (I also believe restarting the machine can fix the problem, but I did not get the chance to try that solution). For TensorFlow 2.3, use tf.keras.backend.clear_session(); it solves the issue.
During inference, when the models are being loaded, Cuda throws InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory. I am performing inference on a machine with 6GB of VRAM. A few days back, the machine was able to perform the tasks, but now I am frequently getting these messages. Restarting the device sometimes does help, but is not a viable solution. I have checked through nvidia-smi, but it is also showing only about 500 MB of VRam being used and I was not able to see any spike in memory usage when tensorflow was trying to load the models. I am currently using tensorflow 1.14.0 and python 3.7.4
0
1
2,542
0
58,358,854
0
0
0
0
1
false
0
2019-10-12T21:35:00.000
0
1
0
Dependent vs Independent Variables
58,358,757
0
python,statistics,data-science,cross-validation
Dependence and correlation are different. if 2 variables are dependent, then they are correlated. However, if they are correlated, it is not sure that they are dependent, we need domain knowledge to consider more. To check the correlation, we can use the Correlation Coefficient. For the dependence test, we can use the Chi-Square Test.
If I am given a large data set with many variables is it possible to determine whether any two of them are independent or dependent? Lets assume I know nothing else about the data other than a statistical study. Would looking at the correlation/covariance be able to determine this? The purpose of this is to determine which variables would be the best to use in machine learning to predict a specific outcome. I have some variables with a correlation of 0.40 - 0.50 with one another but I'm not sure if a high correlation == dependence. Thanks
0
1
75
0
58,361,716
0
0
0
0
3
true
4
2019-10-13T01:25:00.000
4
4
0
tf.contrib.layers.fully_connected() in Tensorflow 2?
58,359,881
1.2
python,python-3.x,tensorflow,tensorflow2.0
In TensorFlow 2.0 the package tf.contrib has been removed (and this was a good choice since the whole package was a huge mix of different projects all placed inside the same box), so you can't use it. In TensorFlow 2.0 we need to use tf.keras.layers.Dense to create a fully connected layer, but more importantly, you have to migrate your codebase to Keras. In fact, you can't define a layer and use it, without creating a tf.keras.Model object that uses it.
I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, and it's been deprecated in tensorflow 2.0. Is there an equivalent function, or should I just keep tensorflow v1.x in my virtual environment for this projcet?
0
1
9,478
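A minimal sketch of the tf.keras.layers.Dense replacement; the commented TF 1.x call is shown only for comparison, and defaults such as weight initializers may differ in detail.

```python
import tensorflow as tf

# TF 1.x:  net = tf.contrib.layers.fully_connected(inputs, 128, activation_fn=tf.nn.relu)
# TF 2.x equivalent with Keras:
dense = tf.keras.layers.Dense(128, activation="relu")
outputs = dense(tf.random.normal([4, 20]))   # fully connected layer applied to a (4, 20) batch
print(outputs.shape)                         # (4, 128)
```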
0
61,319,962
0
0
0
0
3
false
4
2019-10-13T01:25:00.000
5
4
0
tf.contrib.layers.fully_connected() in Tensorflow 2?
58,359,881
0.244919
python,python-3.x,tensorflow,tensorflow2.0
tf-slim, as a standalone package, already includes tf.contrib.layers. You can install it with pip install tf-slim and call it with from tf_slim.layers import layers as _layers; _layers.fully_connected(..). It is the same as the original, so it is easy to replace.
I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, and it's been deprecated in tensorflow 2.0. Is there an equivalent function, or should I just keep tensorflow v1.x in my virtual environment for this projcet?
0
1
9,478
0
64,493,808
0
0
0
0
3
false
4
2019-10-13T01:25:00.000
0
4
0
tf.contrib.layers.fully_connected() in Tensorflow 2?
58,359,881
0
python,python-3.x,tensorflow,tensorflow2.0
tf.contrib.layers.fully_connected() is a perfect mess. It is a very old historical mark(or a prehistory DNN legacy). Google has completely deprecated the function since Google hated it. There is no any direct function in TensoFlow 2.x to replace tf.contrib.layers.fully_connected(). Therefore, it is not worth inquiring and getting to know the function.
I'm trying to use tf.contrib.layers.fully_connected() in one of my projects, and it's been deprecated in tensorflow 2.0. Is there an equivalent function, or should I just keep tensorflow v1.x in my virtual environment for this projcet?
0
1
9,478
0
58,363,469
0
0
0
0
1
true
0
2019-10-13T10:37:00.000
1
1
0
How to extract/cut out parts of images classified by the model?
58,362,763
1.2
python,tensorflow,machine-learning,keras,deep-learning
Your thinking is correct, you can have multiple pipelines based on the number of classes. Training: Main model will be an object detection and localization model like Faster RCNN, YOLO, SSD etc trained to classify at a high level like cat and dog. This pipeline provides you bounding box details (left, bottom, right, top) along with the labels. Sub models will be multiple models trained on a lover level. For example a model that is trained to classify breed. This can be done by using models like vgg, resnet, inception etc. You can utilize transfer learning here. Inference: Pass the image through Main model, crop out the detection objects using bounding box details (left, bottom, right, top) and based on the label information, feed it appropriate sub model and extract the results.
I am new to deep learning, I was wondering if there is a way to extract parts of images containing the different label and then feed those parts to different model for further processing? For example,consider the dog vs cat classification. Suppose the image contains both cat and dog. We successfully classify that the image contains both, but how can we classify the breed of the dog and cat present? The approach I thought of was,extracting/cutting out the parts of the image containing dog and cat.And then feed those parts to the respective dog breed classification model and cat breed classification model separately. But I have no clue on how to do this.
0
1
170
0
58,399,685
0
0
0
0
1
true
0
2019-10-13T14:56:00.000
0
2
0
Creating dask_jobqueue schedulers to launch on a custom HPC
58,364,733
1.2
python,python-3.x,dask,dask-distributed
Got it working after going through the source code. Tips for anyone trying: Create a customCluster & customJob class similar to LSFCluster & LSFJob. Override the following submit_command cancel_command config_name (you'll have to define it in the jobqueue.yaml) Depending on the cluster, you may need to override the _submit_job, _job_id_from_submit_ouput and other functions. Hope this helps.
I'm new to dask and trying to use it in our cluster which uses NC job scheduler (from Runtime Design Automation, similar to LSF). I'm trying to create an NCCluster class similar to LSFCluster to keep things simple. What are the steps involved in creating a job scheduler for custom clusters? Is there any other way to interface dask to custom clusters without using JobQueueCluster? I could find info on how to use the LSFCluster/PBSCluster/..., but couldn't find much information on creating one for a different HPC. Any links to material/examples/docs will help Thanks
0
1
60
0
58,468,050
0
0
0
0
1
false
0
2019-10-13T16:35:00.000
0
2
0
How to define a new optimization function for Keras
58,365,610
0
python,tensorflow,math,keras
In fact, after having looked at the Keras code of the Optimizer 2-3 times, not only did I quickly give up trying to understand everything, but it seemed to me that the get_updates function simply returns the gradients already calculated, where I seek to directly access the partial derivation functions of the parameters in order to use the derivatives of these derivatives. So the gradients are useless ...
I would like to implement for Keras a new optimization function that would not be based on the partial derivatives of the parameters, but also on the derivatives of these partial derivatives. How can I proceed?
0
1
87
0
58,385,939
0
0
0
0
1
false
0
2019-10-14T03:42:00.000
0
2
0
DataBricks: using variable in arrays_zip function
58,369,843
0
python,databricks
Using the following: array = ["col1","col2"]; df.select(arrays_zip(*[c for c in array])).show() Thanks
May I know if we can use variable/array in the arrays_zip function ?? For example I declare and array array1=["col1","col2"] then in the dataframe. I write the following : df.withColumn("zipped",arrays_zip(array1)) then it tells me it's not a valid argument not a string or column any one has the idea ?
0
1
376
0
58,375,333
0
0
0
0
2
false
0
2019-10-14T08:24:00.000
0
2
0
How do we add a new face into a trained face recognition model (inception/resnet/vgg) without retraining the complete model?
58,372,751
0
python-3.x,machine-learning,computer-vision,face-recognition,object-recognition
Basically, by the mathematical theory behind machine learning models, you need to do another training iteration with only this new data. But in practice those models, especially the sophisticated ones, rely on multiple training iterations and various techniques of shuffling and noise reduction. A good approach can be to train the model from its previous state with a subset of the data that includes the new data, for a couple of iterations.
Is it possible to add a new face features into trained face recognition model, without retraining it with previous faces? Currently am using facenet architecture,
0
1
718
0
59,680,250
0
0
0
0
2
false
0
2019-10-14T08:24:00.000
1
2
0
How do we add a new face into a trained face recognition model (inception/resnet/vgg) without retraining the complete model?
58,372,751
0.099668
python-3.x,machine-learning,computer-vision,face-recognition,object-recognition
Take a look in Siamese Neural Network. Actually if you use such approach you don't need to retrain the model. Basically you train a model to generate an embedding (a vector) that maps similar images near and different ones far. After you have this model trainned, when you add a new face it will be far from the others but near of the samples of the same person.
Is it possible to add a new face features into trained face recognition model, without retraining it with previous faces? Currently am using facenet architecture,
0
1
718
0
59,246,432
0
1
0
0
1
false
4
2019-10-14T10:15:00.000
1
1
0
modulenotfounderror no module named '_pywrap_tensorflow_internal'
58,374,635
0.197375
python,tensorflow
Mentioning the Answer here for the benefit of the Community. Issue is resolved by using Python==3.6, Tensorflow==1.5, protobuf==3.6.0.
I am using Windows 10, CPU: Intel(R) Core(TM) 2 Duo CPU T6600 @ 2.2GHz (2 CPUs) ~2.2GHz. RAM: 4GB. Video card: ATI Mobility Radeon HD 3400 Series. I uninstalled everything and then I installed Python 3.6 and Tensorflow==1.10.0. When I do import tensorflow, I get this error. modulenotfounderror no module named '_pywrap_tensorflow_internal' I can install whichever Python/Tensorflow version you want. I just want to use Tensorflow. I saw similar issues, but none of them seems to be solving my issue. I know there are many similar questions on github/stackoverflow, but none of them seems to help me.
0
1
242
0
58,376,285
0
0
0
0
2
false
0
2019-10-14T11:48:00.000
1
2
0
How to calculate similarity between categorical variables in collaborative filtering
58,376,140
0.099668
python,recommendation-engine,collaborative-filtering
One good example to calculate distance between categorical features is Hamming Distance where we calculate the number of different instances. On the other hand, you can still calculate Cosine Similarity for user-item data set. As an example; user 1 buys item 1, item 2 user 2 buys item 2, item 3 Then, user vectors are; user 1 = [1, 1, 0] user 2 = [0, 1, 1] And cosine similarity will be 0.5 Same rules apply for items.
I am trying to build a recommender system using collaborative filtering. I am having user-item dataset. I am unable to find similarity between similar user, since i cannot use Euclidean / Cosine distance will not work here. If i convert categorical variable into 0, 1 then will not able to calculate distance. Can you please suggest any recommendation algorithm in python which handles categorical data.
0
1
2,649
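A quick NumPy check of the example in the answer: cosine similarity of the two user vectors is 0.5, and Hamming distance counts the differing positions.

```python
import numpy as np

u1 = np.array([1, 1, 0])   # user 1 bought item 1 and item 2
u2 = np.array([0, 1, 1])   # user 2 bought item 2 and item 3

cosine = u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2))
hamming = np.mean(u1 != u2)        # fraction of positions that differ

print(cosine)    # 0.5, as stated in the answer
print(hamming)   # 0.666...
```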
0
62,181,010
0
0
0
0
2
false
0
2019-10-14T11:48:00.000
0
2
0
How to calculate similarity between categorical variables in collaborative filtering
58,376,140
0
python,recommendation-engine,collaborative-filtering
Cosine similarity will handle the problem as a whole vector which includes all values of the variable. And may not give the answer for correlation. So when you received a good score from cosine similarity, it will not make sure they are also correlated.
I am trying to build a recommender system using collaborative filtering. I am having user-item dataset. I am unable to find similarity between similar user, since i cannot use Euclidean / Cosine distance will not work here. If i convert categorical variable into 0, 1 then will not able to calculate distance. Can you please suggest any recommendation algorithm in python which handles categorical data.
0
1
2,649
0
58,378,980
0
0
0
0
1
false
2
2019-10-14T13:35:00.000
1
1
0
Which actvation function to use for linear-chain CRF classifier?
58,377,983
0.197375
python,tensorflow,keras,neural-network,crf
When using Embeddings → BiLSTM → Dense + softmax, you implicitly assume that the likelihood of the tags is conditionally independent given the RNN states. This can lead to the label bias problem. The distribution over the tags always needs to sum up to one. There is no way to express that the model is not certain about the particular tag does an independent prediction for that. In a CRF, this can get fixed using the transition scores that the CRF learns in addition to scoring the hidden states. The score for the tag can be an arbitrary real number. If the model is uncertain about a tag, all scores can be low (because they do not have to sum up to one) and predictions from the neighboring tags might help in choosing what tag to chose via the transition scores. The likelihood of the tags is not factorized over the sequence but computed for the entire sequence of tags using a dynamic programming algorithm. If you used an activation function with a limited range, it would limit what scores can be assigned to the tags and might the CRF not efficient. If you think you need a non-linearity after the RNN, you can add one dense layer with activation of your choice and then do the linear projection.
I have a sequence tagging model that predicts a tag for every word in an input sequence (essentially named entity recognition). Model structure: Embeddings layer → BiLSTM → CRF So essentially the BiLSTM learns non-linear combinations of features based on the token embeddings and uses these to output the unnormalized scores for every possible tag at every timestep. The CRF classifier then learns how to choose the best tag sequence given this information. My CRF is an instance of the keras_contrib crf, which implements a linear chain CRF (as does tensorflow.contrib.crf). Thus it considers tag transition probabilities from one tag to the next but doesn't maximize the global tag sequence (which a general CRF would). The default activation function is 'linear'. My question is, why is it linear, and what difference would other activations make? I.e., is it linear because it's decisions are essentially being reduced to predicting the likelihood of tag yt given tag y-1 (which could possibly be framed as a linear regression problem)? Or is it linear for some other reason, e.g. giving the user flexibility to apply the CRF wherever they like and choose the most appropriate activation function themselves? For my problem, should I actually be using softmax activation? I already have a separate model with a similar but different structure: Embeddings → BiLSTM → Dense with softmax. So I if I were to use softmax activation in the linear chain CRF (i.e. in the Embeddings layer → BiLSTM → CRF I mentioned at the start of this post), it sounds like it would be nearly identical to that separate model except for being able to use transition probabilities from yt-1 to yt.
0
1
458
0
59,798,103
0
1
0
0
1
false
1
2019-10-14T14:28:00.000
0
1
0
How to bring variable values from csv to rivescript?
58,378,873
0
python,csv,rivescript
You can use an object macro for this. In your .rive file define:
> object read_from_csv python
    # code to read the price for the requested item from the CSV and return it as a string
    return ""
< object
Then call it from a RiveScript reply with: <call>read_from_csv <star></call>
In rivescript, if a user asks me the price of a certain item, I want the bot to look for that items' price in the csv file. Im new to to rivescript so any kind of help would be appreciated.
0
1
89
0
58,399,690
0
0
0
0
2
false
0
2019-10-15T11:24:00.000
0
3
0
binary classification for imbalanced data
58,393,565
0
python
You can use the Synthetic Minority Oversampling Technique (SMOTE) or ADASYN to tackle this. Try both methods and pick whichever gives the results you want.
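For illustration, a minimal sketch of both oversamplers, assuming the imbalanced-learn package is installed and X, y are your training features and labels:

```python
# Assumes: pip install imbalanced-learn; X, y hold the (imbalanced) training data.
from imblearn.over_sampling import SMOTE, ADASYN

X_smote, y_smote = SMOTE(random_state=0).fit_resample(X, y)
X_adasyn, y_adasyn = ADASYN(random_state=0).fit_resample(X, y)
# Train the classifier on the resampled data, but evaluate on an untouched test set.
```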
In data mining, I use a machine learning algorithm to solve the binary classification. However, the distribution of data samples is extremely imbalanced. The ratio between good samples and bad samples is as high as 500:1. Which methods can be used to solve the binary classification for imbalanced data?
0
1
104
0
66,716,747
0
0
0
0
2
false
0
2019-10-15T11:24:00.000
0
3
0
binary classification for imbalanced data
58,393,565
0
python
You can also use asymmetric loss functions, which penalize the model differently depending on the label of the data. In your case the loss function should penalize errors on "bad" samples much more than errors on "good" samples. In this way the model pays more attention to the rare data points.
In data mining, I use a machine learning algorithm to solve the binary classification. However, the distribution of data samples is extremely imbalanced. The ratio between good samples and bad samples is as high as 500:1. Which methods can be used to solve the binary classification for imbalanced data?
0
1
104
0
58,406,856
0
0
0
0
1
false
2
2019-10-15T14:18:00.000
0
2
0
How to cluster large amounts of data with minimal memory usage
58,396,826
0
python,python-3.x,scipy,cluster-analysis,data-analysis
You'll need to choose a different algorithm. Hierarchical clustering needs O(n²) memory and the textbook algorithm O(n³) time. This cannot scale well to large data.
I am using scipy.cluster.hierarchy.fclusterdata function to cluster a list of vectors (vectors with 384 components). It works nice, but when I try to cluster large amounts of data I run out of memory and the program crashes. How can I perform the same task without running out of memory? My machine has 32GB RAM, Windows 10 x64, python 3.6 (64 bit)
0
1
593
0
58,408,874
0
0
0
0
1
false
1
2019-10-16T07:16:00.000
0
5
0
How to generate random values in range (-1, 1) such that the total sum is 0?
58,407,760
0
python,random
Since you are fine with the approach of generating lots of numbers and dividing by the sum, why not generate n/2 positive numbers and divide them by their sum, then generate n/2 negative numbers and divide them by the absolute value of their sum? If you want a random mix of positives and negatives, randomly choose that split first and then continue as above.
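A rough NumPy sketch of this idea (the even split and the use of default_rng are my own choices, not part of the original suggestion):

```python
import numpy as np

def random_zero_sum(n, seed=None):
    # Assumes at least two positives and two negatives, otherwise a value lands exactly on +/-1.
    rng = np.random.default_rng(seed)
    pos = rng.random(n // 2)           # values in (0, 1)
    neg = -rng.random(n - n // 2)      # values in (-1, 0)
    pos /= pos.sum()                   # positives now sum to +1, each stays below 1
    neg /= abs(neg.sum())              # negatives now sum to -1, each stays above -1
    out = np.concatenate([pos, neg])
    rng.shuffle(out)                   # randomize the ordering of signs
    return out

print(random_zero_sum(6).sum())        # ~0.0 up to floating-point error
```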
If the sum is 1, I could just divide the values by their sum. However, this approach is not applicable when the sum is 0. Maybe I could compute the opposite of each value I sample, so I would always have a pair of numbers, such that their sum is 0. However this approach reduces the "randomness" I would like to have in my random array. Are there better approaches? Edit: the array length can vary (from 3 to few hundreds), but it has to be fixed before sampling.
0
1
437
0
58,479,863
0
1
0
0
1
true
0
2019-10-16T08:45:00.000
1
1
0
How to design realtime deeplearnig application for robotics using python?
58,409,257
1.2
python,tensorflow,deep-learning,robotics
Let me summarize everything first. What you want to do: the "object" is on the conveyor belt, the camera takes pictures of the object, and MaskRCNN runs to do the analysis. Here are some problems you're facing. "The first problem is the time model takes to create segmentation masks, it varies from one object to another." -> If you want to reduce the processing time for each image, then an accelerator (FPGA, chip, etc.) or some acceleration technique is needed. Intel OpenVino and the Intel DL stick are a good start. -> If there are too many pictures to process, then you have two choices: 1) add enough machines so all the work can be done, or 2) select only the important jobs and discard the others. The fact that you set the "Maximum Accutuation" to a fixed number (3/sec) made me think that this is the problem you're facing. A background subtractor is a good start for creating image-capture triggers. "Another issue is how do I maintain signals that are generated after computer vision processing, send them to actuators in a manner that it won't get misaligned with the computer vision-based inferencing." -> A "job distributor" like Celery is a good choice here. If messages stack up inside the broker (Redis), then some tasks will have to wait, but this can easily be handled by scaling up your computer. Just a few pieces of advice here: a vision system also includes the hardware parts, so a hardware specification is a must. Clarify the requirements. Impossible things do exist, so sometimes you may have to relax some factors (reliability, cost) of your project.
I have created a machine learning software that detects objects(duh!), processes the objects based on some computer vision parameters and then triggers some hardware that puts the object in the respective bin. The objects are placed on a conveyer belt and a camera is mounted at a point to snap pictures of objects(one object at a time) when they pass beneath the camera. I don't have control over the speed of the belt. Now, the challenge is that I have to configure a ton of things to make the machine work properly. The first problem is the time model takes to create segmentation masks, it varies from one object to another. Another issue is how do I maintain signals that are generated after computer vision processing, send them to actuators in a manner that it won't get misaligned with the computer vision-based inferencing. My initial design includes creating processes responsible for a specific task and then make them communicate with one other as per the necessity. However, the problem of synchronization still persists. As of now, I am thinking of treating the software stack as a group of services as we usually do in backend and make them communicate using something like celery and Redis queue. I am a kind of noob in system design, come from a background of data science. I have explored python's multithreading module and found it unusable for my purpose(all threads run on single core). I am concerned if I used multiprocessing, there could be additional delays in individual processes due to messaging and thus, that would add another uncertainty to the program. Additional Details: Programming Frameworks and Library: Tensorflow, OpenCV and python Camera Resolution: 1920P Maximum Accutuation Speed: 3 triggers/second Deep Learning Models: MaskRCNN/UNet P.S: You can also comment on the technologies or the keywords I should search for because a vanilla search yields nothing good.
0
1
73
0
58,428,014
0
0
0
0
1
true
2
2019-10-16T12:21:00.000
1
1
0
How to prevent keras from renaming layers
58,413,230
1.2
python,tensorflow,keras,neural-network,jupyter-notebook
When using Tensorflow (1.X) as a backend, whenever you add a new layer to any model, the name of the layer -unless manually set- will be set to the default name for that layer, plus an incremental index at the end. Defining a new model is not enough to reset the incrementing index, because all models end up on the same underlying graph. To reset the index, you must reset the underlying graph. In TF 1.X, this is done via tf.reset_default_graph(). In TF 2.0, you can do this via the v1 compatibility API: tf.compat.v1.reset_default_graph() (the latter will also solve some deprecation warnings you might get with the latest versions of TF 1.X)
When I re-create a model, keras always makes a new name for a layer (conv2d_2 and so on) even if I override the model. How to make keras using the same name every time I run it without restarting the kernel.
0
1
305
0
68,501,575
0
0
0
0
1
false
2
2019-10-16T13:41:00.000
-1
1
0
Is there another way to plot a graph in python without matplotlib?
58,414,797
-0.197375
python,matplotlib,graph
In cmd (the command prompt), type pip install matplotlib.
As the title says, that's basically it. I have tried to install matplotlib already but: I am on Windows and "sudo" doesn't work Every solution and answers on Stack Overflow regarding matplotlib (or some other package) not being able to be installed doesn't work for me... I get "Error Code 1" So! Is there any other way to plot a graph in python without matplotlib? If not, can I have help with how to install matplotlib, successfully?
0
1
4,146
0
58,421,378
0
0
0
0
1
false
1
2019-10-16T15:15:00.000
0
1
0
Regression vs Classification for a problem that could be solved by both
58,416,636
0
python,machine-learning,regression,classification
Without having the data and running both classification and regression, a comparison is hard because the metric used for each family is different. For example, comparing the RMSE of a regression with the F1 score (or accuracy) of a classification problem would be an apples-to-oranges comparison. It would be ideal if you could train a good regression model (low RMSE), because that would give you more information than the original pass/fail question. From my past experience with industrial customers: first train all three models you have mentioned, then present the outcomes to your customer and let them give you more direction on which models/outputs are most meaningful for them.
I have a problem that I have been treating as a classification problem. I am trying to predict whether a machine will pass or fail a particular test based on a number of input features. What I am really interested in is actually whether a new machine is predicted to pass or fail the test. It can pass or fail the test by having certain signatures (such as speed, vibration etc) go out of range. Therefore, I could either: 1) Treat it as a pure regression problem; try to predict the actual values of speed, vibration etc 2) Treat it as a pure classification problem; for each observation, feed in whether it passed or failed on the labels, and try to predict this in the tool I am making 3) Treat it as a pseudo problem; where I predict the actual value, and come up with some measure of how confident I am that it is a pass or fail based on distance from the threshold of pass/fail To be clear; I am working on a real problem. I am not interested in getting a super precise prediction of a certain value, just whether a machine is predicted to pass or fail (and bonus extension; how likely that it is to be true). I have been working with classification model as I only have a couple hundred observations and some previous research showed that this might be the best way to treat the problem. However I am wondering now whether this is the right thing to do. What would you do!? Many thanks.
0
1
42
0
58,418,351
0
0
0
0
1
false
0
2019-10-16T16:30:00.000
4
3
0
Data Cleaning with Pandas in Python
58,417,900
0.26052
python-3.x,pandas,data-cleaning
You can select the boolean columns and cast them in one sweep: bool_cols = df.select_dtypes(include='bool').columns; df[bool_cols] = df[bool_cols].astype(int)
I am trying to clean a CSV file for data analysis. How do I convert TRUE/FALSE into 1 and 0? When I searched Google, the suggestion was df.somecolumn = df.somecolumn.astype(int). However this CSV file has 100 columns and not every column is TRUE/FALSE (some are categorical, some are numerical). How do I write one sweeping piece of code that converts every TRUE/FALSE column to 1 and 0 without typing 50 lines of df.somecolumn = df.somecolumn.astype(int)?
0
1
87
0
58,423,130
0
1
0
0
1
false
0
2019-10-16T23:34:00.000
0
4
0
How can I create a 2d array of integers?
58,422,957
0
python,numpy,integer,2d,numpy-ndarray
A line like this should also work: [[0 for i in range(10)] for i in range(10)]
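As a quick check (the 10×10 shape is just the example from the question):

```python
import numpy as np

a = np.array([[0 for _ in range(10)] for _ in range(10)])
print(a.dtype)                        # an integer dtype (e.g. int64), inferred from the Python ints

b = np.zeros((10, 10), dtype=int)     # another common way to get integer zeros directly
```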
How do I create a 2D array of zeros that will be stored as integers and not floats in python? np.zeros((10,10)) creates floats.
0
1
934
0
58,433,457
0
0
0
0
1
false
2
2019-10-17T13:05:00.000
1
3
0
Not able to outer join two dataframe
58,433,397
0.066568
python,pandas
You should try the merge method. pd.merge(df1, df2, how='outer', on='a')
I am using this code to merge two dataframes: pd.concat(df1, df2, on='a', how='outer'). I am getting the following error: TypeError: concat() got an unexpected keyword argument 'on'
0
1
64
0
58,448,415
0
0
0
0
1
true
0
2019-10-17T21:09:00.000
0
1
0
How to use a pre-trained object detection in tensorflow?
58,440,762
1.2
python-3.x,tensorflow,deep-learning,object-detection
As has been pointed out by @Matias Valdenegro in the comments, your first question does not make sense. For your second question, however, there are multiple ways to do so. The term that you're searching for is Transfer Learning (TL). TL means transferring the "knowledge" (basically it's just the weights) from a pre-trained model into your model. Now there are several types of TL. 1) You transfer the entire weights from a pre-trained model into your model and use that as a starting point to train your network. This is done in a situation where you now have extra data to train your model but you don't want to start the training over again. Therefore you just load the weights from your previous model and resume the training. 2) You transfer only some of the weights from a pre-trained model into your new model. This is done in a situation where you have a model trained to classify between, say, 5 classes of objects. Now, you want to add/remove a class. You don't have to re-train the whole network from the start if the new class that you're adding has somewhat similar features to (an) existing class(es). Therefore, you build another model with the same exact architecture as your previous model except for the fully-connected layers, where you now have a different output size. In this case, you'll want to load the weights of the convolutional layers from the previous model and freeze them while only re-training the fully-connected layers. To perform these in Tensorflow: 1) The first type of TL can be performed by creating a model with the same exact architecture as the previous model and simply loading the model using the tf.train.Saver().restore() module and continuing the training. 2) The second type of TL can be performed by creating a model with the same exact architecture for the parts where you want to retain the weights, and then specifying the names of the weights you want to load from the previous pre-trained weights. You can use the parameter "trainable=False" to prevent Tensorflow from updating them. I hope this helps.
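A minimal TF 1.x sketch of the second type of TL described above; the variable scope name, checkpoint path and build_model function are hypothetical placeholders:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

logits = build_model(inputs)  # hypothetical: rebuilds the architecture with the same variable scopes

# Collect only the variables whose weights should be restored (e.g. the convolutional layers).
conv_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='conv')
saver = tf.train.Saver(var_list=conv_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, 'path/to/pretrained.ckpt')  # hypothetical checkpoint path
    # ... continue training; layers built with trainable=False are left untouched by the optimizer
```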
How can I use the weights of a pre-trained network in my tensorflow project? I know some theory information about this but no information about coding in tensorflow.
0
1
144
0
58,485,627
0
0
0
0
1
false
0
2019-10-18T06:40:00.000
1
1
0
A Variation on Neural Machine Translation
58,445,247
0.197375
python-3.x,deep-learning,lstm,recurrent-neural-network,seq2seq
In that case, you would be learning a model that copies the input symbol to the output. It is trivial for the attention mechanism to learn the identity correspondence between the encoder and decoder states, and RNNs can easily implement a counter. The model thus won't provide any realistic estimate of the probability; it will simply assign most of the probability mass to the corresponding word in the source sentence.
I have been processing this thought in my head for a long time now. So in NMT, We pass in the text in the source language in the encoder seq2seq stage and the language in the target language in the decoder seq2seq stage and the system learns the conditional probabilities for each word occurring with its target language word. Ex: P(word x|previous n-words). We train this by teacher forcing. But what if I pass in the input sentence again as input to the decoder stage instead of the target sentence. What would it learn in this case? I'm guessing this will learn to predict the most probable next word in the sentence given the previous text right? What are your thoughts Thanks in advance
0
1
31
0
58,627,454
0
0
0
0
1
false
4
2019-10-18T08:49:00.000
-2
5
0
How to play video on google colab with opencv?
58,447,228
-0.07983
python,opencv,computer-vision,jupyter,google-colaboratory
Here is the command on Google Colab: ret, frame = input_video.read(). Hope this helps you.
I am working on a project related to object detection using Mask RCNN on google colab. I have a video uploaded to my colab. I want to display it as a video while processing it at the runtime using openCV. I want to do what cv2.VideoCapture('FILE_NAME') does on the local machine. Is there any way to do it?
0
1
14,645
0
58,451,829
0
0
0
0
1
false
0
2019-10-18T13:05:00.000
1
1
0
Why the max_depth of every decision tree in my random forest classifier model are the same?
58,451,535
0.197375
python,classification,random-forest,decision-tree
If I am not mistaken, a decision tree is likely to reach its max depth; I would even say that it almost surely will. There is nothing wrong with that: whatever space you allow your tree to grow into, the tree will occupy. Scaled up to a random forest, again there is nothing wrong with it. You should focus on choosing the right max_depth, because with a greater max_depth comes a greater risk of overfitting. Try different values and compare how you do on your test data.
Why is the max_depth of every decision tree in my random forest classifier model the same? I set max_depth=30 on my RandomForestClassifier, and when I print each tree (trees = RandomForestClassifier.estimators_), I find every tree's max_depth is the same. I really don't know where the problem is or how it happened.
0
1
334
0
58,456,411
0
0
0
0
1
true
0
2019-10-18T18:11:00.000
1
1
0
Is there a way to increase the number of units in a dense layer and still be able to load previously saved weights that used a lower number of units?
58,456,169
1.2
python,keras
In theory you could add more units and initialize them randomly, but that would make the original training worthless. A more common method for increasing the complexity of a model while leveraging earlier training is to add more layers and resume training.
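A hedged Keras sketch of "reuse the trained layer and add a new one on top"; the input shape, layer names and weight file are placeholders, not from the original question:

```python
from tensorflow import keras

# Rebuild the original 512-unit architecture and restore its trained weights.
old = keras.Sequential([
    keras.layers.Dense(512, activation='relu', input_shape=(784,), name='hidden'),
    keras.layers.Dense(10, activation='softmax', name='out'),
])
old.load_weights('best_512.h5')   # hypothetical path to the saved weights

# Keep the trained hidden layer and stack new (randomly initialized) layers on top.
new = keras.Sequential([
    old.get_layer('hidden'),
    keras.layers.Dense(1024, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
new.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```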
I am starting to learn how to build neural networks. Here is what I did: I ran a number of epochs with units in my dense layer at 512. Then I saved the weights with the best accuracy. Then I increased the number of units in my dense layer to 1024 and attempted to reload my weights with the best accuracy but with the old weights of 512. I got an error. I understand why I got the error but I am wondering if there is a way to increase the number of units and still be able to use my saved weights or do I need to retrain my model from the beginning again?
0
1
53
0
58,468,462
0
0
0
0
1
true
1
2019-10-18T20:30:00.000
1
1
0
Read only specific rows of .parquet files matching criteria?
58,457,788
1.2
python,pyspark,pyarrow
As of 0.15.0, pyarrow doesn't have this feature, but we (in the Apache Arrow project) are actively working on this and hope to include it in the next major release.
I'm working against a filesystem filled with .parquet files. One of the columns, 'id', uniquely identifies a machine. I was able to use pyspark to open all .parquet files in a certain directory path, then create a set([]) of the values from the 'id' column. I'd like to open all other rows in all other files, where the 'id' matches one of the values in the previously calculated set. I was able to do this via pyspark, but it's quite complex and requires me to instantiate a local spark server. I'm trying to find a way to do this via pyarrow, but it seems that it's read_pandas / read methods 'filters' argument can only filter on partition data, and not arbitrary column data. Is there a way to achieve what I'm looking for here? I can't open the entire dataset and then use Python to filter out rows where the 'id' doesn't match, because it doesn't fit in memory.
0
1
1,045
0
58,459,640
0
0
0
0
1
true
1
2019-10-18T23:44:00.000
0
1
0
Choosing time_step for LSTM
58,459,327
1.2
python,keras,recurrent-neural-network
Arrays input into the LSTM have shape: (N_SAMPLES, SEQUENCE_LENGTH, N_FEATURES).
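For example, assuming train_x is a NumPy array of shape (20214000, 9) and you pick a window length yourself:

```python
import numpy as np

SEQ_LEN = 100                                  # assumed sequence length; tune for your data
n_samples = train_x.shape[0] // SEQ_LEN

x = train_x[:n_samples * SEQ_LEN].reshape(n_samples, SEQ_LEN, 9)
y = train_y[:n_samples * SEQ_LEN].reshape(n_samples, SEQ_LEN, 1)
```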
I am trying to reshape my input for my LSTM Network. I have a training data of train_x (20214000 columns x 9 rows) and train_y (20214000 columns x 1 row). How do I reshape my train_x such that I can feed it into my RNN? I have 9 features so it would be something like: train_x.reshape(?,?,9) and train_y.reshape(?,?,1)
0
1
30
0
58,463,773
0
0
0
0
1
true
1
2019-10-19T12:03:00.000
0
1
0
Keras - Using large numbers of features
58,463,482
1.2
python,machine-learning,keras,keras-layer,tf.keras
I suppose each input entry has size (20000, 1) and you have 500 entries which make up your dataset? In that case you can start by reducing the batch_size, but I also suppose that you mean that even the network weights don't fit in your GPU memory. In that case the only thing (that I know of) that you can do is dimensionality reduction. You have 20000 features, but it is highly unlikely that all of them are important for the output value. With PCA (Principal Component Analysis) you can check the importance of all your features, and you will probably see that only a small subset of them combined accounts for 90% or more of the importance for the end result. In this case you can disregard the unimportant features and create a network that predicts the output based on, let's say, only 1000 (or even fewer) features. An important note: the only reason I can think of where you would need that many features is if you are dealing with an image, a spectrum (you can see a spectrum as a 1D image), etc. In this case I recommend looking into convolutional neural networks. They are not fully-connected, which removes a lot of trainable parameters while probably performing even better.
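A small scikit-learn sketch of the PCA step; the 90% variance threshold is an assumption:

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=0.90)        # keep enough components to explain ~90% of the variance
X_reduced = pca.fit_transform(X)    # X assumed to have shape (n_samples, 20000)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```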
I'm developing a Keras NN that predicts the label using 20,000 features. I can build the network, but have to use system RAM since the model is too large to fit in my GPU, which has meant it's taken days to run the model on my machine. The input is currently 500,20000,1 to an output of 500,1,1 -I'm using 5,000 nodes in the first fully connected (Dense) layer. Is this sufficient for the number of features? -Is there a way of reducing the dimensionality so as to run it on my GPU?
0
1
415
0
58,484,891
0
1
0
0
1
false
0
2019-10-19T16:13:00.000
1
1
0
PyTorch - a functional equivalent of nn.Module
58,465,570
0.197375
python,pytorch
I already found the solution: if you have an operation inside a module which creates a new tensor, then you have to use self.register_buffer in order to take full advantage of the automatic moving of tensors between devices.
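A minimal sketch of the pattern (the module and its constant are hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Scaler(nn.Module):
    def __init__(self):
        super().__init__()
        # A plain tensor attribute would not follow model.to(device); a registered buffer does.
        self.register_buffer('scale', torch.tensor(0.5))

    def forward(self, x):
        return F.relu(x) * self.scale   # stateless functional op combined with the registered buffer
```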
As we know we can wrap arbitrary number of stateful building blocks into a class which inherits from nn.Module. But how is it supposed to be done when you want to wrap a bunch of stateless functions (from nn.Functional), in order to fully utilize things which nn.Module allows you to, like automatic moving of tensors between CPU and GPU with just model.to(device)?
0
1
65
0
68,735,569
0
0
0
0
1
false
1
2019-10-19T17:46:00.000
0
1
0
Training a machine learning model on multiple CSV files?
58,466,396
0
python,pandas,machine-learning,scikit-learn,pytorch
If all of the files contain the same features, you can concatenate them. If some features are preprocessed differently (for example, they have different ranges in different files), you should make them consistent before concatenating. Then use the obtained big data frame/array for model training. Also, consider shuffling the rows.
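A possible pandas sketch, assuming every game's CSV shares the same columns and lives in one folder:

```python
import glob
import pandas as pd

files = glob.glob('games/*.csv')        # hypothetical folder of per-game CSVs
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
df = df.sample(frac=1, random_state=0)  # shuffle the rows before training
```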
I want to train a machine learning model on multiple csv files that are all unique. Each file is a collection of time series data from basketball games. I want to train a model to look at each game and be able to predict outcomes. Should I simply tell sci kit learn or another package to iterate through the files in the folder of interest and run regressions on each? Thank you in advance.
0
1
393
0
58,471,416
0
0
0
0
1
true
0
2019-10-19T23:41:00.000
1
1
0
Saving a high number of images as an array
58,468,914
1.2
python,numpy,image-processing
Do these steps for each of the videos: Load the data into one NumPy array. Write to disk using np.save() with the extension .npy. Add the .npy file to a .zip compressed archive using the zipfile module. The end result will be as if you loaded all 224 arrays and saved them at once using np.savez_compressed, but it will only use enough RAM to process a single video at a time, instead of having to store all the uncompressed data in memory at once. Finally, np.load() (or zipfile) can be used to load the data from disk, one video at a time, or even using concurrent.futures.ThreadPoolExecutor to load multiple files at once using multiple cores for decompression to save time (you can get speedup almost linear with the number of cores, if your disk is fast).
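A rough sketch of the per-video save-and-zip loop; load_and_preprocess and the file names are placeholders:

```python
import io
import zipfile
import numpy as np

with zipfile.ZipFile('frames.zip', 'w', compression=zipfile.ZIP_DEFLATED) as zf:
    for i, video_path in enumerate(video_paths):
        frames = load_and_preprocess(video_path)   # hypothetical helper returning one ndarray per video
        buf = io.BytesIO()
        np.save(buf, frames)                       # serialize to .npy in memory
        zf.writestr(f'video_{i:03d}.npy', buf.getvalue())
```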
I have a high number of videos and I want to extract the frames, pre-process them and then create an array for each video . So far I have created the arrays but the final size of each array is too big for all of the videos. I have 224 videos, each resulting in a 6GB array totaling more than 1.2TB. I have tried using numpy.save and pickle.dump but both create the same size on the system. Do you have a recommendation or an alternative way in general?
0
1
114
0
58,469,792
0
0
0
0
1
false
11
2019-10-20T02:53:00.000
1
8
0
Numpy: get the index of the elements of a 1d array as a 2d array
58,469,671
0.024995
python,numpy,numpy-ndarray
Pseudocode: get the "number of 1d arrays in the 2d array", by subtracting the minimum value of your numpy array from the maximum value and then plus one. In your case, it will be 5-0+1 = 6 initialize a 2d array with the number of 1d arrays within it. In your case, initialize a 2d array with 6 1d array in it. Each 1d array corresponds to a unique element in your numpy array, for example, the first 1d array will correspond to '0', the second 1d array will correspond to '1',... loop through your numpy array, put the index of the element into the right corresponding 1d array. In your case, the index of the first element in your numpy array will be put to the second 1d array, the index of the second element in your numpy array will be put to the third 1d array, .... This pseudocode will take linear time to run as it depends on the length of your numpy array.
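A direct Python translation of this pseudocode (linear in the length of the array):

```python
import numpy as np

def group_indices(a):
    offset = a.min()
    out = [[] for _ in range(a.max() - offset + 1)]   # one list per possible value
    for idx, value in enumerate(a):
        out[value - offset].append(idx)
    return out

print(group_indices(np.array([1, 2, 2, 0, 0, 1, 3, 5])))
# [[3, 4], [0, 5], [1, 2], [6], [], [7]]
```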
I have a numpy array like this: [1 2 2 0 0 1 3 5] Is it possible to get the index of the elements as a 2d array? For instance the answer for the above input would be [[3 4], [0 5], [1 2], [6], [], [7]] Currently I have to loop the different values and call numpy.where(input == i) for each value, which has terrible performance with a big enough input.
0
1
3,200
0
58,476,288
0
0
0
0
1
false
0
2019-10-20T17:13:00.000
1
1
0
Tensorflow optimizer with negative feedback?
58,475,363
0.197375
python,tensorflow,optimization
You may do a forward pass, check the loss, and then do backward if you think the loss is acceptable. In TF 1.x it requires some tf.cond and manual calculation and application of gradients. The same in TF 2.0 only the control flow is easier, but you have to use gradient_tape and still apply gradients manually.
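A TF 2.x eager-mode sketch of that "check the loss before stepping" idea; compute_loss, previous_loss and the 1.5x threshold are assumptions, not part of any built-in optimizer:

```python
import tensorflow as tf  # assumes TF 2.x eager execution

with tf.GradientTape() as tape:
    loss = compute_loss(model, batch)          # hypothetical loss function

if float(loss) < 1.5 * previous_loss:          # only step if the loss has not blown up
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    previous_loss = float(loss)
# otherwise skip the update and keep the current parameters
```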
I am optimizing a tensorflow model. It is not a neural net, I am just using tensorflow for easy derivative computations. In any case, it seems that loss surface has a steep edge somewhere, and my loss will sometimes "pop out" of the local minimum it is currently targeting, the loss will go up a great deal, and the optimizer will go gallivanting off after some other optimum elsewhere. I want it to not do that thing. Specifically, I want it to look at the loss, be all like "holy crap that just went up a whole bunch, I'd better backtrack a bit." Even though the current gradient may want to send it off elsewhere, I want it to "go back" in sense, and continue trying to find the optimum it was previously targeting. Is there a tensorflow optimizer that has some kind of "negative feedback" in this way?
0
1
36
0
58,486,136
0
0
0
0
1
true
2
2019-10-21T06:31:00.000
2
1
0
Repeating images in training dataset for tensorflow object detection models
58,480,861
1.2
python,tensorflow,object-detection,training-data
Should I use the same image for multiple records? No, because anything in the image that is not annotated as an object is classified as background, which is an implicit object type/class. So when you train your model with an image that has an object, but that object is not annotated correctly, the performance of the model decreases (because the model considers that object and other similar entities as background) Could that be problematic when training? Yes, this issue is going to affect the performance of the model in a bad way. In fact, a good thing to do is to add some images that do not have any objects in them and let the model be trained on them as background with no instance of a bounding box. Would it be better if I could split said images so that they only contained one object? Yes, this can help. Also, you can consider adding multiple bounding boxes for each image. But never leave any object without an annotated bounding box, even if the object is truncated or occluded.
I'm training a tensorflow object detection model which has been pre-trained using COCO to recognize a single type/class of objects. Some images in my dataset have multiple instances of such objects in them. Given that every record used in training has a single bounding box, I wonder what is the best approach to deal with the fact that my images may have more than one object of the same class in them. Should I use the same image for multiple records? Could that be problematic when training? Would it be better if I could split said images so that they only contained one object?
0
1
473
0
58,482,713
0
0
0
0
1
false
0
2019-10-21T08:36:00.000
1
1
0
how to use 1D-convolutional neural network for non-image data
58,482,580
0.197375
python,tensorflow,conv-neural-network
You first have to know whether it is sensible to use a CNN for your dataset. You could use a sliding 1D-CNN if the features are sequential (e.g. ECG, DNA, audio). However, I doubt that is the case for you. Using a fully connected neural net would be a better choice.
I have a dataset that I have loaded as a data frame in Python. It consists of 21392 rows (the data instances, each row is one sample) and 79 columns (the features). The last column i.e. column 79 has string type labels. I would like to use a CNN to classify the data in this case and predict the target labels using the available features. This is a somewhat unconventional approach though it seems possible. However, I am very confused on how the methodology should be as I could not find any sample code/ pseudo code guiding on using CNN for Classifying non-image data, either in Tensorflow or Keras. Any help in this regard will be highly appreciated. Cheers!
0
1
371
0
58,491,153
0
0
0
0
2
false
1
2019-10-21T09:25:00.000
0
3
0
Training in Python and Deploying in Spark
58,483,371
0
python-3.x,scala,apache-spark-mllib,xgboost,apache-spark-ml
You can load and munge the data using PySpark SQL, then bring the data to the local driver using collect/toPandas (this is the performance bottleneck), then train XGBoost on the local driver. For prediction, prepare the test data as an RDD, broadcast the XGBoost model to each RDD partition, and predict the data in parallel. This can all be in one script that you spark-submit, but to keep things concise I recommend splitting train and predict into two scripts. Because steps 2 and 3 happen at the driver level and do not use any cluster resources, your workers are not doing anything during training.
Is it possible to train an XGboost model in python and use the saved model to predict in spark environment ? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible ? edit: Thanks all for the answer , but my question is really this. I see the below issues when I train and predict different bindings of XGBoost. During training I would be using XGBoost in python, and when  predicting I would be using XGBoost in mllib. I have to load the saved model from XGBoost python (Eg: XGBoost.model file) to be predicted in spark, would this model be compatible to be used with the predict function in the mllib The data input formats of both XGBoost in python and XGBoost in spark mllib are different. Spark takes vector assembled format but with python, we can feed the dataframe as such. So, how do I feed the data when I am trying to predict in spark with a model trained in python. Can I feed the data without vector assembler ? Would XGboost predict function in spark mllib take non-vector assembled data as input ?
0
1
1,017
0
58,483,658
0
0
0
0
2
false
1
2019-10-21T09:25:00.000
0
3
0
Training in Python and Deploying in Spark
58,483,371
0
python-3.x,scala,apache-spark-mllib,xgboost,apache-spark-ml
You can run your Python script on Spark using the spark-submit command; that way your Python code runs on Spark and you can make the predictions in Spark.
Is it possible to train an XGboost model in python and use the saved model to predict in spark environment ? That is, I want to be able to train the XGboost model using sklearn, save the model. Load the saved model in spark and predict in spark. Is this possible ? edit: Thanks all for the answer , but my question is really this. I see the below issues when I train and predict different bindings of XGBoost. During training I would be using XGBoost in python, and when  predicting I would be using XGBoost in mllib. I have to load the saved model from XGBoost python (Eg: XGBoost.model file) to be predicted in spark, would this model be compatible to be used with the predict function in the mllib The data input formats of both XGBoost in python and XGBoost in spark mllib are different. Spark takes vector assembled format but with python, we can feed the dataframe as such. So, how do I feed the data when I am trying to predict in spark with a model trained in python. Can I feed the data without vector assembler ? Would XGboost predict function in spark mllib take non-vector assembled data as input ?
0
1
1,017
0
58,502,790
0
0
0
0
1
false
0
2019-10-21T21:30:00.000
0
1
0
Any way to prevent modifications to content of a ndarray subclass?
58,494,393
0
python,numpy,subclass,numpy-ndarray
Looks like the answer is: np.ndarray.setflags(write=False)
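For example:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
a.setflags(write=False)   # freeze the array contents
a[0, 0] = 99              # raises ValueError: assignment destination is read-only
```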
I am creating various classes for computational geometry that all subclass numpy.ndarray. The DataCloud class, which is typical of these classes, has Python properties (for example, convex_hull, delaunay_trangulation) that would be time consuming and wasteful to calculate more than once. I want to do calculations once and only once. Also, just in time, because for a given instance, I might not need a given property at all. It is easy enough to set this up by setting self.__convex_hull = None in the constructor and, if/when the convex_hull property is called, doing the required calculation, setting self.__convex_hull, and returning the calculated value. The problem is that once any of those complicated properties is invoked, any changes to the contents made, external to my subclass, by the various numpy (as opposed to DataCloud subclass) methods will invalidate all the calculated properties, and I won't know about it. For example, suppose external code simply does this to the instance: datacloud[3,8] = 5. So is there any way to either (1) make the ndarray base class read-only once any of those properties is calculated or (2) have ndarray set some indicator that there has been a change to its contents (which for my purposes makes it dirty), so that then invoking any of the complex properties will require recalculation?
0
1
54
0
60,212,560
0
0
0
0
1
false
1
2019-10-22T08:47:00.000
-1
1
0
Why does my GridSearchCV().fit() run slower now that I'm using a better processor?
58,500,382
-0.197375
python,scikit-learn
Perhaps the size of your parameter grid is smaller than 48?
I'm running a range of GridSearchCV().fits for a RandomForestClassifier over a range of parameter sets. From the start I have been setting n_jobs=-1 on the RandomForestClassifier. For the past week I've been doing this with an i5 4-core processor and it was okay but not very fast. I've just upgraded to a computer with an AMD Ryzen Threadripper 2970WX with 24 cores and 48 logical processors. However it doesn't seem to be running any faster at all! When running the GridSearchCV, the majority of cores are either idle or at very low utilization. What's going wrong?
0
1
40
0
58,508,299
0
0
0
0
1
false
1
2019-10-22T15:55:00.000
0
2
0
Why get different results when comparing two dataframes?
58,508,089
0
python,pandas,dataframe,comparison
Maybe the rows in the two dataframes are not ordered the same way? The dataframes will only be equal when the rows corresponding to the same index are the same.
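One way to normalize the row order before comparing (a sketch, assuming both frames share the same columns):

```python
import pandas as pd

cols = list(df1.columns)
a = df1.sort_values(cols).reset_index(drop=True)
b = df2.sort_values(cols).reset_index(drop=True)
print(a.equals(b))   # now insensitive to the original row ordering
```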
I am comparing two dataframes: .equals() gives me False, but if I append the two dataframes together and use drop_duplicates() it gives me nothing. Can someone explain this?
0
1
488
0
58,702,595
0
0
0
0
1
true
0
2019-10-23T07:03:00.000
1
1
0
How to make XGBoost model to learn its mistakes
58,517,184
1.2
python,model,xgboost
Did you verify whether those samples are outliers? If they are, try to make your model more robust to them by changing the hyperparameters or scaling your dataset.
My XGBoost model regularly makes prediction mistakes on the same samples. I want to let the model know about its mistakes and correct its prediction behavior. How can I do this? I tried to solve the problem by decreasing the logistic regression threshold (increasing model sensitivity), but that leads to a radical increase in false positive predictions. I also tried to tune the model's parameters (colsample_bytree, subsample, min_child_weight, max_depth), but it didn't help. In text recognition software I have seen a function that lets you tell the program "you predicted this letter incorrectly" so that it learns to recognize the letter correctly. Is there something similar in machine learning? Maybe there are some regularization methods that redistribute weight between features. Thank you.
0
1
127
0
58,523,442
0
0
0
0
1
false
0
2019-10-23T11:33:00.000
0
1
0
Big Amount of Data on a PC?
58,521,937
0
python-3.x,database,apache-spark
Basically, for handling large amounts of data you have to use a big data tool like Hadoop or Apache Spark. You can use PySpark, which combines Python and Spark and is efficient for data processing. If you have a flat file format, I suggest converting the data to the ORC file format before processing it in PySpark, which will improve performance.
Hello, I want to deal with a big amount of data: 1 billion rows and 23 columns. In pandas I cannot even read the data. So how can I handle this data on my computer, which is a Dell XPS 9570? Can I use Spark for that? Any advice for dealing with it on my PC? Thank you.
0
1
45
0
58,537,277
0
0
0
0
1
false
1
2019-10-24T01:56:00.000
0
1
1
Azure Databricks with Python scripts
58,533,089
0
python,azure,databricks
@Sathya Can you provide more information on what the different python scripts as well as the config files do? As for the python scripts, depending on what their function is, you could create one or more python notebooks in Databricks and copy the contents into them. You can then run these notebooks as part of a job or reference them in other notebooks with %run /path/to/notebook
I am new to Python. Need help with Azure databricks. Scenario: Currently I am working on a project which uses HDInsight cluster to submit spark jobs and they use Python script with classes and functions [ .py] which resides in the /bin/ folder in the edge node. We propose to use Databricks instead of HDInsight cluster and the PoC requires minimum effort. Doubts: In the HDInsight cluster all the python scripts are stored in /bin/ folder and conf files with .yml in /conf/ folder. Can we replicate the same structure in the databricks DBFS so that minimum changes in the code to replicate the location. 2.I am new to Python, I have a bunch of scripts in the /bin/ folder. How can I upload or install those scripts in databricks. My assumption is, I need to create a package and install on the cluster as a library. Correct me if I am wrong. How do I run the Python scripts from Databricks.
0
1
1,230
0
58,967,916
0
0
0
0
1
false
2
2019-10-24T14:28:00.000
1
1
0
ML Model Overfits if input data is normalized
58,543,537
0.197375
python,tensorflow,machine-learning,keras,resnet
Mentioning the solution below for the benefit of the community. The problem is resolved by making the following changes: batch normalization layers within ResNet didn't work properly when frozen, so the batch normalization layers within ResNet should be unfrozen before training the model. Image preprocessing (normalization) for ResNet should be done using the Z-score, instead of the preprocessing_function in Keras' ImageDataGenerator.
Please help me understand why my model overfits if my input data is normalized to [-0.5. 0.5] whereas it does not overfit otherwise. I am solving a regression ML problem trying to detect location of 4 key points on images. To do that I import pretrained ResNet 50 and replace its top layer with the following architecture: Flattening layer right after ResNet Fully Connected (dense) layer with 256 nodes followed by LeakyRelu activation and Batch Normalization Another Fully Connected layer with 128 nodes also followed by LeakyRelu and Batch Normalization Last Fully connected layer (with 8 nodes) which give me 8 coordinates (4 Xs and 4 Ys) of 4 key points. Since I stick with Keras framework, I use ImageDataGenerator to produce flow of data (images). Since output of my model (8 numbers: 2 coordinates for each out of 4 key points) normalized to [-0.5, 0.5] range, I decided that input to my model (images) should also be in this range and therefore normalized it to the same range using preprocessing_function in Keras' ImageDataGenerator. Problem came out right after I started model training. I have frozen entire ResNet (training = False) with the goal in mind to first move gradients of the top layers to the proper degree and only then unfreeze a half of ResNet and finetune the model. When training with ResNet frozen, I noticed that my model suffers from overfitting right after a couple of epochs. Surprisingly, it happens even though my dataset is quite decent in size (25k images) and Batch Normalization is employed. What's even more surprising, the problem completely disappears if I move away from input normalization to [-0.5, 0.5] and go with image preprocessing using tf.keras.applications.resnet50.preprocess_input. This preprocessing method DOES NOT normalize image data and surprisingly to me leads to proper model training without any overfitting. I tried to use Dropout with different probabilities, L2 regularization. Also tried to reduce complexity of my model by reducing the number of top layers and the number of nodes in each top layer. I did play with learning rate and batch size. Nothing really helped if my input data is normalized and I have no idea why this happens. IMPORTANT NOTE: when VGG is employed instead of ResNet everything seems to work well! I really want to figure out why this happens. UPD: the problem was caused by 2 reasons: - batch normalization layers within ResNet didn't work properly when frozen - image preprocessing for ResNet should be done using Z-score After two fixes mentioned above, everything seems to work well!
0
1
208
0
58,551,852
0
0
0
0
1
true
0
2019-10-24T14:33:00.000
0
1
0
Possible to rename independent variable name in Built-in lmfit fitting models?
58,543,638
1.2
python,lmfit
Sorry, I don't think that is possible. I think you will have to rewrite the functions to use q instead of x. That is, lmfit.Model uses function inspection to determine the names of the function arguments, and most of the built-in models really do require the first positional argument to be named x.
I am using lmfit to do small angle X-ray scattering pattern fitting. To this end, I use the Model class to wrap my functions and to make Composite Models which works well. However, it happened that I wrote all my function with 'q' as the independent variable (convention in the discipline). Now I wanted to combine some of those q-functions with some of the built-in models. It clashes, because the independent_variable for those is 'x'. I have tried to do something like modelBGND = lmfit.models.ConstantModel(independent_vars=['q']), but it gives the error: ValueError: Invalid independent variable name ('q') for function constant Of course this can be solved, by either rewriting the built-in function again in 'q', or by recasting all my previously written functions in terms of 'x'. I am just curious to hear if there was a more straight forward approach?
0
1
177
0
58,549,479
0
0
0
0
2
false
2
2019-10-24T18:19:00.000
0
3
0
How to connect ML model which is made in python to react native app
58,547,095
0
python,react-native,deployment
You can look into the Core ML library for React Native applications if you are developing for the iOS platform; otherwise, creating a REST API is a good option. (Some developers say that latency is an issue, but it also depends on what kind of model and dataset you are using.)
I made an ML model in Python. Now I want to use this model in a React Native app, meaning the frontend will be based on React Native and the model is made in Python. How can I connect the two with each other?
0
1
3,021
0
58,548,043
0
0
0
0
2
false
2
2019-10-24T18:19:00.000
1
3
0
How to connect ML model which is made in python to react native app
58,547,095
0.066568
python,react-native,deployment
Create a REST API in Flask/Django to deploy your model on a server, create endpoints for the separate functions, then call those endpoints in your React Native app. That's how it works.
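A minimal Flask sketch of such an endpoint; the pickle file, route and payload format are hypothetical:

```python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
model = pickle.load(open('model.pkl', 'rb'))      # hypothetical pickled scikit-learn model

@app.route('/predict', methods=['POST'])
def predict():
    features = request.get_json()['features']     # e.g. {"features": [1.0, 2.0, 3.0]}
    prediction = model.predict([features])
    return jsonify({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)            # the React Native app then calls this endpoint
```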
I made an ML model in Python. Now I want to use this model in a React Native app, meaning the frontend will be based on React Native and the model is made in Python. How can I connect the two with each other?
0
1
3,021
0
58,556,122
0
0
0
0
1
true
2
2019-10-25T09:30:00.000
3
1
0
Support tensorflow v1.x and v2.0 on same PC
58,555,825
1.2
python,tensorflow,anaconda
Use different environments. If you have the Anaconda distribution you can use conda (the conda variant is given in []). Install virtualenv first: pip install virtualenv [not required for Anaconda]. Create an env for v1.x: virtualenv v1x OR [conda create --name v1x]. Activate the env: source v1x/bin/activate OR [conda activate v1x]. Install TensorFlow v1.x inside the activated env using pip install tensorflow==1.X and continue working. Close the v1.x env: deactivate OR [deactivate]. Create an env for v2.x: virtualenv v2x OR [conda create --name v2x]. Activate the env: source v2x/bin/activate OR [conda activate v2x]. Install TensorFlow v2.x and continue working. Close the v2.x env: deactivate OR [deactivate]. You can always activate and deactivate the virtual environments as you need. If you want all Anaconda packages in the conda env you can use conda create --name v1x anaconda.
Code with tensorflow v1.x is not compatible with tensorflow v2.0. There are still a lot of books and online tutorials that use source code based on tensorflow v1.x. If I upgrade to v2.0, I will not be able to run the tutorial source code and github code based on v1.x. Is it possible to have both v1.x and v2.0 supported on the same machine? I am using python v3.7 anaconda distribution.
0
1
624
0
58,557,195
0
1
0
0
1
true
3
2019-10-25T10:48:00.000
4
1
0
What is "faster" spyder or jupyter notebook?
58,557,092
1.2
python,jupyter-notebook,spyder
Jupyter is basically a browser application, whereas spyder is a dedicated IDE. When I work with large datasets, I never use Jupyter as Spyder seems to run much faster. The only way to truly compare this would be to run/time the same script on both Spyder and Jupyter a couple of times, but in my experience Spyder always beats Jupyter when it comes to computation time. EDIT: As @carlos mentions in his comment: "in principle both Spyder and Jupyter use the exact same technology to run your code. On top of that, we have a lot of customizations to improve user experience." When testing, I noticed however that jupyter always runs slower. I think it has to do with how many resources your PC allocates to a browser versus an IDE.
Maybe it is too broad for this place, but I have to work on a huge database/dataframe with some text processing. The dataframes are stored on my computer as CSV. Is it faster in terms of runtime to use Spyder or Jupyter Notebook? I am mainly using pandas and nltk. The outcome is only a CSV file, which I have to store on my computer.
0
1
4,377
0
58,558,104
0
0
0
0
1
true
0
2019-10-25T11:39:00.000
1
1
0
What does it mean if I can not get 0 error on very small training dataset?
58,557,857
1.2
python,keras,deep-learning
Short answer: no. Reason: it may be that a small number of examples are mislabeled. In the case of classification, try to identify which examples the network is unable to classify correctly; this will tell you whether it has learnt all it can. It can also happen if your data has no pattern that can be learnt - if the data is essentially random. If the data is noisy, sometimes the noise will mask the features that are required for prediction. The same goes for a dataset that is chaotic in the sense that the features vary quickly and dramatically between (and among) labels - i.e. your data follows a very complex (non-smooth) function. Hope this helps!
In order to validate if the network can potentially learn often people try to overfit on the small dataset. I can not reach 0 error with my dataset but the output looks like that network memorizes the training set. (MPAE ~1 %) Is it absolutely necessary to get 0 error in order to prove that my network potentially works on my dataset?
0
1
50
0
58,566,460
0
0
0
0
1
false
0
2019-10-25T15:15:00.000
0
2
0
can pandas autocorr handle irregularly sample timeseries data?
58,561,265
0
python,pandas,autocorrelation
This is not quite a programming question. Ideally, your measure of autocorrelation would use data measured at the same frequency/same time interval between observations. Any autocorr function in any programming package will simply measure the correlation between the series and whatever lag you want. It will not correct for irregular frequencies. You would have to fix this yourself by 1) setting up a series with a regular frequency, 2) mapping the actual values you have onto that date structure, 3) interpolating values where you have gaps/NaN, and then 4) running your autocorr. Long story short, autocorr would not do all this work for you. If I have misunderstood the problem you are worried about, let me know. It would be helpful to know a little more about the sampling frequencies. I have had to deal with things like this a lot.
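A pandas sketch of those steps; the hourly grid and the 24-step lag are assumed values:

```python
import pandas as pd

s = df['my column'].resample('1H').mean()   # 1-2) put the existing values on a regular (assumed hourly) grid
s = s.interpolate()                         # 3) fill the gaps / NaNs
print(s.autocorr(lag=24))                   # 4) autocorrelation at the lag of interest
```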
I have a dataframe with datetime index, where the data was sampled irregularly (the datetime index has gaps, and even where there aren't gaps the spacing between samples varies). If I do: df['my column'].autocorr(my_lag) will this work? Does autocorr know how to handle irregularly sampled datetime data?
0
1
158
0
58,566,065
0
0
0
0
1
true
76
2019-10-25T20:33:00.000
73
3
0
What is the difference between sparse_categorical_crossentropy and categorical_crossentropy?
58,565,394
1.2
python,tensorflow,machine-learning,keras,deep-learning
Simply: categorical_crossentropy (cce) produces a one-hot array containing the probable match for each category, while sparse_categorical_crossentropy (scce) produces a category index of the most likely matching category. Consider a classification problem with 5 categories (or classes). In the case of cce, the one-hot target may be [0, 1, 0, 0, 0] and the model may predict [.2, .5, .1, .1, .1] (probably right). In the case of scce, the target index may be [1] and the model may predict: [.5]. Consider now a classification problem with 3 classes. In the case of cce, the one-hot target might be [0, 0, 1] and the model may predict [.5, .1, .4] (probably inaccurate, given that it gives more probability to the first class). In the case of scce, the target index might be [0], and the model may predict [.5]. Many categorical models produce scce output because you save space, but lose A LOT of information (for example, in the 2nd example, index 2 was also very close.) I generally prefer cce output for model reliability. There are a number of situations to use scce, including: when your classes are mutually exclusive, i.e. you don't care at all about other close-enough predictions, or when the number of categories is so large that the prediction output becomes overwhelming. 220405: response to "one-hot encoding" comments: one-hot encoding is used for a category feature INPUT to select a specific category (e.g. male versus female). This encoding allows the model to train more efficiently: training weight is a product of category, which is 0 for all categories except for the given one. cce and scce are a model OUTPUT. cce is a probability array over the categories, totaling 1.0. scce shows the MOST LIKELY category, totaling 1.0. scce is technically a one-hot array, just like a hammer used as a door stop is still a hammer, but its purpose is different. cce is NOT one-hot.
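In Keras terms, the difference shows up only in how the targets are encoded; a sketch assuming a compiled 5-class classifier called model:

```python
import numpy as np
from tensorflow import keras

# categorical_crossentropy expects one-hot targets ...
y_onehot = keras.utils.to_categorical([1, 4, 0], num_classes=5)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# ... sparse_categorical_crossentropy expects the same labels as plain integer indices.
y_sparse = np.array([1, 4, 0])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```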
What is the difference between sparse_categorical_crossentropy and categorical_crossentropy? When should one loss be used as opposed to the other? For example, are these losses suitable for linear regression?
0
1
48,257
0
58,588,578
0
0
0
0
1
false
1
2019-10-28T07:25:00.000
0
1
0
Is there any way to identify heading and paragraph from scanned images using tensorflow object detection?
58,587,021
0
python,tensorflow,object-detection-api
I think the best approach would be to train a network for the problem itself; you won't need a huge model for it. The labelling of the input dataset might be annoying, though. Otherwise you could work exclusively with computer vision, leaving neural networks aside, but then you need a good idea for how to solve the problem and some solid understanding of computer vision as well.
I need to identify headings and paragraphs in scanned images. Is there any better way to identify them? I already tried the ssd_inception_v2 model, but it is not accurate.
0
1
147
0
58,602,260
0
1
0
0
1
false
0
2019-10-29T03:59:00.000
0
2
0
What's an easy way to test out Numpy operators?
58,601,271
0
python,numpy
What do you mean by "numpy operators"? Does it mean the library functions or just numerical operations that apply on every entry of the array? My suggestion is to start with researching on ndarray, the most important data structure of numpy. See what it is and what operations it offers.
I have Numpy installed. I'm trying to import it on Sublime so that I can test it out and see how it works. I'm trying to learn how to build an image classifier. I'm very new to Numpy and it's pretty confusing to me. Are there basic Numpy operators I can run on Python so I can start getting an idea on how it works?
0
1
34
0
58,603,631
0
1
0
0
1
true
1
2019-10-29T07:49:00.000
2
1
0
Program to divide the array into N continuous subarray so that the sum of each subarray is odd
58,603,191
1.2
python,arrays,algorithm
Start from the first element of the array. Use a variable cur_sum to keep track of the current sum. Iterate the array until cur_sum becomes odd; that becomes the first subarray. Then reset cur_sum = 0 and start iterating the remaining array. Once you get (n-1) such subarrays, you have to check whether the sum of the remaining elements is odd (it then becomes the nth subarray); if not, then it is not possible.
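A small Python sketch of this greedy idea (the example array is made up):

```python
def split_into_odd_sums(arr, n):
    parts, cur = [], []
    for x in arr:
        cur.append(x)
        if len(parts) < n - 1 and sum(cur) % 2 == 1:   # close a subarray as soon as its sum is odd
            parts.append(cur)
            cur = []
    if len(parts) == n - 1 and cur and sum(cur) % 2 == 1:
        parts.append(cur)                               # the remainder forms the nth subarray
        return parts
    return None                                         # no valid split found

print(split_into_odd_sums([1, 2, 3, 4, 5], 3))          # [[1], [2, 3], [4, 5]]
```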
The problem gives two inputs: the array (arr) and the number of subarrays to be made out of it (n). The sum of each subarray should be odd. It is already clear that if all the numbers are even, an odd-sum subarray is not possible. For an odd sum, two consecutive numbers should be either odd+even or even+odd. But I can't seem to break the array into N such subarrays. Please help with the logic. I may be completely wrong with my approach; I just can't seem to get the hang of it.
0
1
215
0
58,615,181
0
1
0
0
1
false
0
2019-10-29T20:06:00.000
0
1
0
conda lists latest version of cutadapt as 2.6 but only runs cutadapt 2.4
58,614,631
0
python,conda
I just solved my own problem. It seems I had installed cutadapt using both conda and pip at some point. When I did 'pip list' I saw cutadapt v2.4. So I removed this version of cutadapt 'pip uninstall cutadapt'. Now when I do 'cutadapt --version' the last version installed using conda is shown as '2.6'.
Conda lists the most current version of cutadapt as 2.6 but when I check the version and run the program it only uses the older cutadapt v2.4 I've installed cutadapt using conda 4.7.12: conda install -c bioconda cutadapt When I do conda list it says I have the latest version of cutadapt: cutadapt 2.6 py36h516909a_0 bioconda When I do which cutadapt it points to the right place: /path/to/miniconda3/envs/myenv.2/bin/cutadapt But when I do cutadapt --version it lists an older version: 2.4 Can anyone help me get the latest version of cutadapt up and running using conda?
0
1
93