H: Probabilities of a Poisson distribution not making sense
I am trying to find probabilities for the number of orders a restaurant gets on Sundays. For the last 6 months the average is 1000 orders, without any big anomalies like 700 or 1300. This looks like a case for the Poisson distribution, so I used the scipy library in Python and plotted the probabilities.
The graph shows the probability of getting more orders than the value on the x-axis, e.g. the probability of orders > 800 is 1, > 950 is close to 0.95, > 1000 is close to 0.5, and so on.
I am not sure if these probabilities are correct, especially for high order counts: for > 1050 orders the probability is less than 0.1, and for > 1100 it is almost zero, which I find odd because it seems quite likely to get 1100 or slightly more orders occasionally. So what are your thoughts on the probabilities calculated using the Poisson distribution?
from scipy.stats import poisson
import matplotlib.pyplot as plt

prob_arr = []
orders_arr = []
dist = poisson(1000)  # mu = 1000
for num_orders in range(800, 1201, 50):
    prob_arr.append(dist.sf(num_orders))  # survival function: P(X > num_orders)
    orders_arr.append(num_orders)
plt.figure(figsize=(12,4))
plt.plot(orders_arr,prob_arr,linewidth=2)
plt.xlabel("Orders Count")
plt.ylabel("Probability")
plt.title("Probabilities of Minimum Orders Count");
As suggested by @Romain Reboulleau, I tried the truncated normal distribution, and the graph below shows the probabilities of getting more orders than the value on the x-axis, derived using scipy's truncnorm.
The problem is the low probability of getting orders >= 1100: it is 1.3% according to truncnorm vs only 0.09% with Poisson. This would make non-stats business managers laugh, and I am not sure how to justify it.
AI: This seems quite right to me. A Poisson(1000) law has a standard deviation of $\sqrt{1000}$, which is around $31.6$.
At such values, the Poisson law pretty much behaves like a normal distribution, so the probability of getting a value greater than 1100 would be very close to 0 (around 0.1%).
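To make the numbers concrete, here is a quick check with scipy (a minimal sketch; the printed values are approximate):
from scipy.stats import norm, poisson

mu = 1000
print(poisson(mu).sf(1100))                    # P(X > 1100) under Poisson(1000), roughly 0.001
print(norm(loc=mu, scale=mu ** 0.5).sf(1100))  # normal approximation, also roughly 0.001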
Perhaps what you need is not a Poisson law, but a truncated normal distribution? |
H: Linear regression compute theta
I'm trying to compute the theta for a linear regression exercise.
x = x * 1.0
y = y * 1.0
# add ones
X = np.ones((201, 2))
X[:, 1] = x
# convert to matrix
Y = y[:,np.newaxis]
# compute teta
p1 = (X.dot(X.T))**-1
p2 = (X.T).dot(Y)
theta = p1.dot(p2)
The last line failed with error :
ValueError: shapes (201,201) and (2,1) not aligned: 201 (dim 1) != 2
(dim 0)
I don't understand why. I'm simply trying to implement this:
here my x
[ 69. 95. 21. 20. 33. 13. 17. 27. 32. 26. 25. 31. 18. 24.
25. 25. 27. 37. 28. 39. 31. 25. 25. 29. 38. 16. 20. 50.
37. 33. 40. 46. 45. 19. 45. 56. 60. 23. 49. 51. 48. 51.
41. 47. 45. 47. 37. 52. 45. 43. 26. 49. 49. 39. 60. 65.
60. 60. 47. 57. 65. 33. 56. 69. 61. 60. 47. 49. 60. 72.
70. 59. 30. 39. 25. 68. 63. 78. 50. 70. 75. 60. 70. 28.
62. 90. 78. 80. 72. 78. 62. 80. 80. 75. 68. 76. 81. 92.
75. 82. 80. 95. 85. 58. 33. 94. 100. 77. 80. 80. 92. 92.
99. 98. 90. 96. 92. 86. 73. 80. 96. 72. 91. 53. 60. 95.
97. 103. 105. 112. 110. 107. 65. 97. 110. 102. 101. 98. 109. 120.
107. 125. 120. 108. 130. 112. 121. 118. 107. 87. 114. 110. 118. 126.
112. 93. 80. 116. 145. 95. 100. 98. 94. 110. 100. 110. 109. 131.
133. 86. 84. 145. 113. 130. 108. 94. 136. 140. 125. 156. 91. 130.
102. 130. 142. 88. 178. 150. 185. 164. 214. 191. 156. 145. 175. 150.
129. 160. 201. 240. 145.]
And my y :
[1810. 2945. 685. 720. 830. 850. 850. 855. 875. 890. 890. 900.
900. 900. 920. 930. 950. 955. 960. 970. 980. 980. 980. 1000.
1015. 1040. 1060. 1100. 1130. 1160. 1200. 1210. 1235. 1250. 1310. 1315.
1320. 1350. 1370. 1385. 1390. 1400. 1400. 1410. 1410. 1415. 1420. 1445.
1450. 1450. 1470. 1480. 1490. 1530. 1530. 1580. 1590. 1590. 1595. 1630.
1640. 1650. 1660. 1690. 1690. 1690. 1700. 1700. 1700. 1715. 1730. 1750.
1750. 1750. 1780. 1790. 1790. 1790. 1800. 1810. 1830. 1840. 1840. 1850.
1860. 1870. 1920. 1930. 1940. 1950. 1980. 1990. 2000. 2030. 2040. 2060.
2080. 2085. 2090. 2130. 2130. 2145. 2160. 2160. 2170. 2190. 2250. 2270.
2270. 2290. 2320. 2335. 2335. 2358. 2360. 2380. 2380. 2390. 2400. 2400.
2403. 2410. 2420. 2425. 2490. 2500. 2530. 2550. 2550. 2550. 2560. 2570.
2570. 2590. 2625. 2635. 2675. 2700. 2710. 2710. 2720. 2725. 2750. 2805.
2820. 2825. 2830. 2840. 2840. 2850. 2850. 2870. 2875. 2900. 2915. 2945.
2950. 3050. 3050. 3080. 3090. 3090. 3100. 3150. 3160. 3180. 3220. 3300.
3300. 3350. 3400. 3450. 3490. 3500. 3525. 3570. 3765. 3765. 3790. 3930.
3950. 3965. 4061. 4200. 4260. 4310. 4760. 4800. 4900. 5160. 5200. 5229.
5250. 5383. 5460. 5500. 5560. 5775. 6200. 6700. 7383.]
AI: p1 = (X.dot(X.T))**-1 needs to be p1 = np.linalg.inv((X.T.dot(X)))
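For reference, a minimal sketch of the corrected computation, assuming x and y are the NumPy arrays from the question:
import numpy as np

X = np.ones((len(x), 2))
X[:, 1] = x
Y = y[:, np.newaxis]

p1 = np.linalg.inv(X.T.dot(X))  # (X^T X)^-1, a 2x2 matrix
p2 = X.T.dot(Y)                 # X^T Y, shape (2, 1)
theta = p1.dot(p2)              # normal equation: theta = (X^T X)^-1 X^T Y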
X**-1 doesn't invert a matrix; it is equivalent to 1/X (element-wise). You also need to transpose the first matrix, not the second. |
H: Difference between Non linear regression vs Polynomial regression
I have been reading a couple of articles about polynomial regression vs. non-linear regression, and they say the two are different concepts. But when you say polynomial regression, doesn't that imply it is non-linear? Then why does the data science world treat them as different concepts?
AI: the difference is probably easily seen with an example. Linear regression assumes a form $$f(x, \beta) = \beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n$$ with some covariates $x_i$ and some parameters to estimate $\beta_i$. An example of non-linear regression would be something like $$f(x,\beta) = \frac{\beta_1 x_1 + \beta_2 x_2}{\beta_3 x_3 + \cdots + \beta_n x_n}.$$ Essentially you are assuming your model to be of a nonlinear form. Polynomial regression on the other hand is a fixed type of regression where the model follows a fixed form $$f(x, \beta) = \beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_n x^n$$ which is a nonlinear function, however it is still linear in the parameters $\beta$ you are trying to estimate.
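As a small illustration that polynomial regression is linear in the parameters, here is a sketch that fits a quadratic with ordinary linear least squares (the data are made up purely for this example):
import numpy as np

x = np.linspace(-3, 3, 50)
y = 1.0 + 2.0 * x + 0.5 * x**2 + np.random.normal(scale=0.1, size=x.shape)

# design matrix with columns 1, x, x^2 -- the model is still linear in beta
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary (linear) least squares recovers beta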
That is to say:
Polynomial regression is non-linear in the sense that $x$ is not linearly related to $f(x, \beta)$; the equation itself is still linear in the parameters.
Non-linear regression, on the other hand, is non-linear both in the equation and in the relationship between $x$ and $f(x, \beta)$. |
H: Prevent overfitting when decreasing model complexity is not possible
I'm fairly new to machine learning and, as an exercise for a more complicated task, I'm trying to do what I thought was a trivial task. Suppose as input I have population density maps. These are 2D images with one channel, in which each pixel is the count of persons living in that area.
From that data, I'd like my model to "estimate" (in fact it would be possible to calculate the exact solution) the total number of persons living on that density map. Essentially, the task consists of just taking the sum of the 2D input.
I have tried many architectures and I found that the simpler the better. In fact a model containing no hidden layers performed best:
from keras.layers.core import Dense
from keras.layers import Flatten, Input
from keras.models import Model

inputs = Input(shape=(225, 350, 1))
x = Flatten()(inputs)   # flatten the density map
x = Dense(1)(x)         # single linear output: the estimated population count
model = Model(inputs, x)
While this performs very well on the training data, it fares very poorly on the validation data. I know that this is a sign of overfitting, but how can I prevent overfitting given that it is not possible to further decrease the complexity of the model? Or would another approach/architecture be better altogether?
Note that I have performed the usual data pre-processing (normalising inputs and outputs).
Thanks in advance for any hints.
AI: is this about trying to have the model learn the sum function? Because if it is you can always initialize the weights to be 1 and then make the entire model untrainable.
If all weights are one and the model has linear activations, it will just compute the sum of your inputs. If it's important to you to actually train the model to do this, I would perhaps add a Dropout layer.
Your model is so simple it can be written down in closed form: it is $\sum_{i=1}^{225 \times 350} w_i x_i$, and therefore I would just set the $w_i$ to be 1.
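A minimal sketch of that idea, using the 225 x 350 input shape from the question:
import numpy as np
from keras.layers import Dense, Flatten, Input
from keras.models import Model

inputs = Input(shape=(225, 350, 1))
x = Flatten()(inputs)
outputs = Dense(1, use_bias=False)(x)         # linear activation by default
model = Model(inputs, outputs)

dense = model.layers[-1]
dense.set_weights([np.ones((225 * 350, 1))])  # every weight equal to 1 -> the output is the sum
dense.trainable = False                       # freeze the layer, no training needed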
(In general initializing the weights to 1, the ideal solution, should make training very fast) |
H: Tensorflow error: Input signature not matching inputs
I've been following Sentdex's tutorials on YouTube about deep learning and I've encountered an error while trying to load an image and run it through the model. The error says that the inputs do not match the input signature, but I've been struggling to find out how to change that.
Any help would be really appreciated!
The code for loading the model and test image is below:
import cv2
import tensorflow as tf
CATEGORIES = ["Dog", "Cat"]
def prepare(filepath):
    IMG_SIZE = 50
    img_array = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
model = tf.keras.models.load_model("PyCharmProject\\64x3-CNN.model")
prediction = model.predict([prepare('PyCharmProject\dog.jpg')])
print(prediction)
and the error I am receiving:
ValueError Traceback (most recent call last)
<ipython-input-1-241c64aef27c> in <module>
12 model = tf.keras.models.load_model("PyCharmProject\\64x3-CNN.model")
13
---> 14 prediction = model.predict([prepare('PyCharmProject\dog.jpg')])
15 print(prediction)
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
912 max_queue_size=max_queue_size,
913 workers=workers,
--> 914 use_multiprocessing=use_multiprocessing)
915
916 def reset_metrics(self):
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training_v2.py in predict(self, model, x, batch_size, verbose, steps, callbacks, **kwargs)
444 return self._model_iteration(
445 model, ModeKeys.PREDICT, x=x, batch_size=batch_size, verbose=verbose,
--> 446 steps=steps, callbacks=callbacks, **kwargs)
447
448
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training_v2.py in _model_iteration(self, model, mode, x, y, batch_size, verbose, sample_weight, steps, callbacks, **kwargs)
426 mode=mode,
427 training_context=training_context,
--> 428 total_epochs=1)
429 cbks.make_logs(model, epoch_logs, result, mode)
430
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
120 step=step, mode=mode, size=current_batch_size) as batch_logs:
121 try:
--> 122 batch_outs = execution_function(iterator)
123 except (StopIteration, errors.OutOfRangeError):
124 # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in execution_function(input_fn)
82 # `numpy` translates Tensors to values in Eager mode.
83 return nest.map_structure(_non_none_constant_value,
---> 84 distributed_function(input_fn))
85
86 return execution_function
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\def_function.py in __call__(self, *args, **kwds)
447 # This is the first call of __call__, so we have to initialize.
448 initializer_map = object_identity.ObjectIdentityDictionary()
--> 449 self._initialize(args, kwds, add_initializers_to=initializer_map)
450 if self._created_variables:
451 try:
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\def_function.py in _initialize(self, args, kwds, add_initializers_to)
390 self._concrete_stateful_fn = (
391 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 392 *args, **kwds))
393
394 def invalid_creator_scope(*unused_args, **unused_kwds):
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
1837 if self.input_signature:
1838 args, kwargs = None, None
-> 1839 graph_function, _, _ = self._maybe_define_function(args, kwargs)
1840 return graph_function
1841
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\function.py in _maybe_define_function(self, args, kwargs)
2137 graph_function = self._function_cache.primary.get(cache_key, None)
2138 if graph_function is None:
-> 2139 graph_function = self._create_graph_function(args, kwargs)
2140 self._function_cache.primary[cache_key] = graph_function
2141 return graph_function, args, kwargs
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2028 arg_names=arg_names,
2029 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2030 capture_by_value=self._capture_by_value),
2031 self._function_attributes,
2032 # Tell the ConcreteFunction to clean up its graph once it goes out of
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
913 converted_func)
914
--> 915 func_outputs = python_func(*func_args, **func_kwargs)
916
917 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\def_function.py in wrapped_fn(*args, **kwds)
333 # __wrapped__ allows AutoGraph to swap in a converted function. We give
334 # the function a weak reference to itself to avoid a reference cycle.
--> 335 return weak_wrapped_fn().__wrapped__(*args, **kwds)
336 weak_wrapped_fn = weakref.ref(wrapped_fn)
337
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in distributed_function(input_iterator)
69 strategy = distribution_strategy_context.get_strategy()
70 outputs = strategy.experimental_run_v2(
---> 71 per_replica_function, args=(model, x, y, sample_weights))
72 # Out of PerReplica outputs reduce or pick values to return.
73 all_outputs = dist_utils.unwrap_output_dict(
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\distribute\distribute_lib.py in experimental_run_v2(self, fn, args, kwargs)
762 fn = autograph.tf_convert(fn, ag_ctx.control_status_ctx(),
763 convert_by_default=False)
--> 764 return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
765
766 def reduce(self, reduce_op, value, axis):
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\distribute\distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
1803 kwargs = {}
1804 with self._container_strategy().scope():
-> 1805 return self._call_for_each_replica(fn, args, kwargs)
1806
1807 def _call_for_each_replica(self, fn, args, kwargs):
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\distribute\distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
2148 self._container_strategy(),
2149 replica_id_in_sync_group=constant_op.constant(0, dtypes.int32)):
-> 2150 return fn(*args, **kwargs)
2151
2152 def _reduce_to(self, reduce_op, value, destinations):
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\autograph\impl\api.py in wrapper(*args, **kwargs)
290 def wrapper(*args, **kwargs):
291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292 return func(*args, **kwargs)
293
294 if inspect.isfunction(func) or inspect.ismethod(func):
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in _predict_on_batch(***failed resolving arguments***)
158 def _predict_on_batch(model, x, y=None, sample_weights=None):
159 del y, sample_weights
--> 160 return predict_on_batch(model, x)
161
162 func = _predict_on_batch
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in predict_on_batch(model, x)
366
367 with backend.eager_learning_phase_scope(0):
--> 368 return model(inputs) # pylint: disable=not-callable
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
848 outputs = base_layer_utils.mark_as_return(outputs, acd)
849 else:
--> 850 outputs = call_fn(cast_inputs, *args, **kwargs)
851
852 except errors.OperatorNotAllowedInGraphError as e:
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\sequential.py in call(self, inputs, training, mask)
253 if not self.built:
254 self._init_graph_network(self.inputs, self.outputs, name=self.name)
--> 255 return super(Sequential, self).call(inputs, training=training, mask=mask)
256
257 outputs = inputs # handle the corner case where self.layers is empty
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\network.py in call(self, inputs, training, mask)
695 ' implement a `call` method.')
696
--> 697 return self._run_internal_graph(inputs, training=training, mask=mask)
698
699 def compute_output_shape(self, input_shape):
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\network.py in _run_internal_graph(self, inputs, training, mask)
840
841 # Compute outputs.
--> 842 output_tensors = layer(computed_tensors, **kwargs)
843
844 # Update tensor_dict.
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
848 outputs = base_layer_utils.mark_as_return(outputs, acd)
849 else:
--> 850 outputs = call_fn(cast_inputs, *args, **kwargs)
851
852 except errors.OperatorNotAllowedInGraphError as e:
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\keras\saving\saved_model\utils.py in return_outputs_and_add_losses(*args, **kwargs)
55 inputs = args[inputs_arg_index]
56 args = args[inputs_arg_index + 1:]
---> 57 outputs, losses = fn(inputs, *args, **kwargs)
58 layer.add_loss(losses, inputs)
59 return outputs
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\def_function.py in __call__(self, *args, **kwds)
439 # In this case we have not created variables on the first call. So we can
440 # run the first trace but we should fail if variables are created.
--> 441 results = self._stateful_fn(*args, **kwds)
442 if self._created_variables:
443 raise ValueError("Creating variables on a non-first call to a function"
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\function.py in __call__(self, *args, **kwargs)
1811 def __call__(self, *args, **kwargs):
1812 """Calls a graph function specialized to the inputs."""
-> 1813 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
1814 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
1815
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\function.py in _maybe_define_function(self, args, kwargs)
2094 if self.input_signature is None or args is not None or kwargs is not None:
2095 args, kwargs = self._function_spec.canonicalize_function_inputs(
-> 2096 *args, **kwargs)
2097
2098 cache_key = self._cache_key(args, kwargs)
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\function.py in canonicalize_function_inputs(self, *args, **kwargs)
1640 inputs,
1641 self._input_signature,
-> 1642 self._flat_input_signature)
1643 return inputs, {}
1644
~\LT1Kqob5UDEML61gCyjnAcfMXgkdP3wGcge-packages\tensorflow_core\python\eager\function.py in _convert_inputs_to_signature(inputs, input_signature, flat_input_signature)
1706 flatten_inputs)):
1707 raise ValueError("Python inputs incompatible with input_signature:\n%s" %
-> 1708 format_error_message(inputs, input_signature))
1709
1710 if need_packing:
ValueError: Python inputs incompatible with input_signature:
inputs: (
Tensor("IteratorGetNext:0", shape=(None, 50, 50, 1), dtype=uint8))
input_signature: (
TensorSpec(shape=(None, None, None, 1), dtype=tf.float32, name=None))
I assume it might be of relevance, so I'll put the code for building the model below too:
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
import pickle
import time
X = np.asarray(pickle.load(open("X.pickle", "rb")))
y = np.asarray(pickle.load(open("y.pickle", "rb")))
X = X/255.0
dense_layers = [1]
layer_sizes = [64]
conv_layers = [3]
for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            NAME = "{}-conv-{}-nodes-{}-dense-{}".format(conv_layer, layer_size, dense_layer, int(time.time()))
            tensorboard = TensorBoard(log_dir='logs\{}'.format(NAME))
            model = Sequential()
            model.add(Conv2D(layer_size, (3,3), input_shape = X.shape[1:]))
            model.add(Activation("relu"))
            model.add(MaxPooling2D(pool_size = (2, 2)))
            for l in range(conv_layer-1):
                model.add(Conv2D(layer_size, (3,3)))
                model.add(Activation("relu"))
                model.add(MaxPooling2D(pool_size = (2, 2)))
            model.add(Flatten())
            for l in range(dense_layer):
                model.add(Dense(layer_size))
                model.add(Activation("relu"))
            model.add(Dense(1))
            model.add(Activation("sigmoid"))
            model.compile(loss = "binary_crossentropy",
                          optimizer = "adam",
                          metrics = ['accuracy'])
            model.fit(X, y, batch_size=13, epochs=1, validation_split=0.1, steps_per_epoch=1727, callbacks=[tensorboard])
            model.save('64x3-CNN.model')
AI: I have found the solution.
In the model, the data is normalized by being divided by 255.
I had to do the same thing to the array of new data inside the prepare function.
This is what the function looks like now and it works:
def prepare(filepath):
    IMG_SIZE = 50
    img_array = cv2.imread(filepath, cv2.IMREAD_GRAYSCALE)
    img_array = img_array/255.0
    new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
    return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1) |
H: Process of solving a problem using ML
Newbie here. I finished a couple of online courses and read Introduction to Statistical Learning. I'm thinking of working on a personal project and would appreciate it if you could clarify some issues:
What does "cleaning" the data consists of? How do you know if your data needs cleaning anyway?
What features of the data determines which algorithm I should use? Or is it mostly trial and error?
Are there any additional things that I should be doing other than trying different algorithms and testing how well they fit the data?
Thank you!
AI: Going to your first question,
What does "cleaning" the data consists of? How do you know if your
data needs cleaning anyway?
Cleaning refers to the various processes used to transform the data so that we can utilize it to the fullest. Removing unwanted features, incomplete entries, and NaN or null values (if the dataset is numeric) are all part of "cleaning" the data. This process is important because directly feeding unclean data to the model may result in degraded performance or runtime errors.
Once you have a large dataset, you need to transform it according to the problem which you are solving. If you are training a model to classify movie reviews as positive or negative then you can easily remove columns like "user_id", "category" etc. as these do not contribute to the polarity of the review.
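For instance, a tiny sketch with pandas (the file and column names are hypothetical):
import pandas as pd

df = pd.read_csv("reviews.csv")                # hypothetical dataset
df = df.drop(columns=["user_id", "category"])  # features that don't affect review polarity
df = df.dropna()                               # drop incomplete entries / NaN values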
What features of the data determine which algorithm I should use? Or
is it mostly trial and error?
Well, the algorithm you choose will mostly depend on your problem. Decision trees are good for smaller datasets, and Deep Neural Networks (DNNs) would be used for complex classification and regression problems.
Text classification systems use embedding layers, TF-IDF vectorization, or n-gram models. We basically choose a model based on these factors:
Size of the dataset.
The complexity of the problem.
Computational resources ( in some cases ).
We can always play around with the hyperparameters and also modify the model so that it better fulfils our need.
Are there any additional things that I should be doing other than
trying different algorithms and testing how well they fit the data?
We choose a model based on the problem. CNNs have been prevalent in image-related problems. Word embeddings are useful in text classification. LSTMs are used in time-series-related problems.
Tip: You can try to implement various algorithms from scratch ( without using scikit-learn or ML frameworks ). This helps you in developing an intuition regarding how the model learns from the data and makes predictions. |
H: Does resizing images during training affect the bounding box annotations?
I am using the TensorFlow Object Detection API to train on my own custom dataset and am preparing annotations for it. I see from the config file of my pre-trained SSD Inception net that the image is resized to 300 x 300 during training. My doubt is whether the resize will change the position of my object relative to the annotation: the xmin, ymin, width and height of the bounding box would be different after resizing. Should I annotate the resized images (resize them myself before training), or the original ones that I give to training?
AI: My doubt is whether the resize would now change the position of my object according to annotation?
Yes, it will.
Should I annotate the resized images (resize them myself before training)?
No, you should annotate at the original size.
You solve this by applying the corresponding transformations to your bounding boxes as well. So if you resize your image, you rescale your bounding box accordingly. This allows you to extend to different image augmentations without redoing annotations for all of them.
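For example, a minimal sketch of rescaling a box given in (xmin, ymin, xmax, ymax) format when the image is resized to 300 x 300 (the coordinates below are made up for illustration):
def rescale_box(box, orig_w, orig_h, new_w=300, new_h=300):
    xmin, ymin, xmax, ymax = box
    sx, sy = new_w / orig_w, new_h / orig_h
    return (xmin * sx, ymin * sy, xmax * sx, ymax * sy)

# a 640x480 image resized to 300x300: the box is scaled by the same factors
box_resized = rescale_box((50, 80, 200, 220), orig_w=640, orig_h=480)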
I recommend you choose a library that has the ability to apply transformations to both images and their bounding boxes. I use albumentations, but there are others such as imgaug. |
H: Combining multiple neural networks with different activation functions
I have 3 neural networks, each with a different output activation function: sigmoid, tanh and softmax. I am planning to average their final predictions, but as we know these functions don't have the same output range.
P = (P1 + P2 + P3)/3
Where 0 < P1 < 1, -1 < P2 < 1, 0 < P3 < 1
Can I directly average the predictions, or do I need to perform a normalization so that all predictions fall into the same interval?
AI: As you are trying to average out the values, and given that the three have different domains, it makes sense to bring them all into the same domain before averaging. You can rescale P2 (tanh) to 0-1 and then average the values.
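A minimal sketch, assuming p1, p2 and p3 hold the raw outputs of the three networks (floats or NumPy arrays):
p2_rescaled = (p2 + 1.0) / 2.0         # map the tanh output from [-1, 1] to [0, 1]
p_avg = (p1 + p2_rescaled + p3) / 3.0  # now all three live on the same scale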
If you want to try another way, you can combine the three networks: take the three outputs p1, p2 and p3, feed them into some additional dense layers, and finally predict a single output. This way, instead of averaging, you bring in some nonlinearity, which can help in learning the task. |
H: Difference between Gensim word2vec and keras Embedding layer
I have used the gensim word2vec package and the Keras Embedding layer for various projects. Then I realized they seem to do the same thing: they both try to convert a word into a feature vector.
Am I understanding this properly? What exactly is the difference between these two methods?
Thanks!
AI: Yep, you're right! As you know, it's difficult for machine learning models to use natural language directly, so it helps to transform words into some meaningful numeric representation. This process is called word embedding, and finding word embeddings is the task of the keras Embedding layer.
Ideally, word embeddings will be semantically meaningful, so that relationships between words are preserved in the embedding space. Word2Vec is a particular "brand" of word embedding algorithm that seeks to embed words such that words often found in similar context are located near one another in the embedding space. The technical details are described in this paper.
The generic keras Embedding layer also creates word embeddings, but the mechanism is a bit different than Word2Vec. Like any other layer, it is parameterized by a set of weights. The weights are randomly-initialized, then updated during training using the back-propagation algorithm. So, the resultant word embeddings are guided by your loss function.
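To make the contrast concrete, here is a rough sketch of both (the toy corpus and dimensions are made up; gensim >= 4 uses vector_size, older versions use size):
# gensim Word2Vec: unsupervised, context-based embeddings
from gensim.models import Word2Vec
sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]
w2v = Word2Vec(sentences, vector_size=8, min_count=1)
cat_vector = w2v.wv["cat"]

# Keras Embedding: just another trainable layer, updated by back-propagation
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense
model = Sequential([
    Embedding(input_dim=1000, output_dim=8, input_length=5),
    Flatten(),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")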
To summarize, both Word2Vec and keras Embedding convert words (or word indices) to a hopefully meaningful numeric representation. Word2Vec is an unsupervised method that seeks to place words with similar context close together in the embedding space. Keras Embedding is a supervised method that finds custom embeddings while training your model. |
H: Get Logistic regression scores in CNN using Keras
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.regularizers import L1L2
batch_size = 128
num_classes = 10
epochs = 2
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(10, activation='softmax',
kernel_regularizer=L1L2(l1=0.0, l2=0.1)))
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
The above code uses logistic regression with the CNN in Keras. How do I get the score assigned to each image by the logistic regression function?
AI: The evaluate method returns the loss value & metric values for the model in test mode.
Instead, you should use
y_pred = model.predict(x_test, batch_size=batch_size)
As it generates output predictions for the input samples.
For more information, read the official Keras documentation. |
H: Valid actions in OpenAI Gym
Why don't the gym environments come with "valid actions"? The normal gym environment accepts as input any action, even if it's not even possible.
Is this a normal thing in reinforcement learning? Do the models really have to learn what valid actions are all the time? Would it not be much nicer to have a env.get_valid_actions() functions so that the model knows what actions are doable? Or is this somehow possible and I'm missing it?
AI: In general, if the agent is simply not able to take non-valid actions in a given environment (e.g. due to strict rules of a game, like chess), then it is standard practice to have the environment support that by providing some kind of function or filter for $\mathcal{A}(s)$, the set of actions available in state $s$.
However, it does seem like the basic Gym interface does not support it, and has no plans to support it.
It is still possible for you to write an environment that does provide this information within the Gym API using the env.step method, by returning it as part of the info dictionary:
next_state, reward, done, info = env.step(action)
The info return value can contain custom environment-specific data, so if you are writing an environment where the valid action set changes depending upon state, you can use this to communicate with your agent. The caveat is that both your environment and agent would be working to a convention you had invented in order to do this, and the same approach would not extend to other environments or be used by developers of other agents.
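For example, a toy sketch of such an environment (the "valid_actions" key and the rules here are invented for illustration; they are not part of the Gym API):
import gym
from gym import spaces

class ToyEnv(gym.Env):
    """Toy environment in which the set of valid actions depends on the state."""

    def __init__(self):
        self.action_space = spaces.Discrete(3)
        self.observation_space = spaces.Discrete(5)
        self.state = 0

    def _valid_actions(self):
        # made-up rule: action 2 is only allowed in even-numbered states
        return [0, 1, 2] if self.state % 2 == 0 else [0, 1]

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state = min(self.state + action, 4)
        reward = 1.0 if self.state == 4 else 0.0
        done = self.state == 4
        # communicate the currently valid actions to the agent through info
        return self.state, reward, done, {"valid_actions": self._valid_actions()}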
Alternatively, you can support calculating what the valid actions are in any other custom way that is not covered by the Gym API - a new method on the environment, a separate "rules engine" for a game etc. Both the agent and the environment could call that in order to perform their roles correctly.
The situation without an official API to do this may be subject to change, but from the linked Github issue it seems that Open AI developers consider a generic interface for this, that accounted for all the different kinds of action space, was more effort than it was worth. |
H: Dealing with input images of different shapes in PyTorch
I've started to work with a leaf classification dataset on Kaggle. All input images have different rectangular shapes. I want to transform the input into squares of a fixed size (say, 224x224) with a symmetric zero-padding either on top and bottom or on the left and right sides of the rectangle. Prior to that, I think that I need to rescale the image (some images in the dataset have shapes >1000). I've encountered the next two problems:
torchvision.transforms.Resize(size) rescales the image so that its smaller side is matched to size (size is a scalar). I want the opposite, the bigger side to be matched to size.
torchvision.transforms.Pad(padding) seemingly works only with some fixed padding, but this transform will not always output a square.
How would you resolve this problem? I'm aware of RandomSizedCrop, but I feel like some datasets (like this one) aren't good for this method (some of the input images are too oblong). Also, I heard that RandomSizedCrop shouldn't be
used for the test data loaders.
More generally, how do you deal with input images of different rectangular shapes in PyTorch? Any help is appreciated.
AI: Try this function that uses the cv2 resize function:
import cv2
import numpy as np
import matplotlib.pyplot as plt

def leaf_image(image_id, target_length=160):
    """
    `image_id` should be the index of the image in the images/ folder
    Return the image of a given id (1~1584) with the target size (target_length x target_length)
    """
    image_name = str(image_id) + '.jpg'
    leaf_img = plt.imread('images/' + image_name)  # Reading in the image
    leaf_img_width = leaf_img.shape[1]
    leaf_img_height = leaf_img.shape[0]
    #target_length = 160
    img_target = np.zeros((target_length, target_length), np.uint8)
    if leaf_img_width >= leaf_img_height:
        scale_img_width = target_length
        scale_img_height = int((float(scale_img_width)/leaf_img_width)*leaf_img_height)
        img_scaled = cv2.resize(leaf_img, (scale_img_width, scale_img_height), interpolation = cv2.INTER_AREA)
        copy_location = (target_length-scale_img_height)//2  # integer division so it can be used as an index
        img_target[copy_location:copy_location+scale_img_height,:] = img_scaled
    else:
        # leaf_img_width < leaf_img_height
        scale_img_height = target_length
        scale_img_width = int((float(scale_img_height)/leaf_img_height)*leaf_img_width)
        img_scaled = cv2.resize(leaf_img, (scale_img_width, scale_img_height), interpolation = cv2.INTER_AREA)
        copy_location = (target_length-scale_img_width)//2  # integer division so it can be used as an index
        img_target[:, copy_location:copy_location+scale_img_width] = img_scaled
    return img_target
# Test the leaf_image function
leaf_id = 343
leaf_img = leaf_image(leaf_id, target_length=160);
plt.imshow(leaf_img, cmap='gray'); plt.title('Leaf # '+str(leaf_id)); plt.axis('off'); plt.show()
Source: https://github.com/WenjinTao/Leaf-Classification--Kaggle/blob/master/Leaf_Classification_using_Machine_Learning.ipynb |
H: Why compressed image size is greater than original one in kmeans algorithm?
I have a png image as shown below.
And I use the k-means algorithm to compress the image by colour quantization; I compressed the image to 64 colours. The code is:
from sklearn.cluster import KMeans
import numpy as np

ncolor = 64
rows, cols = image.shape[0], image.shape[1]  # assuming `image` is the loaded RGB array
rimage = image.reshape(image.shape[0]*image.shape[1], 3)
kmeans = KMeans(n_clusters=ncolor, n_init=10, max_iter=200)
kmeans.fit(rimage)
centers = kmeans.cluster_centers_
labels = np.asarray(kmeans.labels_).reshape(rows, cols)
compressed_image = np.zeros((rows, cols, 3), dtype=np.uint8)
for i in range(rows):
    for j in range(cols):
        compressed_image[i, j, :] = centers[labels[i, j], :]
fig, ax = plt.subplots(1, 2, figsize=(16, 6),
subplot_kw=dict(xticks=[], yticks=[]))
ax[0].imshow(image)
ax[0].set_title('Original Image', size=16)
ax[1].imshow(compressed_image)
ax[1].set_title(f'{ncolor}-color Image', size=16);
io.imsave('compressed_tiger.png',labels);
The image above is the original image, and the result shown in the Jupyter notebook is:
I compressed the image to 64 colours, but the size of the saved file compressed_tiger.png (588KB) is larger than the original one (435KB). I don't understand why it becomes larger.
AI: You are saving in the same colour format: if the original is 24-bit RGB, you are currently saving in 24-bit RGB as well. You should convert to an 8-bit palette format (PNG-8), which uses 8 bits per pixel.
I have used pypng and PIL to get the following result
from PIL import Image
import png
s = []
for i in range(rows):
    s.append(tuple(labels[i].astype(int)))
palette = []
for i in range(ncolor):
    c = (centers[i][0].astype(int), centers[i][1].astype(int), centers[i][2].astype(int))
    palette.append(c)
w = png.Writer(width=cols, height=rows, palette=palette, bitdepth=8)
f = open('tiger8bits3.png', 'wb')
w.write(f, s)
Look at PNG formats: http://www.patrickhansen.com/2011/02/04/png-8-24-32-what/
The result is 269KB: |
H: Can I save only some VGG19's layers into a .H5 file?
I am training a deep-learning style transfer model with the pretrained-VGG19 CNN.
My aim is to use it in my Android app for personal purposes with Google Firebase Machine Learning Kit (which would host my .H5 model to make it usable by my Android app). The maximum .H5 model file size allowed by Machine Learning Kit is 8MB. However, when I save the whole VGG19 model, I end up with 80MB, so I can't use it.
Since only some layers of the VGG19 network are used in my style transfer program, is it possible to reduce the .H5's size by saving only those layers' weights, or something like that? Is there any other solution to my problem?
To save my VGG19 network as a .H5 file, I use the following Python command:
model.save('./style_transfer/st.h5', include_optimizer=False) , where model = vgg19.VGG19(input_tensor=input_tensor, weights='imagenet', include_top=False).
As you can see, I already don't include the optimizer in order to reduce the saved .H5's size.
AI: After creating the model, you can create another model from it as below (here keeping the layers up to index 8):
from keras.models import Model

model = Model(model.input, model.layers[8].output)
model.save('./style_transfer/st.h5')
You can also use post-training quantization techniques to reduce the size of the model for deployment on mobile/IoT devices. Please check the TensorFlow documentation here. |
H: can I use z-score normalization even if it doesn't make sense for my data to be negative?
I'm planning to use the z-score as a normalization method for a project, but I noticed that if I do that, I'll have data in the range [-1, 1], which is weird because I have data for which negative values don't make sense. Take speed or distance, for example: it doesn't make sense for speed to have a negative value after normalization! Is it logical to think like this, or am I wrong and it is perfectly fine to use the z-score even if the data will be negative and that doesn't make sense?
PS: I know velocity can be negative if we are talking about vectors, but I meant to say that I have discrete values for speed or distance, a.k.a. the length of something, which cannot be negative.
AI: I think it doesn't really matter that there can be negative values after as long as you rescale your data correctly at the end.
Think about what positive/negative values mean for z-score. This has nothing to do with whether your use case (for example speed) can realistically have negative values or not. With z-score positive values simply mean that the value is above the group mean while negative values can be interpreted as the opposite.
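A minimal sketch of that round trip with made-up numbers (standardize, then map back to the original scale):
import numpy as np

speed = np.array([10.0, 25.0, 40.0, 55.0])
mu, sigma = speed.mean(), speed.std()

z = (speed - mu) / sigma      # standardized values, some of them negative
speed_back = z * sigma + mu   # rescaling recovers the original non-negative values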
As long as you are able to rescale after your model (to get your data back to its original interpretation), using the z-score should be fine. |
H: Multilayer Perceptron: What is the value used to update the weights in the hidden layers?
As I understand it, for the output layer the error is used with the mean squared error function to update the weights.
For the hidden layers as well? Does that make sense?
AI: A Multilayer Perceptron changes your weights by an algorithm called "backpropagation". This algorithm uses gradient descent combined with a learning rate to change every weight in your MLP.
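As a concrete illustration, the generic gradient-descent update applied to every weight, hidden layers included, is $$w_{ij} \leftarrow w_{ij} - \eta \frac{\partial E}{\partial w_{ij}},$$ where $\eta$ is the learning rate and $E$ is the loss (e.g. the mean squared error); the chain rule is what provides $\partial E / \partial w_{ij}$ for hidden-layer weights as well as output-layer weights.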
Basically, backpropagation works by chaining together all the functions that are involved in calculating the output of one particular node (so chaining together all possible paths). Using the chain rule, a gradient is calculated that points in the direction of minimal error, and all weights are changed accordingly. |
H: Error rate of AdaBoost weak learner always bigger than 0.5?
As far as I understand, weak learners of AdaBoost should never yield an error rate > 0.5.
After training one, I only get error rates above 0.5. How is that even possible? The AdaBoost ensemble still gives quite good results, but all the learners' weights should be zero, so it should fail. Also, the trees get worse from iteration to iteration.
Is it possible that my threshold for the error rate is instead 0.9 (accuracy 0.1), as I have 10 classes and the literature mostly focuses on binary cases?
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
adaboost_tree = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=max_d),
n_estimators=estimators,
learning_rate=1, algorithm='SAMME')
adaboost_tree.fit(data_train, labels_train)
AI: As far as I understand, weak learners of AdaBoost should never yield a error rate > 0.5
This is only true for binary classification problems. The simplest possible classifier would pick the majority class, ensuring that accuracy is always >= 0.5
For a problem with $N$ classes, the worst error rate for a weak learner will be $\frac{N-1}{N}$.
Is it possible that my threshold for the error rate instead is 0.9 (accuracy 0.1), as I have 10 classes and literature mostly focuses on binary cases?
Yep, that's exactly right. Your weak learners will have, at worst, an error rate of 0.9. Unsurprisingly, most of your learners have an error rate better than random guessing, which explains the good performance of your ensemble.
Edit:
Also the trees get worse from iteration to iteration
This is a common feature in boosting algorithms. Each iteration, training samples are reweighted such that the mistakes from previous iterations are given more importance. The net effect is that the learners in later iterations are specialized for classifying the examples that earlier learners got wrong. Since later learners are specialized for a few hard-to-classify examples, they will have worse performance on the entire dataset. |
H: When would not normalizing input values have higher accuracy?
Right now I'm training a deep neural network for a binary classification problem, with a feature set of winrates. As such, each winrate is greater than or equal to 0 but smaller than 100.
I've been getting promising results without normalizing the input data, until I normalized it and got staggeringly worse accuracy.
The input feature is a 2d matrix of size 20, and the network has four layers with differing numbers of nodes in each layer. I'm using sgd optimizer and ReLU activation for the hidden layers, and the softmax activation function for the output layer.
The thing I'm wondering is why I'm getting better results with the neural network without the normalization? Is it because the optimal hyperparameters required for the network with the normalized input are different from when it is not normalized?
AI: The thing I'm wondering is why I'm getting better results with the neural network without the normalization?
From a theoretical standpoint, normalizing your input shouldn't affect accuracy, in the sense that a NN can converge for any range of input values.
The reason why we normalize our data when using NNs is that it offers some useful properties, mainly regarding convergence and convergence speed. You can read more details here.
Is it because the optimal hyperparameters required for the network with the normalized input are different from when it is not normalized?
Yes, I think that is your problem. Some hyperparameters might be optimized for the regular input and might not work well for the normalized one. An example of this could be a higher learning rate, which might work for a higher range of input values but not for a normalized one. |
H: When using Absolute Error in Gradient Descent, how to calculate the derivative?
What is the derivative of the Loss Function (Absolute Error) with respect to the feature weights that is used to update the weights?
Couldn't find anything specific about it anywhere.
AI: The gradient of the MAE is not continuous at $y_{pred} = y_{true}$, and therefore there is no well-defined (bounded, direction-independent) derivative at that point.
Elsewhere the derivative with respect to the prediction is $+1$ where $y_{pred} > y_{true}$ and $-1$ where $y_{pred} < y_{true}$.
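Written out with the chain rule (for a single sample, away from the kink), the derivative with respect to a weight $w$ is $$\frac{\partial L}{\partial w} = \operatorname{sign}(y_{pred} - y_{true}) \cdot \frac{\partial y_{pred}}{\partial w},$$ so for a linear model $y_{pred} = w^T x$ it is simply $\pm x$.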
Usually frameworks like TensorFlow, Keras, etc... use an approximate derivative for that point. |
H: Is it common to add noise to Time Series data before training a model
I once read about somebody who added noise to their time series before training a model. They didn't write why they did it though.
Is this common practice?
If it is, why do people do it, e.g. to prevent over-fitting?
AI: I'll go through your questions one by one:
I once read about somebody who added noise to their time series before training a model. They didn't write why they did it though.
No, noise is not adding any value to your dataset. The reason is that noise, by definition, cannot be learned by any statistical/ML model.
Is this common practice?
No it's not.
why do people do it ie. to prevent over-fitting?
It's not a technique that helps you prevent overfitting. There are several techniques to do that (ensemble modelling, cross-validation); many of them depend on the kind of model you want to implement (such as recurrent dropout for RNNs). |
H: How does XGBoost use softmax as an objective function?
I'm quite used to seeing functions like log-loss, RMSE, cross entropy as objective functions and it's easy to imagine why minimizing these would give us the best model. What's difficult to imagine is how XGBoost uses softmax, a function used to normalize the logits, as a cost function. As mentioned in the docs here.
How can a softmax function be minimized?
AI: It's not the softmax that is minimized, but the cross-entropy loss function, which is based on the softmax. Cross-entropy is calculated on a softmax output; that's why they are a standard pairing in ML. Tree-based classifiers find "cuts", or portions of the variables' space, in a way that minimizes the entropy of the dataset.
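For reference, with logits $z_1, \dots, z_K$ and a one-hot target $y$, the softmax output and the cross-entropy loss are $$p_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad L = -\sum_{i=1}^{K} y_i \log p_i,$$ and it is $L$, computed on the softmax output, that gets minimized.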
If you want to explore the relationship between softmax and crossentropy further, you can start with the nice explanations provided here. If you want to dig deeper, you can find a very detailed and technical explanation here. |
H: Where to get the Datascience Use cases for practice
I just started learning data science. I have gone through some courses on Coursera & Udemy, and now I want to practice what I have learned. What I want to know is where I can get use cases (linear regression & multiple linear regression) so that I can practice.
AI: Maybe you can start with the Kaggle datasets: https://www.kaggle.com/datasets
There are more than 22,000 datasets and they are very well documented, enough for hours and hours of practice. |
H: Combining text (NLP), numeric, and categorical data for a regression problem
I have a dataset
data = {
points: 3.765,
review: `Food was great, staff was friendly`,
country: 'Chile',
designation: 'random',
age: 20
}
I am looking for a way to use these features to build a model to predict points.
Description seems to hold a lot of information about points.
How do I feed this data into a model, and which model should I use?
Note I don't want to use word2vec (embeddings)
AI: For a simpler approach,
Remove stopwords, perform stemming and lemmatization, and create TF-IDF vectors or a bag-of-words representation of your review text (or both).
One-hot encoding the country is not ideal: if your dataset contains every country on earth, the one-hot vector would be mostly zeros, which increases your training and run-time complexity. Instead, you could calculate the distance of each country from a single reference point on earth (e.g. the equator, the South Pole or the North Pole) and add that as a feature instead of a one-hot vector.
Designation can be one-hot encoded if the number of unique designations is small; I believe there are only a handful of designations.
Age you can just pass as-is, since it's already numerical. |
H: Bagging vs Boosting, Bias vs Variance, Depth of trees
I understand the main principle of bagging and boosting for classification and regression trees. My doubts are about the optimization of the hyperparameters, especially the depth of the trees
First question: why are we supposed to use weak learners for boosting (high bias), whereas we use deep trees for bagging (high variance)?
- Honestly, I'm not sure about the second one, just heard it once and never seen any documentation about it.
Second question: why and how can it happen that grid search gives better results for gradient boosting with deeper trees than with weak learners (and, similarly, better results with shallow trees than with deeper trees in random forests)?
AI: why we are supposed to use weak learners for boosting (high bias) whereas we have to use deep trees for bagging (very high variance)
Clearly it wouldn't make sense to bag a bunch of shallow trees/weak learners. The average of many bad predictions will still be pretty bad. For many problems decision stumps (a tree with a single split node) will produce results close to random. Combining many random predictions will generally not produce good results.
On the other hand, the depth of the trees in boosting limits the interaction effects between features, e.g. if you have 3 levels, you can only approximate second-order effects. For many ("most") applications low-level interaction effects are the most important ones. Hastie et al. in ESL (pdf) suggest that trees with more than 6 levels rarely show improvements over shallower trees. Selecting trees deeper than necessary will only introduce unnecessary variance into the model!
That should also partly explain the second question. If there are strong and higher-order interaction-effects in the data, deeper trees can perform better. However, trees that are too deep will underperform by increasing variance without additional benefits. |
H: Best framework for recognizing a specific cartoon character's face?
I have a supply of images of a specific cartoon character's face. I have hours of video. I would like to automatically find the sections of the video in which this cartoon character appears.
https://github.com/ageitgey/face_recognition doesn't seem to work very well on cartoon characters (it fails to recognize any face in my images).
What is the state of the art on this? Is there an open-source library or framework that is good at this?
AI: You will most likely need to train a neural network to detect your particular cartoon characters. Although some parts of this are tedious, this type of task is well-documented and there are many user-friendly frameworks available. I would recommend reading the tensorflow object detection api tutorial.
The most difficult part will simply be collecting your data. You will need to sample video frames to train the neural network. Depending on how many characters are in your cartoon, the complexity and variability of the inputs, and network model that you choose, I suspect you will need to collect anywhere from 200-500 distinct samples for training.
After you collect your frames, you will need to annotate your data. "Annotation" is the process by which you manually draw bounding boxes around your characters so that the neural network knows what to look for. This process is described in further detail in the above link. Fortunately, you do not need to program the annotation tool yourself; the Tensorflow tutorial instructs you to install LabelImg, which provides a graphical interface for you to label your selected frames.
After creating your dataset, you can proceed to train your network using the Tensorflow instructions.
If you have a lot of video to inference, I would recommend sampling your frames in relatively large intervals such as 5 or even 10 (depending on the fps of the video). You can then inference 5 to 10 times faster with reasonable accuracy.
For example, suppose your network inferences frames $ x_t $ through $ x_{t+10} $ in 5 frame intervals. If both $ x_t $ and $ x_{t+5} $ have at least one bounding box of reasonable size, then frames between $ x_t $ and $ x_{t+5} $ most likely have characters as well. If frame $ x_{t+10} $ does not have a bounding box of reasonable size, then we might assume that the character(s) leave the frames at about $ x_{t+7} $ or $ x_{t+8} $. This method will let you control the inherent speed/accuracy trade-off. |
H: What to do after GridSearchCV()?
I happily created my first NN and performed hyperparameter optimization through GridSearchCV. I just don't know what to do next.
Do I have to fit it again with the best parameters GridSearchCV() revealed? Is there an elegant way to do so?
Otherwise, how to proceed?
def create_model(...
    model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['accuracy'])
    return model
model = KerasRegressor(build_fn=create_model, verbose=0)
> hypparas
{'batch_size': [2, 6], 'optimizer': ['Adam', 'sgd'], 'opt_par': [0.5, 0.8]}
grid_obj = GridSearchCV(estimator=model
, param_distributions=hypparas
, n_jobs=1
, n_iter=20
, cv=3
)
grid_result = grid_obj.fit(X_train1, y_train1, callbacks = [time_callback])
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_), "\n")
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
Best: -0.941568 using {'optimizer': 'Adam', 'opt_par': 0.8, 'batch_size': 2}
-1.725617 (0.620383) with: {'optimizer': 'Adam', 'opt_par': 0.5, 'batch_size': 2}
-1.595137 (0.224487) with: {'optimizer': 'sgd', 'opt_par': 0.5, 'batch_size': 2}
-0.941568 (0.149151) with: {'optimizer': 'Adam', 'opt_par': 0.8, 'batch_size': 2}
-1.338372 (0.523434) with: {'optimizer': 'sgd', 'opt_par': 0.8, 'batch_size': 2}
-1.094907 (0.121018) with: {'optimizer': 'Adam', 'opt_par': 0.5, 'batch_size': 6}
-1.588476 (0.569475) with: {'optimizer': 'sgd', 'opt_par': 0.5, 'batch_size': 6}
-1.443133 (0.342028) with: {'optimizer': 'Adam', 'opt_par': 0.8, 'batch_size': 6}
-1.275414 (0.331939) with: {'optimizer': 'sgd', 'opt_par': 0.8, 'batch_size': 6}
AI: You can use grid_obj.predict(X) or grid_obj.best_estimator_.predict(X) to use the tuned estimator. However, I suggest you take this best_estimator_ and train it again on the full set of data, because in GridSearchCV you train with K-1 folds and lose 1 fold for testing. More data, better estimates, right? |
H: Validation generator in Autoencoder returning NaN
I am trying to build a fairly simple autoencoder using Keras on the OpenImages dataset. Here is the architecture of the ae:
Layer (type) Output Shape Param #
=================================================================
conv3d_1 (SeparableConv2D) (None, 64, 64, 64) 283
_________________________________________________________________
max_pool_1 (MaxPooling2D) (None, 32, 32, 64) 0
_________________________________________________________________
batch_norm_1 (BatchNormaliza (None, 32, 32, 64) 256
_________________________________________________________________
sep_conv2d_2 (SeparableConv2 (None, 32, 32, 32) 2656
_________________________________________________________________
max_pool_2 (MaxPooling2D) (None, 16, 16, 32) 0
_________________________________________________________________
batch_norm_2 (BatchNormaliza (None, 16, 16, 32) 128
_________________________________________________________________
sep_conv2d_3 (SeparableConv2 (None, 16, 16, 32) 1344
_________________________________________________________________
max_pool_3 (MaxPooling2D) (None, 8, 8, 32) 0
_________________________________________________________________
batch_norm_3 (BatchNormaliza (None, 8, 8, 32) 128
_________________________________________________________________
flatten (Flatten) (None, 2048) 0
_________________________________________________________________
bottleneck (Dense) (None, 64) 131136
_________________________________________________________________
reshape (Reshape) (None, 8, 8, 1) 0
_________________________________________________________________
conv_2d_transpose_1 (Conv2DT (None, 16, 16, 32) 320
_________________________________________________________________
batch_norm_4 (BatchNormaliza (None, 16, 16, 32) 128
_________________________________________________________________
conv_2d_transpose_2 (Conv2DT (None, 32, 32, 32) 9248
_________________________________________________________________
batch_norm_5 (BatchNormaliza (None, 32, 32, 32) 128
_________________________________________________________________
conv_2d_transpose_3 (Conv2DT (None, 64, 64, 64) 18496
_________________________________________________________________
batch_norm_6 (BatchNormaliza (None, 64, 64, 64) 256
_________________________________________________________________
sep_conv2d_4 (SeparableConv2 (None, 64, 64, 3) 771
=================================================================
Total params: 165,278
Trainable params: 164,766
Non-trainable params: 512
I am then defining generators that flow from a directory where I have downloaded the images:
train_data_dir = 'open_images/train/'
validation_data_dir = 'open_images/validation/'
batch_size = 128
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(64, 64),
batch_size=batch_size,
class_mode=None)
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(64, 64),
batch_size=batch_size,
class_mode=None)
And here is the model training step:
def fixed_generator(generator):
for batch in generator:
yield (batch, batch)
num_epochs = 10
steps_per_epoch = 120
autoencoder.fit_generator(
fixed_generator(train_generator),
steps_per_epoch=steps_per_epoch,
epochs=num_epochs,
validation_data=fixed_generator(validation_generator),
validation_steps=100
)
When I run this code it seems like something is going wrong with the validation step because it only returns NaN:
Epoch 1/10
120/120 [==============================] - 241s 2s/step - loss: 0.0468 - val_loss: nan
Epoch 2/10
120/120 [==============================] - 239s 2s/step - loss: 0.0278 - val_loss: nan
Epoch 3/10
120/120 [==============================] - 240s 2s/step - loss: 0.0248 - val_loss: nan
Epoch 4/10
120/120 [==============================] - 241s 2s/step - loss: 0.0234 - val_loss: nan
Epoch 5/10
120/120 [==============================] - 240s 2s/step - loss: 0.0226 - val_loss: nan
Epoch 6/10
120/120 [==============================] - 241s 2s/step - loss: 0.0221 - val_loss: nan
Epoch 7/10
120/120 [==============================] - 242s 2s/step - loss: 0.0217 - val_loss: nan
Epoch 8/10
120/120 [==============================] - 240s 2s/step - loss: 0.0213 - val_loss: nan
Epoch 9/10
120/120 [==============================] - 240s 2s/step - loss: 0.0210 - val_loss: nan
Epoch 10/10
120/120 [==============================] - 242s 2s/step - loss: 0.0207 - val_loss: nan
Also when the validation generator code is run it prints:
Found 0 images belonging to 0 classes.
There are definitely images in that directory though. Any idea what might be going on?
Edit: If you want to be convinced there are images in the folder...
ubuntu@ip-172-16-1-35:~$ ls -l open_images/validation/ | head
total 12661044
-rw-r--r-- 1 ubuntu ubuntu 290621 Jul 10 2018 0001eeaf4aed83f9.jpg
-rw-r--r-- 1 ubuntu ubuntu 375363 Jul 10 2018 0004886b7d043cfd.jpg
-rw-r--r-- 1 ubuntu ubuntu 462817 Jul 10 2018 000595fe6fee6369.jpg
-rw-r--r-- 1 ubuntu ubuntu 302326 Jul 10 2018 00075905539074f2.jpg
-rw-r--r-- 1 ubuntu ubuntu 970275 Jul 10 2018 0007cebe1b2ba653.jpg
-rw-r--r-- 1 ubuntu ubuntu 614095 Jul 10 2018 0007d6cf88afaa4a.jpg
-rw-r--r-- 1 ubuntu ubuntu 415082 Jul 10 2018 0008e425fb49a2bf.jpg
-rw-r--r-- 1 ubuntu ubuntu 359851 Jul 10 2018 0009bad4d8539bb4.jpg
-rw-r--r-- 1 ubuntu ubuntu 186452 Jul 10 2018 000a045a0715d64d.jpg
AI: flow_from_directory expects your images to sit inside (at least one) subdirectory of the directory you pass to it, not directly in that directory. Since your validation images are placed directly in open_images/validation/, Keras reports "Found 0 images belonging to 0 classes", the validation generator yields nothing, and the validation loss comes out as NaN. Please arrange your data in that folder format to work with flow_from_directory.
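A quick sketch of one way to fix it: move the images into a (dummy) class subfolder, which is all flow_from_directory needs when class_mode=None (do the same for the train directory):
import os
import shutil

# flow_from_directory expects   open_images/validation/<some_class>/xxx.jpg
# For an autoencoder (class_mode=None) the subfolder name is arbitrary.
src = 'open_images/validation/'
dst = os.path.join(src, 'images')
os.makedirs(dst, exist_ok=True)
for fname in os.listdir(src):
    if fname.lower().endswith('.jpg'):
        shutil.move(os.path.join(src, fname), os.path.join(dst, fname))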
H: Medical Image Analysis
What are some good starting points for learning medical image analysis and combining it with deep learning?
I would like to analyze images with bone cancers but not sure what is proper way to preprocess them and prepare for the model.
AI: Check out this blog post. There you find a summary of two fairly new papers from the field, which might start you off. One about semantic segmentation, the other about a classification problem. |
H: Numpy arithmetic operation between two columns
Below is the numpy array. I need to perform two operations on this array.
Add one column with value [column 1] - [column 3].
Add another column with value [column 1] - [previous value of column1].
I can do this using normal list operations, but is it possible to use numpy or pandas? If so, how can it be done?
Input data:
[['78' '3412' '98' '3441']
['106' '3412' '127' '3434']
['139' '3411' '160' '3434']
['170' '3411' '191' '3442']
]
AI: These types of operations can easily be done using both numpy or pandas. However, in this case I would recommend pandas since it is more intuitive. Using the example array we can create a pandas dataframe:
arr = np.array([[78, 3412, 98, 3441], [106, 3412, 127, 3434], [139, 3411, 160, 3434], [170, 3411, 191, 3442]])
df = pd.DataFrame(arr, columns=['a', 'b', 'c', 'd'])
The two new columns can now be added as follows:
df['e'] = df['a'] - df['c']
df['f'] = df['a'].diff(1)
Directly using numpy, one possible way would be to do:
arr = np.c_[ arr, arr[:,0] - arr[:,2], np.append(np.NaN, arr[1:, 0] - arr[:-1, 0]) ] |
H: Difference between validation and prediction
As a follow-up to Validate via predict() or via fit()? I wonder about the difference between validation and prediction.
To keep it simple, I will refer to train, val and test:
Training data: Train model, especially find hyperparameters through GridSearchCV or similar
Validation data: Validate these hyperparameters on "new" data?
Test data: Make prediction on unseen data
My status so far:
Split data: 60 % Training - 20 % Validation - 20 % Test
Find hyperparameters on training data
Fit again with best parameters on training data by using .fit(X_train, y_train, validation_data=(X_val, y_val)).
Check model on unseen data through .predict() or .evaluate().
Is this correct? Though using GridSearchCV do I have to split train manually into training and validation?
AI: In the GridSearchCV function there is a parameter, cv, that sets how many folds are used for cross-validation. If you're performing cross-validation on the train set, all you need is a train/test split, since the hyperparameters are tuned without ever seeing the test set.
If you want to use your own validation set, see the first link below. The link below that also has a good description of how K-Fold CV works in GridSearchCV.
Finally, grid search optimization is pretty slow and there are more advanced methods for hyperparameter tuning now, in case performance is an issue.
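As a minimal sketch of that train/test split plus GridSearchCV workflow (the model and parameter grid are hypothetical):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# cv=5: GridSearchCV splits the *training* data into 5 folds internally
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={'n_estimators': [50, 100], 'max_depth': [3, None]},
                    cv=5)
grid.fit(X_train, y_train)

# The best estimator is refit on the whole training set; evaluate it once on the held-out test set
print(grid.best_params_, grid.score(X_test, y_test))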
See this previous answer:
Difference between test and validation set
Preselected Validation Set
Gridsearch CV and KFold CV |
H: Finding Criminal Name in news?
We have news URLS, which we want to classify into crimes or non-crimes and further identify criminals by using NERs.
For creating a model that identifies criminals, we tried spaCy, but it gave all the names: lawyers' names, the president, the criminal, etc.
Can anyone help with how to get only the criminal's name, not all these irrelevant names?
I am just a beginner trying things, any help is appreciated
Thanks in Advance
AI: As mentioned by @Edmund you can use a public library, but where is the fun in that?
My suggestion:
Try to create a set of words which are negative in sentiment. Words like guilty, murdered, snatch etc...
Now from NER you are already getting the names. Find the word distance of these names to the negative words; the name with the least distance will be the culprit.
To build a set of negative words, get all crime news and from that pick the frequently occurring words. Some manual cleaning might be required.
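A rough sketch of that distance heuristic (assuming spaCy with the small English model installed; the cue-word list is illustrative):
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative set of crime-related cue words; extend it from your own corpus
negative_words = {"guilty", "murder", "arrest", "rob", "stab", "snatch"}

def closest_person_to_crime_words(text):
    doc = nlp(text)
    persons = [ent for ent in doc.ents if ent.label_ == "PERSON"]
    cue_positions = [tok.i for tok in doc if tok.lemma_.lower() in negative_words]
    if not persons or not cue_positions:
        return None
    # Score each person by the smallest token distance to any cue word
    return min(persons, key=lambda ent: min(abs(ent.start - pos) for pos in cue_positions)).text

print(closest_person_to_crime_words(
    "John Doe was arrested after the robbery; lawyer Jane Smith declined to comment."))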
H: Transfer learning VGG16 does not work as expected. (Detect tacos as hamburgers)
I am new to this field of machine learning, and to test myself I wanted to do a simple project: create a CNN capable of recognizing hamburger images. As I do not have the ability to collect more than 10,000 images of hamburgers, I have used an existing model to train what I need: VGG16.
This is the notebook I am using to do transfer learning. I am training the model to detect cats and hamburgers.
https://github.com/EsteveSegura/detecting-food/blob/master/src/BurgerCNN.ipynb
If the notebook does not load, the entire code is also here
https://gist.github.com/EsteveSegura/2be156ba5431fc42fb8ac13eb0506c82
Train
Cat: 1024 imgs
Burger: 1097 imgs
Validation
Cat: 416 imgs
Burger: 326 imgs
When I try to predict photos of hamburgers, it usually gives a good result... but it also classifies photos of tacos as burgers (99% accuracy). Is it an overfitting problem? Do I need more images? Am I doing the transfer learning in a correct way?
My goal will be to detect if there is a hamburger in the photo or not.
One of the recommendations they have given me is to train a third class with completely random objects that do not repeat... Can this help? If so, how?
This is the log of my training
AI: You are almost there, since you already mentioned adding a third class of random photos. Generally, when you train a neural network, it assumes that the training data covers the entire domain of images it will ever see. In your case each image can be either a cat or a burger. You have not shown random images to the network, so when an image that is neither a cat nor a burger comes in, the network will still try its best to predict it as either cat or burger. Instead, I would suggest you train the network as burger vs. not-burger. In the not-burger class, put all possible kinds of images that are not burgers; you can use the CIFAR-100 data to get images of many different classes. Now your model will learn to differentiate between burger and not-burger properly. Obviously you can still find some cases where misclassification happens, and the reason will be that you have not covered such images in training.
Also, play a bit with the softmax threshold to get high precision and recall.
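As a rough sketch of that thresholding idea (the threshold value is a placeholder to tune on a validation set that also contains "neither" images):
import numpy as np

# probs: softmax output of shape (n_samples, n_classes), e.g. from model.predict(images)
THRESHOLD = 0.9   # hypothetical value, to be tuned

def predict_with_rejection(probs, class_names=("burger", "cat")):
    labels = []
    for p in probs:
        idx = int(np.argmax(p))
        labels.append(class_names[idx] if p[idx] >= THRESHOLD else "unknown")
    return labels

print(predict_with_rejection(np.array([[0.97, 0.03], [0.55, 0.45]])))  # -> ['burger', 'unknown']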
H: How prevalent is `C/C++` in machine learning development?
I am currently a data scientist mostly doing NLP, and I do most of my work in Python. Since I didn't get a CS degree in undergrad, I've been limited to very high-level languages: Java, Python, and R. I somehow even took Data Structures and Algorithms while avoiding C or C++.
I'm intending to go to graduate school to study more Natural Language Processing, and I'm wondering how much C/C++ I need to know. Deep-learning frameworks like PyTorch or Tensorflow are written in C++, and CUDA is only available in C. I'm not going to be writing Cython libraries, but I would like to do research and build new models (i.e. like "inventing" CNN's, seq2seq models, transformers).
I don't know how much C/C++ is used, and I'm unsure if it's worth learning the language-specific complexities that may be channeled into learning something else; hopefully somebody can let me know how prevalent the use of C/C++ is?
AI: Machine learning is inherently data intensive, and typical ML algorithms are massively data-parallel. Therefore, even when developing new algorithms, high-level mathy languages (like Python, R, Octave) can be reasonably fast if you are willing to describe your algorithm in terms of standard operations on matrices and vectors.
On the other hand, for deeper exploration of fundamental concepts it can be more interesting to treat individual components as objects for which you want to conceptualize and visualize their internal state and interactions. This is a case where C++ may shine. Using C++, of course, means that a compiler will attempt to optimize your execution speed. Additionally, it opens the door to straightforward multi-core execution with OpenMP (or other available threading approaches).
C++ is a high level language -- not inherently more verbose or tedious than Python for algorithm development. The biggest challenges for working with C++ are:
A more anarchic library ecosystem means a bigger effort for choosing and integrating existing components.
Less stable language rules (or the interpretation thereof) means that something you create today might not compile a few years down the road (due to compiler upgrades).
Consider, also, that TensorFlow documentation identifies some benefits of using C++ over Python for certain low-level cases. See TensorFlow: Create an op.
Low-level coding for GPU acceleration is an entirely different can of worms with very limited language options. This is not something to be concerned about until after you have a well-defined custom algorithm that you want to super-optimize. More likely, you would be better off using a framework (like TensorFlow) to handle GPU interactions for you.
For exploratory visualization purposes, don't discount the interactive power of JavaScript, which is also comparatively fast:
A Neural Network Playground
TensorFlow.js demos
ConvNetJS: Deep Learning in your browser |
H: Why is word prediction an obsession in Natural Language Processing?
I have heard how great BERT is at masked word prediction, i.e. predicting a missing word from a sentence.
In a Medium post about BERT, it says:
The basic task of a language model is to predict words in a blank, or it predicts the probability that a word will occur in that particular context. Let’s take another example:
“FC Barcelona is a _____ club”
Indeed, I recently heard about SpanBERT, which is "designed to better represent and predict spans of text".
What I do not understand is: why?
I cannot think of any common reason that a human would need to do this task, let alone why it would need to be automated.
This does not even seem to be a task where it is particularly easy to evaluate the success of a model. For example,
My ___ is cold
This could reasonably be a number of possible words. How can BERT be expected to get this right, and how can humans or another algorithm be expected to evaluate whether "soup" is a better answer than "coffee"?
Clearly there are a lot of smart people who think that this is important, so I accept my lack of understanding is likely based on my own ignorance. Is it that this task itself is not important, but it's a proxy for ability at other tasks?
What am I missing?
AI: The first line of the BERT abstract is
We introduce a new language representation model called BERT.
The key phrase here is "language representation model". The purpose of BERT and other natural language processing models like Word2Vec is to provide a vector representation of words, so that the vectors can be used as input to neural networks for other tasks.
There are two concepts to grasp about this field; vector representations of words and transfer learning. You can find a wealth of information about either of these topics online, but I will give a short summary.
How can BERT be expected to get this right, and how can humans or another algorithm be expected to evaluate whether "soup" is a better answer than "coffee"?
This ambiguity is the strength of word prediction, not a weakness. In order for language to be fed into a neural network, words somehow have to be converted into numbers. One way would be a simple categorical embedding, where the first word 'a' gets mapped to 1, the second word 'aardvark' gets mapped to 2, and so on. But in this representation, words of similar meaning will not be mapped to similar numbers. As you've said, "soup" and "coffee" have similar meanings compared to all English words (they are both nouns, liquids, types of food/drink normally served hot, and therefore can both me valid predictions for the missing word), so wouldn't it be nice if their numerical representations were also similar to each other?
This is the idea of vector representation. Instead of mapping each word to a single number, each word is mapped to a vector of hundreds of numbers, and words of similar meanings will be mapped to similar vectors.
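As a toy numeric illustration of that idea (made-up 4-dimensional vectors; real models use hundreds of dimensions):
import numpy as np

vectors = {
    "soup":   np.array([0.9, 0.8, 0.1, 0.0]),
    "coffee": np.array([0.8, 0.9, 0.2, 0.1]),
    "car":    np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors["soup"], vectors["coffee"]))  # high: similar meanings, similar vectors
print(cosine(vectors["soup"], vectors["car"]))     # low: unrelated words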
The second concept is transfer learning. In many situations, there is only a small amount of data for the task that you want to perform, but there is a large amount of data for a related, but less important task. The idea of transfer learning is to train a neural network on the less important task, and to apply the information that was learned to the other task that you actually care about.
As stated in the second half of the BERT abstract,
...the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
To summarize, the answer to your question is that people DON'T care about the masked word prediction task on its own. The strength of this task is that there is a huge amount of data readily and freely available to train on (BERT used the entirety of wikipedia, with randomly chosen masks), and the task is related to other natural language processing tasks that require interpretations of the meaning of words. BERT and other language representation models learn vector embeddings of words, and through transfer learning this information is passed to whatever other downstream task that you actually care about. |
H: Explaining feature_importances_ in Scikit Learn RandomForestRegressor
For a project, I used the feature_importances_ attribute from the RandomForestRegressor. Everything works well, but I don't know how to explain why one feature is more important than another. I mean, I know that the higher the score, the higher the importance, but I don't understand how it is calculated.
For example, if a variable has a score of 0.35, what does that mean?
I would appreciate if someone could explain me how it works!
Thanks!
AI: scikit-learn's RandomForestRegressor feature importance is computed in each tree composing the forest. You can find the source code here (starting at line 1053).
What it does is, for each node in the tree where the split is made on the feature, it subtracts each child node's (left and right) impurity value from the parent node's impurity value. If the impurity decreases a lot (meaning the feature performs an efficient split), it basically gives a high score. Of course, all of that is weighted by how many samples the split actually affects: a split separating just two individuals may give a high impurity decrease, but it is far less meaningful than one separating two large populations.
Once the feature importance has been determined for each tree, it is summed up and normalized so that the feature_importances_ vector sums up to 1.
It might introduce some biases; I guess that should be the case if variables are not scaled, for instance. However, it is quite easy to compute (you just have to read impurity values from the tree), so I guess this is why it is provided by default. But the method is not unequivocal; there are other methods out there that you can implement more or less manually:
shuffling a feature's values in the dataset
reverting the result of each test based on the feature
... and probably a few other approaches. All those will give you other scores that may be of help. |
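For example, the first alternative (shuffling a feature's values) is available out of the box as permutation importance; a sketch assuming scikit-learn 0.22+:
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print(model.feature_importances_)          # impurity-based scores, summing to 1

# Shuffle one feature at a time and measure how much the score drops
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(result.importances_mean)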
H: Extracting MM-YYYY from python date and creating a new column with the same
I want to extract the month and year from one of my date columns in the dataset and create a new column in the data frame with the new MM-YYYY format.
My current solution is working fine, but it's way too long. I am looking for an efficient way to do this.
A data point looks like this: 2017-07-26, it's in datetime64[ns] format, and I want the output to look like this: Jul-2017
AI: import datetime
df1['Month'] = df1['booking_date'].apply(lambda x: datetime.datetime.strftime(x, "%b-%Y"))
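As a side note, if booking_date is already a datetime64 column, pandas can do this directly through the .dt accessor ("%b" gives the abbreviated month name, e.g. Jul-2017), avoiding the Python-level apply:
df1['Month'] = df1['booking_date'].dt.strftime("%b-%Y")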
H: Suggestion for stacked modelling in machine learning
I have built several models on the training dataset and I am not happy with the results, so I wish to combine them all and generate a new model. Here is my idea: since I already have the results of the existing models, I would like to create a new dataset with the existing models' predictions as separate features on top of the original feature set, apply clustering to filter some data in the original dataset, and then train the same models on it and get the result. Would that be called stacked modelling?
AI: Stacking takes predictions from diverse and shallow or weak models on a dataset.
It stacks the meta-features (meta-features = predictions) as columns, and usually a linear meta-model (like Linear Regression) is fitted on that dataset of meta-features. Think of it as letting multiple models, each with its own prediction, decide what the best value is for each datapoint. Mean across all models? Maybe mean across just two? The meta-model decides.
Your approach of using meta-features together with the original features resembles Boosting, which takes each datapoint's residual (difference between the true value and the prediction) and uses it as a feature to close the gap iteration by iteration.
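A minimal sketch of plain stacking with scikit-learn (0.22+), on toy data:
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=10, random_state=0)

stack = StackingRegressor(
    estimators=[('rf', RandomForestRegressor(n_estimators=100, random_state=0)),
                ('knn', KNeighborsRegressor())],
    final_estimator=LinearRegression(),   # linear meta-model fitted on the meta-features
    cv=5)                                 # out-of-fold predictions are used as meta-features
stack.fit(X, y)
print(stack.score(X, y))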
H: What is the meaning of "probability distribution of p(x)" of something uncountable?
I'm studying VAEs and I am new to both neural networks and statistics.
After some research, I could understand the rough concept of a VAE.
But what confuses me is the meaning of the probability distribution p(x) itself.
When x is image data, what is the meaning of the "probability distribution of the image"? And what is the "probability distribution of the latent space"?
When I learned about probability distributions in school, the x in p(x) was always something countable (the value of a die, the number of apples, ...), and so I could get some value like p(x=1).
But I couldn't understand the meaning of p('image data') or p('latent space').
Although many websites explain the concept and logical flow of VAEs magnificently, I'm stuck because of my lack of knowledge.
Can anyone help me?
AI: Let me preface this answer by saying I am not an expert on variational autoencoders, but I think your conceptual gaps don't have anything to do with autoencoders.
First, it is possible to specify a probability distribution over categorical outcomes or continuous outcomes. For example, a sample from a normal distribution is a continuous outcome, and the normal distribution describes the distribution of outcomes (or, equivalently, the relative likelihood of outcomes with various values). When you sample from a normal distribution, the probability of getting any particular numerical value (like 0, or 3.14) is infinitely small. Because of this, people typically talk about the probability density of a continuous distribution -- which can be interpreted as the likelihood of a sample being from a given (small) region. The probability value is called a probability density because it represents probability mass divided by space (or volume).
Second, it is common to represent images as a vector of numbers. For a grayscale image, this might be 1 numeric value per pixel (it's slightly more complicated for color images). You can think of the vector representation of an image as a point in space. If you have a bunch of images, you can talk about the density of images in this space. Some regions of space have a lot of 'image points', and some have very few. You can approximate this density by fitting a high-dimensional probability distribution to the image points in your dataset -- like fitting a smooth curve to a binned histogram of image counts. Because of this, each image maps to a specific value for the fitted probability density. That means you can talk about p(image) -- it really means, if we look 'near' the vector representation of this image in our vector space, how many images are there, relatively speaking? Some images are in high-density regions and some are in low-density regions, which will reflected as high and low values for p(image). |
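As a toy numeric sketch of "density of images in vector space" (using a kernel density estimate on tiny fake 4-pixel images):
import numpy as np
from sklearn.neighbors import KernelDensity

# Toy "images": 100 samples of 4-pixel grayscale images, flattened to vectors
rng = np.random.default_rng(0)
images = rng.random((100, 4))

kde = KernelDensity(bandwidth=0.2).fit(images)
log_density = kde.score_samples(images[:3])   # log p(image) for the first 3 images
print(np.exp(log_density))                    # higher value = image lies in a denser region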
H: How to print nullity correlation matrix
I have a training set which has 400 features, and most of them have null values.
I tried to draw the heatmap of the nullity correlation matrix by means of Python and missingno, but the heatmap is unreadable due to the high number of features.
How can I print the nullity correlation matrix, instead of drawing it?
AI: Using pandas, the nullity correlation matrix seems to be obtained by df.isnull().corr() (this is how it is done in missingno), and this makes sense.
The missingno package also states, in the heatmap function documentation, that for large datasets the dendrogram view is better. However, it does not tell me if "large" means many features or many entries.
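A minimal sketch of printing the full matrix (df stands for your training DataFrame; a tiny toy frame is built here just so the snippet runs):
import pandas as pd

df = pd.DataFrame({'a': [1, None, 3], 'b': [None, 2, 3], 'c': [1, 2, None]})   # toy stand-in

nullity_corr = df.isnull().corr()
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    print(nullity_corr)                         # print the full matrix instead of a preview
nullity_corr.to_csv('nullity_correlation.csv')  # or dump it to disk for inspection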
H: Why does removal of some features improve the performance of random forests on some occasions?
I computed feature importances for a random forest model. I removed the bottom 4 features out of 17. The model performance actually improved. Shouldn't the performance degrade after removal of some features, given that some data has been lost? What are some reasons that could explain the performance improvement?
AI: A basic decision tree is pruned to reduce the likelihood of over-fitting to the data and so help to generalise. Random forests don't usually require pruning because each individual tree is trained on a random subset of the features and when trees are combined, there is little correlation between them, reducing the risk of over-fitting and building dependencies between trees.
There could be a few reasons why you get this unexpected improved performance, mostly depending on how you trained the random forest. If you did any of the following, you potentially allowed overfitting to creep in:
a small number of random trees was used
trees with high strength were used; meaning very deep, learning idiosyncrasies of the training set
correlation between your features
and so removing features, you have allowed your model to generalise slightly more and so improve its performance.
It might be a good idea to remove any features that are highly correlated, e.g. if two features have a pairwise correlation of >0.5, simply remove one of them. This would essentially be what you did (removing 4 features), but in a more selective manner.
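A minimal sketch of that correlation-based filtering (a toy feature frame is built here just so it runs):
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((100, 5)), columns=list('abcde'))    # toy feature frame
df['f'] = df['a'] * 0.9 + rng.random(100) * 0.1                   # deliberately correlated with 'a'

corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1)) # keep each pair only once
to_drop = [col for col in upper.columns if (upper[col] > 0.5).any()]
print(to_drop)                                                    # 'f' gets flagged
df_reduced = df.drop(columns=to_drop)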
Overfitting in Random Forests
You can read a bit more about the reasons above on Wikipedia or in some papers about random forests that discuss issues:
Random forest paper by Leo Breiman - states in the conclusion section:
Because of the Law of Large Numbers, they do not overfit.
but also mentions the requirement of appropriate levels of randomness.
Elements of Statistical Learning by Hastie et. al (specifically section 15.3.4 Random Forests and Overfitting) gives more insight, referring to the increase of the number of data samples taken from your training set:
at the limit, the average of fully grown trees can result in too rich a model, and incur unnecessary variance
So there is a trade-off perhaps, between the number of features you have, the number of trees used, and their depths. Work has been done to control the depth of trees, with some success - I refer you to Hastie et. al for more details and references.
Here is an image from the book, which shows results of a regression experiment controlling the depth of trees via minimum Node Size. So requiring larger nodes effectively restricts your decision trees from being grown too far, and therefore reducing overfitting.
As a side note, section 15.3.2 addresses variable importance, which might interest you.
I assume that you trained ("grew") your random forest on some training data and tested the performance on some hold-out data, so the performance you speak of is valid. |
H: AttributeError: 'str' object has no attribute 'month' Process finished with exit code 1
Python code
import pandas as pd
import numpy as np
import os

RD = pd.read_csv("C:/Users/acharbha/Desktop/fullbackup_success/python/raw_Data_success_Rate.csv")

NEW = {"Cell": RD['Cell'], "LastFullResult": RD["LastFullResult"],
       "LastFullStartTime": RD["LastFullStartTime"], "status code": RD["status code"]}
NEW = pd.DataFrame(NEW)
date = pd.datetime.now()
NEW_LastFullStartTime = []

for date1 in NEW.LastFullStartTime:
    if isinstance(date1.month, int):
        if date1.month == date.month:
            NEW_LastFullStartTime.append("True")
        else:
            NEW_LastFullStartTime.append("False")
    else:
        NEW_LastFullStartTime.append("NaN")

NEW["NEW_LastFullStartTime"] = pd.Series(NEW_LastFullStartTime)
NEW = NEW.drop("LastFullStartTime", axis=1)

Oct_Full_ran_failed_not_completed_yet = NEW[(NEW["LastFullResult"] == "Failure") &
                                            (NEW["NEW_LastFullStartTime"] == "True") &
                                            (NEW["status code"] >= 1)]
Oct_Full_not_ran_yet = NEW[(NEW["NEW_LastFullStartTime"] == "False") | (NEW["NEW_LastFullStartTime"] == "NaN")]
Oct_full_ran_successful = NEW[(NEW["LastFullResult"] == "Success") & (NEW["NEW_LastFullStartTime"] == "True")]

result = Oct_full_ran_successful.groupby('Cell').count()
result = result.drop(result.columns[[1, 2]], axis=1)
Oct_full_ran_successful = result.rename(columns={"LastFullResult": "Oct_full_ran_successful"})
d3 = Oct_full_ran_successful

result = Oct_Full_not_ran_yet.groupby('Cell').count()
result = result.drop(result.columns[[1, 2]], axis=1)
Oct_Full_not_ran_yet = result.rename(columns={"LastFullResult": "Oct_Full_not_ran_yet"})
d2 = Oct_Full_not_ran_yet

result = Oct_Full_ran_failed_not_completed_yet.groupby('Cell').count()
result = result.drop(result.columns[[1, 2]], axis=1)
Oct_Full_ran_failed_not_completed_yet = result.rename(
    columns={"LastFullResult": "Oct_Full_ran_failed_not_completed_yet"})
d1 = Oct_Full_ran_failed_not_completed_yet
d2 = pd.merge(d1, d2, on=['Cell'], how='outer')

result = pd.merge(d2, d3, on=['Cell'], how='outer')
print(result)
Error:
C:\Users\acharbha\AppData\Local\Continuum\anaconda3\python.exe C:/Users/acharbha/PycharmProjects/Python_class/Intel/Success_Rate/success_rate.py
Traceback (most recent call last):
File "C:/Users/acharbha/PycharmProjects/Python_class/Intel/Success_Rate/success_rate.py", line 25, in <module>
if isinstance(date1.month, int):
AttributeError: 'str' object has no attribute 'month'
Process finished with exit code 1
csv input sample:
Oct Full ran, failed & not completed yet: "LastFullStartTime" contains a current-month date and is non-empty && "LastFullResult" is Failed && "status code" is greater than 1
Oct Full not ran yet: "LastFullStartTime" is empty or the date is older than the current month
Oct full ran successful: "LastFullStartTime" contains a current-month date && "LastFullResult" is Success
Grand Total: count of BackupPolicyID for each distinct cell; should ideally be equal to the sum of the above 3 columns (1+2+3=4)
Success rate for full ran in Oct: above column1/(column1+column3) in percentage
Success rate of full backup: above column1/(column1+column2+column3) in percentage
Percentage of backup coverage: above (column1+column3)/column4
AI: You need to add some more explanation of what you are doing. Without guessing, we can only tell you what the error message already tells you: you are trying to access the month attribute of the date1 variable in your loop, but date1 is a string, not a date.
My only guess would be to convert the data to date type somehow, but this might also fail, as I don't know what the values look like in RD["LastFullStartTime"].
You could change this line, adding the converter pd.to_datetime():
NEW = {"Cell": RD['Cell'], "LastFullResult": pd.to_datetime(RD["LastFullResult"]), "LastFullStartTime": RD["LastFullStartTime"], "status code": RD["status code"]}
NEW = pd.DataFrame(NEW)
date = pd.datetime.now()
NEW_LastFullStartTime = [] |
H: When do you use FunctionTransformer instead of .apply()?
I'm watching a PyData talk from 2017 in which the speaker provides this example of how to use FunctionTransformer from sklearn.preprocessing:
from sklearn.preprocessing import FunctionTransformer
logger = FunctionTransformer(np.log1p)
X_log = logger.transform(X)
In other words, she's applying a function over the rows of a column. I assumed this could be done more simply using .apply(). I feel that there must be something more to the reason why a data analyst would import FunctionTransformer. Could someone help me understand what differentiates the .apply() method from FunctionTransformer?
AI: FunctionTransformer is useful because it allows you to apply a custom function in a pipeline. Because Pipeline() from sklearn.pipeline only works with objects that implement the .transform() and .fit() methods, you use FunctionTransformer to change your custom function to allow .transform() and/or .fit() to be used on it.
You could transform a DataFrame or Series by using .apply() (or something similar like a list comprehension), but you wouldn't be able to use that function in Pipeline() without first using Function Transformer.
(answer adapted from a DataCamp module "Multiple types of processing: FunctionTransformer" from the class "Machine Learning with the Experts: School Budgets")
Example:
# Import FunctionTransformer
from sklearn.preprocessing import FunctionTransformer
# Obtain the text data: get_text_data
get_text_data = FunctionTransformer(lambda x: x['text'], validate=False)
# Obtain the numeric data: get_numeric_data
get_numeric_data = FunctionTransformer(lambda x: x[['numeric', 'with_missing']], validate=False)
# Fit and transform the text data: just_text_data
just_text_data = get_text_data.fit_transform(sample_df)
# Fit and transform the numeric data: just_numeric_data
just_numeric_data = get_numeric_data.fit_transform(sample_df)
# Print head to check results
print('Text Data')
print(just_text_data.head())
print('\nNumeric Data')
print(just_numeric_data.head())
<script.py> output:
Text Data
0
1 foo
2 foo bar
3
4 foo bar
Name: text, dtype: object
Numeric Data
numeric with_missing
0 -10.856306 4.433240
1 9.973454 4.310229
2 2.829785 2.469828
3 -15.062947 2.852981
4 -5.786003 1.826475 |
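To show where FunctionTransformer actually pays off, here is a minimal sketch (hypothetical data) of dropping a custom function into a Pipeline, which a bare .apply() cannot do:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

X = np.random.rand(100, 3)                 # hypothetical numeric features
y = np.random.randint(0, 2, size=100)

log_pipeline = Pipeline([
    ('log', FunctionTransformer(np.log1p)),   # custom step made pipeline-compatible
    ('scale', StandardScaler()),
    ('clf', LogisticRegression()),
])
log_pipeline.fit(X, y)
print(log_pipeline.score(X, y))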
H: Backpropagation chain rule example
My question is in regards to an MIT course example.
The instructor delves into the backpropagation of this simple NN.
I have two questions.
Why do we seem to disregard the weights of the second layer (in blue)?
The red circle, should this not be $\partial y_2$ (followed by $\frac{\partial y_2}{\partial P_3}$)?
AI: Why do we seem to disregard the weights of the second layer (in blue)?
— Because the activation function comes after that block, so differentiating with respect to the block's output y is effectively the same as differentiating with respect to the weight.
The red circle, should this not be ∂y2 (followed by ∂y2/∂P3)?
— y2 and y1 are the outputs of the same block, i.e. both must be the same.
H: Help in understanding the maths behind Logistic Regression
I am following the lecture notes available https://www.stat.cmu.edu/~cshalizi/uADA/12/lectures/ch12.pdf
I cannot understand how Eqs. 12.4 and 12.5 come about:
why the Bernoulli probability has $1-p(x)$ in the denominator,
how come $p(x) = \exp(\beta + \beta^Tx)$
and how $\log \frac{p(x)}{1-p(x)}$ evaluates to $\beta + \beta^Tx$.
In general $\beta$ is the parameter of the model but I don't quite follow how come the log expression evaluates to it. Is there some mathematical formula which is skipped that is used to evaluate the log expression? This is crucial for me to know as these values are substituted in eq 12.10 where $p(x) = \exp(\beta + \beta^Tx)$
AI: This feels like a bit of a convoluted way to introduce the concept, but alright :D Let me start at a slightly different point.
Maybe in Machine Learning or in other places you have encountered the $sigmoid$ function:
$$ sigmoid(S) = \frac{e^S}{1+e^S} = \frac{1}{1+e^{-S}} $$
The sigmoid has the nice property to map any real number $S$ to a number between 0 and 1. This is super when dealing with models that have to represent probabilities.
As in the slides they are looking for a (conditional) probability $p(x)$ they go ahead and implicitly set
$$ p(S) = sigmoid(S) $$
They just do it around an extra corner. In order to be consistent with the slides (and to save some tedious writing) I'll now keep talking about the probability $p(x)$ and think of the $sigmoid$-function.
$p(S)$ (i.e. the $sigmoid$) has an inverse function, called the $logit$. Let's try to find the inverse of $p(S)$:
$$ p = \frac{1}{1+e^{-S}} $$
and from there we move some stuff around
$$ 1+e^{-S} = \frac{1}{p} $$
$$ e^{-S} = \frac{1}{p} - 1 $$
$$ -S = \log \left[\frac{1}{p} - 1 \right] $$
$$ S = \log \left[\frac{1}{\frac{1}{p} - 1} \right] $$
$$ S = \log \left[\frac{p}{1 - p} \right] $$
This is where they start in your notes. We can take any number $S$, punch it into the logit, solve for $p$ and Boom! we've got ourselves a nice conditional probability.
So the next question is: what is $S$? $S$ is supposed to be some function of $x$. There is a wide variety of functions you could use. You could even put a massive Neural Network, but for now, let's stick to linear regression
$$ S(x) = \beta_0 + \beta x $$
If you decide to go for $S(x)$ being linear, you can now go step 3. backwards and end up at step 1. with the expression they also show in the notes
$$ p(x) = \frac{e^{\beta_0 + \beta x}}{1+e^{\beta_0 + \beta x}} = \frac{1}{1+e^{-(\beta_0 + \beta x)}} $$
So to answer your questions:
$1-p(x)$ is not in the denominator of the distribution, it seems to me they just introduce it in the $\log\frac{p}{1-p}$ in order to end up with the $sigmoid$.
$p(x)$ is not equal to $e^{\beta_0 + \beta x}$, but equal to the $sigmoid$.
$\log \frac{p}{1-p}$ is declared to be equal to $\beta_0 + \beta x$ in order to end up with the $sigmoid$.
I think it would have been more straight-forward to say that you want to map a linear function to something between 0 and 1, which you can do with the sigmoid and then you would have been done in one step. It seems a bit weird to introduce the inverse of the sigmoid, claiming that this was some property you want and then solve for $p$, but that might be a matter of taste. |
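A quick numeric sanity check of the sigmoid/logit inverse relationship derived above:
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def logit(p):
    return np.log(p / (1.0 - p))

s = 0.7                       # any real number, e.g. the value of beta_0 + beta * x
p = sigmoid(s)
print(p)                      # a probability between 0 and 1
print(logit(p))               # recovers s, i.e. logit is the inverse of the sigmoid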
H: What to do with large number of collinear variables?
I have this time-series dataset that has 63 features, out of which 57 were manually engineered. While checking for collinearity, I get this correlation matrix:
As can be seen, there are a number of variables that are correlated/collinear. The ones that are deep red certainly need to be removed, but what about the ones in the bluer range? How do such variables (on the negative range of collinearity) affect regression models?
Also, I ran a recursive feature elimination (RFE) process from the sklearn.feature_selection module and it recommended 39 features as the best (at default settings). Is RFE the best strategy while dealing with such features?
AI: First, if you're doing regression on a time series, you have to check for autocorrelation or your p-values and significance tests are going to be highly inaccurate. Also, the R^2 value you will get will be seriously misleading if you don't account for autocorrelation.
Secondly, if you are using lagged variables, it's highly likely that many of them are going to be correlated.
Collinearity affects linear regression models by making it difficult for the model to determine which coefficient is causing the effect on the dependent variable. The result is that between highly collinear variables, you will have inaccurate p-values and very small coefficients or even coefficients with the wrong signs.
Multicollinearity can be ignored when you are using some of those variables as control variables or when you don't care very much about interpretation; furthermore, collinearity doesn't matter when the variables of interest are not collinear.
Cross-validated has many good discussions on regression as well. |
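As an additional diagnostic not mentioned above, the variance inflation factor (VIF) is a common way to quantify multicollinearity; a sketch assuming statsmodels is installed (X is a toy predictor frame):
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=['x1', 'x2', 'x3'])   # toy predictors
X['x4'] = X['x1'] * 0.95 + rng.normal(scale=0.1, size=200)                # collinear with x1

X_const = X.assign(const=1.0)
vif = pd.Series([variance_inflation_factor(X_const.values, i) for i in range(X_const.shape[1])],
                index=X_const.columns)
print(vif.drop('const').sort_values(ascending=False))   # rule of thumb: VIF > 5-10 is problematic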
H: Normalize / Standardize in a Random Forest?
If I have a matrix of co-occurring words in conversations of different lengths, is it appropriate to standardize / normalize the data prior to training?
My matrix is set up as follows: one row per two-person conversation, and columns are the words that co-occur between speakers. I cannot help but think that, as a longer conversation will likely comprise more shared words than shorter ones, I should factor this in somehow.
AI: Thanks for the clarification by commenting. Tree-based models do not care about the absolute value that a feature takes. They only care about the order of the values.
Hence, normalization is used mainly in linear models/knn/neural networks because they're affected by absolute values taken by features.
You don't need to normalize/standardize.
Check this post. |
H: cross validation issues
I have come here from this great answer. I have come across many approaches for using cross-validation, and the answer to the attached question explains it best to me by far. My dilemma is that I'm now not able to figure out what to use K-fold cross-validation for:
Testing overfitting?
Hyperparam tuning?
Any other use case?
and, just as importantly, how? I am unable to figure out what to do with the average score that comes out of K-fold cross-validation, what to do with the folds, and what to do with a model trained on K-1 folds of the training data.
AI: Answering the "what to do" point, if you use the scikit-learn GridSearchCV class (from sklearn.model_selection), you can get from it the following:
best params found among the ones you enter with the 'param_grid' input, based on the 'scoring' metric you want (i.e. roc_auc, recall...)
and, most importantly, you can directly access the best estimator (i.e. the model instantiated with the best hyperparameters found in the CV process), already refit on the whole training dataset.
I have seen some sources that do a "manual" retrain on the whole training set, but it is not necessary, as scikit-learn already lets you access the refit model on the whole train set :)
H: Feature engineering - house price prediction (small dataset)
I am working on the task of predicting real estate prices. My dataset has only 10 variables described below. I'm thinking about feature engineering but nothing comes to mind.
Variables:
street
city
zip code
rooms
bathrooms
square feet
type
price
latitude
longitude
AI: Latitude and longitude could be used for clustering. Then you could assign the resulting cluster_id to every house as a new feature.
It is also possible to use the geo coordinates to calculate distances to important points (to the city center, for example).
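A minimal sketch of both ideas (the DataFrame and the city-centre coordinate are toy placeholders):
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
df = pd.DataFrame({'latitude': rng.uniform(38.4, 38.8, 500),      # toy coordinates
                   'longitude': rng.uniform(-121.6, -121.3, 500)})

df['cluster_id'] = KMeans(n_clusters=10, random_state=0).fit_predict(df[['latitude', 'longitude']])

center_lat, center_lon = 38.58, -121.49                           # hypothetical city-centre point
df['dist_to_center'] = np.sqrt((df['latitude'] - center_lat) ** 2 +
                               (df['longitude'] - center_lon) ** 2)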
H: What dataset was Stanford NER trained on?
I would like to re-train the Stanford NER library from scratch as a 1 class model.
Only 3,4 and 7 class models are available out of the box.
Is it possible to obtain the data that the model was originally trained on?
AI: The original paper mentions two corpora: CoNLL 2003 (apparently here now) and the "CMU Seminar Announcements Task". However according to the page linked in the question the actual NER was trained on a larger combination of corpora:
Our big English NER models were trained on a mixture of CoNLL, MUC-6, MUC-7 and ACE named entity corpora, and as a result the models are fairly robust across domains.
So it might be difficult to obtain the exact original training data. However most of these corpora were compiled for some shared tasks and should be available online. There are probably more recent ones as well: a quick search "named entity recognition shared task" returns many hits. |
H: Same probability for all classes
I implemented a fully connected MLP of shape [783 (input), 128 (hidden layer) and 10 (output)]; the hidden layer had a sigmoid activation function and the output a softmax.
I tested with the dataset of keras: Classify images of clothing.
At first, the output was 0.1 at all output units, no matter the input. I then read this, and because someone asked about the weight initialization, I changed my weight initialization from a normal distribution between [0, 1) to [-1, 1). After that my network started working.
Why did this happen? I believe the prediction of 0.1 for everything is some kind of local minimum, because it just outputs the same probability for every class, which is at least what makes sense if you knew nothing about the data. But why? I would love to be referred to a paper that talks about this issue and how to prevent it, because I am now trying with another dataset and I got the same problem (but this time I could not make it work; I even tried Xavier initialization and still got no good result).
AI: Assuming that you normalized the pixel-values as it is done in the tutorial, your inputs were vectors of numbers betweeen 0 and 1. Now, if your weight matrices are also randomly drawn numbers between 0 and 1, the input to the hidden layer will be sums of 783 numbers between 0 and 1, i.e. probably something > 100. Now, check out the sigmoid function and its derivative
As you see, it saturates quite quickly for input values > 5. If you choose the initialization as you did, all hidden neurons should be very close to one, while at the same time the derivative should be very close to zero. This would explain how all outputs of the softmax were equally close to 0.1 and since the gradient was close to zero, the network probably didn't learn anything.
Once you changed the weight initialization to something between -1 and 1, the inputs to the hidden layer should be sums of numbers fairly evenly distributed around 0, thus the sigmoid output was about 0.5 and, most importantly, the gradient was non-zero so your network actually got trained.
As you already noticed, choosing the initialization wisely is crucial for getting decent results. Initializations should also take the number of input neurons into account, otherwise you initialize the network in such a way, that the gradients will be close to zero.
A similar problem can occur if you do not normalize the input data properly, e.g. if you feed the pixel-values between 0 and 255 directly to the model.
I'm not sure about papers on this, but maybe start with the original work on Glorot initialization or checkout the initializers Tensorflow has to offer. On that site under "Functions" they list common initializers and also link to the respective papers.
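A tiny numeric sketch of a Glorot/Xavier-style uniform initialization, where the limit shrinks with the layer size so the pre-activations stay in the non-saturated range of the sigmoid:
import numpy as np

n_in, n_out = 784, 128
# The limit scales with the layer fan-in/fan-out, keeping the sum over ~n_in inputs small
limit = np.sqrt(6.0 / (n_in + n_out))
W = np.random.uniform(-limit, limit, size=(n_in, n_out))
print(limit, W.std())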
I hope this was helpful. |
H: Intuition behind PCA eigenvectors
For undergraduate students who understand the definition of
eigenvectors and eigenvalues,
$$A v = \lambda v \;,$$
what is the intuition behind why the eigenvectors of the
covariance (or correlation) matrix correspond to the axes of maximal stretching?
Why specifically does that matrix lead to (e.g.) the largest eigenvector
corresponding to the direction of maximal spread in the data?
AI: Since the students know about eigenvectors and eigenvalues, I will assume that they know about Lagrangian multipliers as well.
Let's start with computing the variance of data $\vec{x}$ along a given direction $\vec{e}_a$. Since we only care about the direction, $\vec{e}_a$ is a vector of unit length, i.e.
$$\vec{e}_a^T \vec{e}_a = 1$$
So first, the component of the data along $\vec{e}_a$ is given by
$$ \vec{e}_a^T \vec{x}$$
The variance $\sigma_a^2$ of this component is given by
$$ \sigma_a^2 = E[(\vec{e}_a^T \vec{x} - \vec{e}_a^T \vec{\mu})^2] $$
where $\vec{\mu} = E[\vec{x}]$.
If we re-write this expression a little bit
$$ \sigma_a^2 \\
= E[(\vec{e}_a^T \vec{x} - \vec{e}_a^T \vec{\mu})(\vec{e}_a^T \vec{x} - \vec{e}_a^T \vec{\mu})] \\
= E[\vec{e}_a^T(\vec{x} - \vec{\mu})(\vec{x} - \vec{\mu})^T\vec{e}_a ] \\
= \vec{e}_a^T E[(\vec{x} - \vec{\mu})(\vec{x} - \vec{\mu})^T ] \vec{e}_a \\
= \vec{e}_a^T \Sigma_x \vec{e}_a
$$
where in the last step we use the covariance matrix $\Sigma_x = E[(\vec{x} - \vec{\mu})(\vec{x} - \vec{\mu})^T ]$.
So the variance along a given direction $\vec{e}_a$ is computed using $\Sigma_x$. So far, so good.
Let's now try to find the direction $\vec{e}_a^*$ where the variance is maximal. Since $\vec{e}_a^*$ is also supposed to only indicate the direction, we demand
$$\vec{e}_a^{*T}\vec{e}_a^{*} = 1 $$
With this we can formulate a constrained optimization problem using the Lagrangian multiplier $\lambda$
$$ L(\vec{e}_a) = \vec{e}_a^T \Sigma_x \vec{e}_a + \lambda (1 - \vec{e}_a^T \vec{e}_a ) $$
Taking the derivative with respect to $\vec{e}_a$ and equating it to zero, we end up with
$$ \frac{\partial L}{\partial \vec{e}_a} = \Sigma_x \vec{e}_a - \lambda \vec{e}_a = 0 $$
i.e.
$$ \Sigma_x \vec{e}_a = \lambda \vec{e}_a $$
This is a cool result. The direction of optimal (maximal) variance is a solution to the eigenvalue problem of $\Sigma_x$. Assuming that we found a normalized eigenvector $\vec{e}_b$, it is easy to interpret the eigenvalue $\lambda$
$$ \vec{e}_b^{T} \Sigma_x \vec{e}_b = \lambda = \sigma_b^{2} $$
So, to summarize:
the variance along a given direction is computed using the covariance matrix
the direction of maximal variance is a normalized eigenvector of the covariance matrix
the eigenvalues are the variances along the direction of the eigenvectors
This means, that the problem of finding the direction of maximal variance reduces to finding the eigenvector corresponding to the maximal eigenvalue. This is about as intuitive as it gets, I believe.
(I skipped the part where we need to show that all eigenvalues are non-negative and thus can be interpreted as variances, but this follows from $\Sigma_x$ being positive semi-definit.) |
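A quick numeric check of the result (the covariance values are arbitrary toy numbers):
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[3.0, 1.0], [1.0, 2.0]], size=5000)

Sigma = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(Sigma)        # eigh, since Sigma is symmetric
e_max = eigvecs[:, np.argmax(eigvals)]          # direction of maximal variance

print(eigvals.max())                            # largest eigenvalue ...
print(np.var(X @ e_max, ddof=1))                # ... equals the variance along e_max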
H: Convolutional neural networks for non image dataset
Can we use Convolutional Neural Networks on a non-image dataset for prediction?
The dataset is a record of student academic details
I know that CNN is mostly used in computer vision and image processing for analyzing visual imagery.
And it is also used in natural language processing for text classification.
I am new to this please guide me through
Thank you!!
AI: Short answer:
Yes, absolutely
Longer answer:
As a rule-of-thumb, you can always use any technique on any problem, as long as you define it properly. CNNs are meant to work on signal data. 1D signals can be sound waves, financial records, etc. 2D signals are usually images. 3D signals can be videos (with time being the 3rd dimension). And obviously, you can have as many dimensions as you'd like.
So as long as you can turn your input data (let's say a single training instance) into a signal, then be my guest. However, I fail to see how this dataset can be viewed as a signal. |
H: Problem importing dataset
I am new to machine learning and I am trying to build a classifier. My problem is that I am not able to import the dataset I need. In particular, I put my dataset in the Desktop and what I did is:
#pakages
import numpy as np
import pandas as pd
import jsonlines #edit
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import *
from sklearn.naive_bayes import *
from sklearn.metrics import confusion_matrix, classification_report
from sklearn import svm
#for visualizing data
import matplotlib.pyplot as plt
import seaborn as sns; sns.set(font_scale=1.2)
%matplotlib inline
print('Libraries imported.')
now, after these imports I want to use the function
training_set = pd.read_json('\Desktop\training_dataset.jsonl') #edit
print(training_set.head())
to import my dataset. The problem is that what i get is:
ValueError Traceback (most recent call last)
<ipython-input-16-f789503c3c7c> in <module>
----> 1 training_set = pd.read_json('\Desktop\training_dataset.json')
2 print(training_set.head())
~\Anaconda3\lib\site-packages\pandas\io\json\_json.py in
read_json(path_or_buf, orient, typ, dtype, convert_axes, convert_dates,
keep_default_dates, numpy, precise_float, date_unit, encoding, lines,
chunksize, compression)
590 return json_reader
591
--> 592 result = json_reader.read()
593 if should_close:
594 try:
~\Anaconda3\lib\site-packages\pandas\io\json\_json.py in read(self)
715 obj =
self._get_object_parser(self._combine_lines(data.split("\n")))
716 else:
--> 717 obj = self._get_object_parser(self.data)
718 self.close()
719 return obj
~\Anaconda3\lib\site-packages\pandas\io\json\_json.py in
_get_object_parser(self, json)
737 obj = None
738 if typ == "frame":
--> 739 obj = FrameParser(json, **kwargs).parse()
740
741 if typ == "series" or obj is None:
~\Anaconda3\lib\site-packages\pandas\io\json\_json.py in parse(self)
847
848 else:
--> 849 self._parse_no_numpy()
850
851 if self.obj is None:
~\Anaconda3\lib\site-packages\pandas\io\json\_json.py in
_parse_no_numpy(self)
1091 if orient == "columns":
1092 self.obj = DataFrame(
-> 1093 loads(json, precise_float=self.precise_float),
dtype=None
1094 )
1095 elif orient == "split":
ValueError: Expected object or value
and I can't understand why. Can somebody please help me? Thank's in advance.
[EDIT] The file is a .jsonl file, but I still don't know how to import the dataset, because I cannot use .read_json.
I have tried this:
openfile=open('Desktop\training_dataset.jsonl')
jsondata=json.load(openfile)
df=pd.DataFrame(jsondata)
openfile.close()
print(df)
but gives me the following error message:
OSError Traceback (most recent call last)
<ipython-input-28-2422c1a9a77b> in <module>
----> 1 openfile=open('Desktop\training_dataset.jsonl')
2 jsondata=json.load(openfile)
3 df=pd.DataFrame(jsondata)
4 openfile.close()
5 print(df)
OSError: [Errno 22] Invalid argument: 'Desktop\training_dataset.jsonl'
[EDIT 2] By doing as suggested:
with open("\Desktop\training_dataset.jsonl") as datafile:
data = json.load(datafile)
dataframe = pd.DataFrame(data)
I again obtain another error message, which is:
OSError Traceback (most recent call last)
<ipython-input-47-1365f26e6db5> in <module>
----> 1 with open("\Desktop\training_dataset.jsonl") as datafile:
2 data = json.load(datafile)
3 dataframe = pd.DataFrame(data)
OSError: [Errno 22] Invalid argument: '\\Desktop\training_dataset.jsonl'
but I don't understand, because my dataset is placed on my desktop.
AI: You can follow the steps below to load a JSON file.
First, check whether the file is valid JSON using https://jsonlint.com/. Once you have confirmed the file is valid JSON, use the code below to read it.
with open("training_dataset.json") as datafile:
data = json.load(datafile)
dataframe = pd.DataFrame(data)
I hope the above will help you. |
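Since the file in the question is actually JSON Lines (.jsonl), a sketch of reading it directly with pandas (the path is a placeholder; the raw string avoids backslash escape sequences such as '\t', which is likely what triggers the "[Errno 22] Invalid argument" above):
import pandas as pd

# lines=True tells pandas to treat each line as a separate JSON record (JSON Lines format)
training_set = pd.read_json(r"C:\Users\<user>\Desktop\training_dataset.jsonl", lines=True)
print(training_set.head())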
H: Dealing with irrelevant features in dataset (Homework)
I have a specific question pertaining to one of my machine learning homeworks.
Basically, we are required to build a model that takes a 5000*10000 dataset X (5000 examples each with 10000 features), and predict Z, which is a 5000*2 Matrix. The dataset Z is synthetically generated from X, however, it only depends on two of those features from X. That is, only 2 of the 10000 features are relevant. Z has only 3 possible classes, [0,0], [1,0] and [1,1], and is not balanced. All of the features of X are sampled equally from a normal distribution, the only distinguishing feature is that Z is calculated from just 2 of them.
I thought that using PCA might be a good idea, and tried this. However, I had two problems. Firstly calling np.linalg.eig on X@X.T or X.T@X takes an unreasonable amount of time. And secondly, when I used np.linalg.eigh, it was slightly less computationally prohibitive, but many of the eigenvalues were actually very large. This makes sense to me as well since nothing actually distinguishes X except their relationship to Z.
I then built a simple neural network: linear -> tanh -> linear -> sigmoid. This descended fine; however, it just learned to mirror the underlying distribution of the dataset. That is, it didn't learn which parameters of X were relevant and instead learned to predict in a way that minimized the overall error of what is effectively random guessing.
Can anyone suggest some techniques that might help me to solve my problem?
AI: This could be many things.
Since you want to calssify the data into three classes, I would use one-hot-encoding, rather than the binary enumeration, because you kind of introduce the information to the model that [1,0] and [1,1] are "closer" than [0,0] and [1,1] for example. Unless this is actually the case, better encode the classes like [1,0,0], [0,1,0] and [0,0,1].
That the eigenvalues in your PCA are large can have two reasons: a) you need to divide XX^T by the number of data points, as the PCA is computed on an approximation of the covariance matrix, which in turn is based on an expectation value
b) not sure if you did, but make sure to center your data first
I think you discovered this yourself already, but whether the PCA is useful on your problem actually depends on your data distribution and how you computed Z. It sounds a bit as if there was no covariance between the features, so there are no principal components to discover. And in case you picked the features to compute Z randomly, you have no guarantee that the principal components tell you anything about Z.
Maybe do a PCA on the data and append Z to X. This might help with dimensionality reduction or identifying relevant features.
Since you have many features and use saturating activation functions, it is important that you normalize the data before passing it to the model and that you initialize the weights properly, otherwise the neurons will saturate and the gradient will be almost zero. Also training might be very slow anyway, as with all the "useless" features you introduce a lot of noise.
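Regarding the first point about one-hot encoding, a minimal sketch (assuming the labels are stored as integer class indices 0, 1, 2, which is an assumption about your setup):
import numpy as np

labels = np.array([0, 2, 1, 0])      # hypothetical class indices
one_hot = np.eye(3)[labels]          # rows like [1, 0, 0], [0, 0, 1], ...
print(one_hot)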
I hope this helps. If not, feel free to post some more information on your training data and the neural network you wanted to use. |
H: How to choose a model for this cross-validation curve?
I'm using GridSearchCV to tune hyperparameters for a Logistic Regression multiclass model.
I read on Kaggle that you should choose the hyperparameter that results in the lowest discrepancy between the CV-score and the training score, but in this case this leads to a very low score.
How should I choose the proper C value to ensure generalisability of the model but also high model performance based on the CV-curve below?
From my understanding opting for low discrepency between the two scores ensures the ability of the model to be generalised to unseen data. But on the other hand I want a score as high as possible on unseen data.
Thanks for any help!
AI: Choosing the best validation accuracy is the common practice, since validation is unseen data.
Sometimes you might have over-fitting to the validation set, mainly if it is too small or no very representative of the data (for example if it has considerably more examples of one class, thus a good model would be a model that says that (almost) everything belongs to that class).
If you are worried about over-fitting, you could increase your regularization strength. |
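As a rough sketch of such a search (X and y stand for your training features and labels; the C grid is just an example):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {"C": np.logspace(-3, 3, 13)}
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)  # best C by cross-validated score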
H: Best file format for transfer of EHR data
I am working on a clinical trial where we have several sites sending us EHR data. The sites are currently sending the data in excel files. I have a feeling someone's opening the files because 3 of the files have 64,999 rows exactly, and excel 2007 cuts off at 65,000.
I am working in python, but I am trying to prevent the people at the local sites from opening the files in excel.
What's the best format for the files to be sent to me such that they can't be opened and cut off in excel? csv unfortunately can also be opened in excel.
AI: If you are using Python, I suggest using NumPy's npy format. An NPY file is a NumPy array file created by the Python software package with the NumPy library installed. It contains an array saved in the NumPy (NPY) file format. NPY files store all the information required to reconstruct an array on any computer, which includes dtype and shape information.
You can also consider using hdf or hdf5 formats both of which are supported in Python. Hierarchical Data Format (HDF) is an open source file format for storing huge amounts of numerical data. It’s typically used in research applications (meteorology, astronomy, genomics etc.) to distribute and access very large datasets without using a database. One can use HDF5 data format for pretty fast serialization to large datasets. |
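A minimal sketch of both options, assuming df is a site's DataFrame (a hypothetical name); note that to_hdf/read_hdf need the optional PyTables ("tables") dependency installed:
import numpy as np
import pandas as pd

np.save("site_data.npy", df.to_numpy())            # binary NumPy format
arr = np.load("site_data.npy", allow_pickle=True)  # object dtype needs allow_pickle

df.to_hdf("site_data.h5", key="ehr")               # HDF5 via pandas
df_back = pd.read_hdf("site_data.h5", key="ehr")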
H: Why Keras Dense layer is expanding number of tensors in each layer
I have following model:
and I wonder why the number of parameters is different for e.g. dense_2477 and dense_2482? Both layers have the same number of neurons, so why do they report different parameter counts?
AI: The number of parameters is the number of weights connecting the layer to the previous one, so for the layer $i$ it depends on the number of neurons in the layer $i$ and the previous one $i-1$.
The exact formula for a fully connected neural network is: $$n_i(n_{i-1}+1),$$ where $n_i$ is the number of neurons in the layer $i$, $n_{i-1}$ the number of neurons in the layer $i-1$, and the $+1$ term takes into account the bias.
So for dense_2477 you indeed get $$n_{\mathrm{params}}=8(16+1) = 136,$$ and
for dense_2482, $$n_{\mathrm{params}}=16(8+1) = 144,$$ as expected.
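For a quick sanity check, here is a minimal Keras sketch with the same layer sizes (the layer names in your model will of course differ):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(8, input_shape=(16,)),   # 8*(16+1) = 136 parameters
                    Dense(16)])                    # 16*(8+1) = 144 parameters
print([layer.count_params() for layer in model.layers])  # [136, 144]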
H: TS-SS and Cosine similarity among text documents using TF-IDF in Python
A common way of calculating the cosine similarity between text based documents is to calculate tf-idf and then calculating the linear kernel of the tf-idf matrix.
TF-IDF matrix is calculated using TfidfVectorizer().
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words='english')
tfidf_matrix_content = tfidf.fit_transform(article_master['stemmed_content'])
Here article_master is a dataframe containing the text content of all the documents.
As explained by Chris Clark here, TfidfVectorizer produces normalised vectors; hence the linear_kernel results can be used as cosine similarity.
cosine_sim_content = linear_kernel(tfidf_matrix_content, tfidf_matrix_content)
This is where my confusion lies.
Effectively the cosine similarity between 2 vectors is:
InnerProduct(vec1,vec2) / (VectorSize(vec1) * VectorSize(vec2))
Linear kernel calculates the InnerProduct as stated here
So the questions are:
Why am I not dividing the inner product with the product of the magnitudes of the vectors ?
Why does the normalisation exempt me of this requirement ?
Now if I wanted to calculate ts-ss similarity, could I still use
the normalised tf-idf matrix and the cosine values (calculated by
linear kernel only) ?
AI: Normalised vectors have magnitude 1, so it doesn't matter if you explicitly divide by the magnitudes or not. It's mathematically equivalent either way.
I see no reason that you couldn't use normalised vectors in TS-SS, but it seems that the main motivation for using TS-SS in the first place is that it makes more sense for vectors that may have different magnitudes. I would try both cosine similarity and TS-SS for your problem, and see if there is a noticeable performance difference. |
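As a quick empirical check of the first point (reusing tfidf_matrix_content from the question), linear_kernel and cosine_similarity should agree on L2-normalised tf-idf rows:
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, cosine_similarity

lk = linear_kernel(tfidf_matrix_content, tfidf_matrix_content)
cs = cosine_similarity(tfidf_matrix_content, tfidf_matrix_content)
print(np.allclose(lk, cs))  # True, because each row already has unit norm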
H: Why is the variance going down so much in this weight initialization problem(using pytorch)?
first look at this example
>>> x = t.randn(512)
>>> w = t.randn(512, 500000)
>>> (x @ w).var()
tensor(513.9548)
it makes sense that the variance is close to 512, because each of the 500000 outputs is a dot product of two 512-dimensional vectors whose entries are sampled from a distribution with standard deviation 1 and mean 0.
However, I wanted the variance to go down to 1, and consequently the std to be 1, since the standard deviation is the square root of the variance.
To do this I tried the below
>>> x = t.randn(512)
>>> w = t.randn(512, 500000) * (1/512)
>>> (x @ w).var()
tensor(0.0021)
However the variance is actually now 512 / 512 / 512 instead of 512/ 512
In order to do this correctly, I needed to try
>>> x = t.randn(512)
>>> w = t.randn(512, 500000) * (1 / (512 ** .5))
>>> (x @ w).var()
tensor(1.0216)
Why is that the case?
AI: Basically, because the variance is quadratic in w.
You can consider your problem as computing the variance over scalar products $z = \vec{x}\vec{w}$ by drawing 500000 samples of $\vec{w}$ from $p(\vec{w})$. Let's compute the mean first
$$ E_{p(\vec{w})} [ \vec{w}^T\vec{x} ] = \vec{\mu}_w^T \vec{x} = 0$$
because you draw the $\vec{w}$ from a zero-centered Gaussian. With the expectation value as zero, the variance is given by
$$ E_{p(\vec{w})} [ (\vec{w}^T\vec{x})^2 ] \\
= E_{p(\vec{w})} [ \vec{x}^T\vec{w} \vec{w}^T \vec{x} ] \\
= \vec{x}^T E_{p(\vec{w})} [ \vec{w} \vec{w}^T ] \vec{x}
$$
$E_{p(\vec{w})} [ \vec{w} \vec{w}^T ]$ is the covariance matrix of the $\vec{w}$, and since they stem from independent standard normal distributions, it is diagonal with $\sigma_w^2 = 1$ on the diagonal. So that gives
$$
E_{p(\vec{w})} [ (\vec{w}^T\vec{x})^2 ] = \vec{x}^T \vec{x} \approx 512
$$
as you are approximating the sum of variances of 512 independent samples of a standard Gaussian. If you now scale $\vec{w}$ by a factor $1/k$, you can do this calculation again and it will give you
$$
E_{p(\vec{w})} [ (\frac{1}{k}\vec{w}^T\vec{x})^2 ] \\
= \frac{1}{k^2} E_{p(\vec{w})} [ (\vec{w}^T\vec{x})^2 ] \\
\approx \frac{1}{k^2} 512
$$
So in order to end up with a variance of 1 in the very end, you need $k=\sqrt{512}$, i.e. you multiply the weights by $1/\sqrt{512}$.
I hope this answers your question. |
H: Turning Histogram values into Numerical format ( Excel-xslx, Pandas-DataFrame, etc.)
I am trying to do a correlation study about personality traits as described in Hofstede's model: https://www.hofstede-insights.com/product/compare-countries/ . I would like to have the values described in the bar charts in numerical form, say in an Excel or pandas file. Is there a way of scraping, or using an API, that would turn the bar chart values into numbers associated with each country? I looked at similar questions, but the closest I found was about transforming categorical data given in a file and did not involve scraping/APIs to change categorical into numerical.
Thanks.
AI: The data is available in json format through this link, which contains the data for all countries that can be selected from the list. I found this link by going to the link you provided, opening the Network tab in the Chrome developer tools and reloading the page to see all the resources that are loaded by the webpage. |
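As a rough sketch of what loading such an endpoint could look like (the URL below is a placeholder, substitute the actual json link found in the Network tab; writing xlsx needs openpyxl installed):
import pandas as pd
import requests

url = "https://example.com/country-comparison.json"  # placeholder endpoint
payload = requests.get(url).json()
df = pd.DataFrame(payload)                 # exact shape depends on the actual JSON structure
df.to_excel("hofstede_scores.xlsx", index=False)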
H: Supported GPU for Pytorch
Question
Which GPUs are supported in Pytorch and where is the information located?
Background
Almost all articles of Pytorch + GPU are about NVIDIA. Is NVIDIA the only GPU that can be used by Pytorch? If not, which GPUs are usable and where I can find the information?
AI: That's correct, you need an NVIDIA GPU compatible with CUDA 8, 9 or 10: https://pytorch.org/get-started/locally/
NVIDIA has a list of compatible cards here: https://developer.nvidia.com/cuda-gpus#compute
Alternatively you could work on a GPU equipped cloud instance (or install pytorch without GPU). |
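To check from Python whether your installed PyTorch build actually sees a usable GPU:
import torch

print(torch.cuda.is_available())          # False means a CPU-only build or no CUDA GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first visible device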
H: Square-law based RBF kernel
What is the Square-law based RBF kernel (SQ-RBF)? The definition in the table at the Wikipedia article Activation Function looks wrong, since it says
y =
1 - x^2/2 for |x| <= 1
2 - (2-x^2)/2 for 1 < |x| <= 2
0 for |x| > 2
but this makes it discontinuous at x = 1
AI: That is indeed quite odd. I can't really find many other sources on SQ-RBF, except here, where the definition differs from wikipedia, but when I plot this, it does not recover the image they show. However, if you define
y =
1 - x^2/2 for |x| <= 1
0.5*(|x|-2)^2 for 1 < |x| <= 2
0 for |x| > 2
you can recover a sensible shape. |
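A minimal NumPy version of this continuous definition, with a few spot checks:
import numpy as np

def sq_rbf(x):
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    return np.where(ax <= 1, 1 - x**2 / 2,
                    np.where(ax <= 2, 0.5 * (ax - 2)**2, 0.0))

print(sq_rbf([0.0, 1.0, 1.5, 2.0, 3.0]))  # [1.0, 0.5, 0.125, 0.0, 0.0]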
H: R packages: How to access csv files in data subfolder?
I have successfully written an R package and want to ship it with a specific csv file. I placed the file in the data and data-raw subfolders.
read.csv("data/foobar.csv")
The above command fails. How can I read the csv file?
AI: data-raw is for storing data alongside a short R script that will do the conversion to R data for the user, and the user will just use the data() function.[source] Alternatively, if you want the raw CSV to be user-accessible, I think you need to use the extdata folder, as documented here. Then the user can get the actual path to the file on their system, after package installation, with system.file("extdata", ..., package = "mypackage"). Then, finally, they can feed that path to read.csv() with whatever options they like. |
H: Suggestions on how to explain 'models' & 'algorithms'
I guess other members of this Stack have run into this before, but I may be wrong: Have you ever been approached and asked to explain the difference between models and algorithms? This happened to me recently and, while I feel that I explained it well, there is a colleague who disagrees with my assessment.
How I see the two:
Algorithms: Used to train a model, that is, give it instructions and process the input.
Model: A diagram of sorts, that incorporates utilizing algorithms to train inputs. Can be reused on similar data.
Pre-Trained Model: A model, usually trained on larger data sets and can be utilized to build [your] own models with other data.
Thoughts? I'd appreciate them to see if I am just completely off base here.
AI: I think your definitions are mostly right. I would add that a model is a specific implementation of one or more algorithms, designed to handle a specific issue or group of issues.
Regarding pre-trained models, my understanding is that they can indeed be used to build other models, but the main use case is simply using them to predict or classify on new data, having already been trained previously.
H: Why I am having ValueError in this Linear Regression?
from sklearn.linear_model import LinearRegression
ClosePrices = data['Close'].tolist()
OpenPrices = data['Open'].tolist()
OpenPrices = np.reshape(OpenPrices, (len(OpenPrices), 1))
ClosePrices = np.reshape(ClosePrices, (len(ClosePrices), 1))
regressor = LinearRegression()
regressor.fit(OpenPrices, ClosePrices)
I am having the error
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
What is the solution?
AI: Your dataset most likely contains missing (NaN) values. To be sure about the error, it would help a lot if you can show us the dataset you are using for the regression. |
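A minimal sketch for locating and removing the offending values, reusing the same data DataFrame from the question:
import numpy as np

print(data[['Open', 'Close']].isna().sum())   # how many NaNs per column
clean = data[['Open', 'Close']].replace([np.inf, -np.inf], np.nan).dropna()
OpenPrices = clean['Open'].values.reshape(-1, 1)
ClosePrices = clean['Close'].values.reshape(-1, 1)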
H: Why are my predictions bad, if my accuracy in train is roughly 100% (Keras CNN)
In my CNN I have to handle 2 classes in a binary setting. I have 700 images per class for training, and others for validation. This is my train.py:
#import tensorflow as tf
import cv2
import os
import numpy as np
from keras.layers.core import Flatten, Dense, Dropout, Reshape
from keras.models import Model
from keras.layers import Input, ZeroPadding2D, Dropout
from keras import optimizers
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping
from keras.applications.vgg16 import VGG16
TRAIN_DIR = 'train/'
TEST_DIR = 'test/'
v = 'v/'
BATCH_SIZE = 32
NUM_EPOCHS = 5
def crop_img(img, h, w):
h_margin = (img.shape[0] - h) // 2 if img.shape[0] > h else 0
w_margin = (img.shape[1] - w) // 2 if img.shape[1] > w else 0
crop_img = img[h_margin:h + h_margin,w_margin:w + w_margin,:]
return crop_img
def subtract_gaussian_blur(img):
return cv2.addWeighted(img, 4, cv2.GaussianBlur(img, (0, 0), 5), -4, 128)
def ReadImages(Path):
LabelList = list()
ImageCV = list()
classes = ["nonPdr", "pdr"]
FolderList = [f for f in os.listdir(Path) if not f.startswith('.')]
for File in FolderList:
for index, Image in enumerate(os.listdir(os.path.join(Path, File))):
ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (224,224)))
LabelList.append(classes.index(os.path.splitext(File)[0]))
img_crop = crop_img(ImageCV[index].copy(), 224, 224)
ImageCV[index] = subtract_gaussian_blur(img_crop.copy())
return ImageCV, LabelList
data, labels = ReadImages(TRAIN_DIR)
valid, vlabels = ReadImages(TEST_DIR)
vgg16_model = VGG16(weights="imagenet", include_top=True)
base_model = Model(input=vgg16_model.input,
output=vgg16_model.get_layer("block5_pool").output)
base_out = base_model.output
base_out = Reshape((25088,))(base_out)
top_fc1 = Dense(4096, activation="relu")(base_out)
top_fc1 = Dropout(0.5)(base_out)
top_fc1 = Dense(4096, activation="relu")(base_out)
top_fc1 = Dropout(0.5)(base_out)
top_fc1 = Dense(64, activation="relu")(base_out)
top_fc1 = Dropout(0.5)(base_out)
top_preds = Dense(1, activation="sigmoid")(top_fc1)
for layer in base_model.layers[0:14]:
layer.trainable = False
model = Model(input=base_model.input, output=top_preds)
sgd = SGD(lr=1e-4, momentum=0.9)
model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=["accuracy"])
data = np.asarray(data)
valid = np.asarray(valid)
data = data.astype('float32')
valid = valid.astype('float32')
data /= 255
valid /= 255
labels = np.array(labels)
perm = np.random.permutation(len(data))
data = data[perm]
labels = labels[perm]
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
datagen.fit(data)
mean = datagen.mean #This result I put manually in predict.py
std = datagen.std #This result I put manually in predict.py
print(mean, "mean")
print(std, "std")
es = EarlyStopping(monitor='val_loss', verbose=1)
model.fit_generator(datagen.flow(data, np.array(labels), batch_size=32),
steps_per_epoch=len(data) / 32, epochs=15,
validation_data=(valid, np.array(vlabels)),
nb_val_samples=72, callbacks=[es])
model.save('model.h5')
And after Run this code, it return a strange result of roughly 100% of accuracy after 5 or 6 epochs. So I try to run my predict.py code: (I know that I have to encapsulate some methods, but for now I just copy and paste all from train)
from keras.models import load_model
import cv2
import os
import numpy as np
TEST_DIR = 'v/0/'
pdr = 0
nonPdr = 0
model = load_model('model.h5')
def normalize(x, mean, std):
x[..., 0] -= mean[0]
x[..., 1] -= mean[1]
x[..., 2] -= mean[2]
x[..., 0] /= std[0]
x[..., 1] /= std[1]
x[..., 2] /= std[2]
return x
def crop_img(img, h, w):
h_margin = (img.shape[0] - h) // 2 if img.shape[0] > h else 0
w_margin = (img.shape[1] - w) // 2 if img.shape[1] > w else 0
crop_img = img[h_margin:h + h_margin,w_margin:w + w_margin,:]
return crop_img
def subtract_gaussian_blur(img):
return cv2.addWeighted(img, 4, cv2.GaussianBlur(img, (0, 0), 5), -4, 128)
for filename in os.listdir(r'v/0/'):
if filename.endswith(".jpg") or filename.endswith(".ppm") or filename.endswith(".jpeg") or filename.endswith(".png"):
ImageCV = cv2.resize(cv2.imread(os.path.join(TEST_DIR) + filename), (224,224))
img_crop = crop_img(ImageCV.copy(), 224, 224)
ImageCV = subtract_gaussian_blur(img_crop.copy())
ImageCV = np.asarray(ImageCV)
ImageCV = ImageCV.astype('float32')
ImageCV /= 255
ImageCV = np.expand_dims(ImageCV, axis=0)
ImageCV = normalize(ImageCV, [0.23883381, 0.23883381, 0.23883381], [0.20992693, 0.25749, 0.26330808]) #Values from train
prob = model.predict(ImageCV)
if prob <= 0.75: #.75 = 80% | .70=79% >>>> .70 = 82% | .75 = 79%
print("nonPDR >>>", filename)
nonPdr += 1
else:
print("PDR >>>", filename)
pdr += 1
print(prob)
print("Number of retinas with PDR: ",pdr)
print("Number of retinas without PDR: ",nonPdr)
The problem is: when I try to predict, roughly all of my predictions are poor (the prediction is nonPdr, i.e. class 0, for all images). I already tried removing the data augmentation to test, and the result doesn't change the way I want. I also tried changing my model and the preprocessing (this preprocessing is the best I can use for this project), and nothing helps.
How can I deal with this?
UPDATE
As @serali said, I tried to cut some layers to reduce the overfitting. This is my model now:
vgg16_model = VGG16(weights="imagenet", include_top=True)
#visualize layers
print("VGG16 model layers")
for i, layer in enumerate(vgg16_model.layers):
print(i, layer.name, layer.output_shape)
# (2) remove the top layer
base_model = Model(input=vgg16_model.input,
output=vgg16_model.get_layer("block1_pool").output)
# (3) attach a new top layer
base_out = base_model.output
top_fc1 = GlobalAveragePooling2D()(base_out)
top_fc2 = Dense(16, activation='relu')(top_fc1)
top_fc3 = Dropout(0.5)(top_fc2)
top_preds = Dense(1, activation="sigmoid")(top_fc3)
# (5) create new hybrid model
model = Model(input=base_model.input, output=top_preds)
As you can see, I cut in the first convolutional block, so my model looked like this:
0 input_1 (None, 224, 224, 3)
1 block1_conv1 (None, 224, 224, 64)
2 block1_conv2 (None, 224, 224, 64)
3 block1_pool (None, 112, 112, 64)
top_fc1 = GlobalAveragePooling2D()(base_out)
top_fc2 = Dense(16, activation='relu')(top_fc1)
top_fc3 = Dropout(0.5)(top_fc2)
top_preds = Dense(1, activation="sigmoid")(top_fc3)
But when I try to predict even the same images I trained on, the prediction is wrong (with unseen images the result is the same). So, how can I improve this?
AI: This phenomenon is called overfitting. In short it means that your CNN has memorized the dataset, achieving $100\%$ training accuracy. This knowledge, however, doesn't generalize well to unseen data.
I'd suggest reading this post for more details on overfitting and ways to combat it. |
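As a hedged sketch of common mitigations on top of the frozen VGG16 base from the question (a smaller head, L2 weight decay, dropout and early stopping; exact argument names can vary slightly between Keras versions):
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten
from keras.regularizers import l2
from keras.callbacks import EarlyStopping

x = Flatten()(base_model.output)
x = Dense(64, activation="relu", kernel_regularizer=l2(1e-4))(x)
x = Dropout(0.5)(x)
top_preds = Dense(1, activation="sigmoid")(x)
model = Model(inputs=base_model.input, outputs=top_preds)
es = EarlyStopping(monitor="val_loss", patience=3)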
H: how to split the original data in training, validation and testing?
I have the original data but I don't know how to split it or how to feed it into some algorithms. Can you guys help me out with this problem?
thank you
AI: I think you should start with some tutorials to understand the cycle of a data project; there are normally several stages, like preparing and cleaning the data, etc.
There are many free resources and courses, on Coursera for example, that you can find by searching for "data science" or "machine learning".
Regarding your specific question, I think a good place to start might be here https://www.kaggle.com/learn/intro-to-machine-learning
An example of splitting and validating can be found in section 4 |
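As a quick illustration, a minimal 60/20/20 train/validation/test split with scikit-learn (X holds your features and y your labels, which are hypothetical names here):
from sklearn.model_selection import train_test_split

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)
print(len(X_train), len(X_val), len(X_test))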
H: suggestion for good online source according to Syllabus
Hi, are there some online courses, e.g. some classes on Coursera?
I have difficulty following the professor's teaching because I have a weak statistics background. I want to catch up by reading some complementary online resources!
Thank you!
AI: A good deal of the topics, excluding the more advanced ones like autoencoders or manifold learning, can be learned in Andrew Ng's excellent course: https://www.coursera.org/learn/machine-learning
There's also https://www.coursera.org/specializations/deep-learning?utm_source=deeplearningai&utm_medium=institutions&utm_campaign=WebsiteCoursesDLSBottomButton that goes deeper into aspects of deep learning
For generative models I don't have specific recommendations... |
H: Genetic algorithms: what connection to support vector machine / naive bayes
I found the following list of seven classifiers:
Linear Classifiers: Logistic Regression, Naive Bayes Classifier
Nearest Neighbor
Support Vector Machines
Decision Trees
Boosted Trees
Random Forest
Neural Network
What are genetic algorithms, and why aren't they considered as part of the seven classifiers?
AI: A quick Google search for the term "Genetic Algorithms" will return this answer:
"A genetic algorithm is a search heuristic that is inspired by Charles Darwin's theory of natural evolution"
So, the term is usually associated with a "search heuristic", for example the search for a local optima.
By that common definition, genetic algorithms are not listed among these 7 classifiers because a genetic algorithm is not a classifier; it would belong in a list of optimization methods like gradient descent (and its variations), grid search, random search, etc.
H: Problem building a feature vector
I am trying to build a classifier for malware analysis: based on the instructions of an assembly code, such as push, mov, ..., I want to predict the compiler and, in a second step, the optimization option, and I am having some trouble. My code is the following:
#pakages
import numpy as np
import pandas as pd
import json as j
import re
import nltk
from nltk.tokenize import word_tokenize
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import *
from sklearn.metrics import confusion_matrix, classification_report
from sklearn import svm
#for visualizing data
import matplotlib.pyplot as plt
import seaborn as sns; sns.set(font_scale=1.2)
%matplotlib inline
json_data = None;
with open('training_dataset.jsonl') as data_file:
lines = data_file.readlines()
joined_lines = "[" + ",".join(lines)+"]"
json_data = j.loads(joined_lines)
data = pd.DataFrame(json_data)
data.head()
which gives:
now, when I look at:
len(data['instructions'])
I have as output : 30000
but if I do the following:
for value in data['instructions'].iteritems():
myList = list(value[1]);
myList
opcodes = [instruction.split()[0] for instruction in myList]
len(opcodes)
I get as output : 151
Why don't I get an output of 30000? I don't understand why I have fewer elements. I want to use the opcodes to build a feature vector, but I don't understand why the number of elements becomes so low.
Can somebody help me? Thanks in advance.
[EDIT] if it can be useful, if I do:
data['instructions']
I get as output:
AI: Analyse this code
for value in data['instructions'].iteritems(): # iterate over each row of the instructions column
    myList = list(value[1]); # myList is overwritten with the current row's instruction list on every iteration
myList
opcodes = [instruction.split()[0] for instruction in myList]
len(opcodes)
your "myList" variable only holds 1 of the lines (the last one) of your data, which is probably the number of instructions in that line of your dataframe
While the 30k (I don't know how panda works here) is probably the number of lines in that column (or very unlikely the sum of all instructions in every line) |
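If you want the first token of every instruction across all rows (rather than only the last row), a small sketch that accumulates with extend instead of overwriting:
opcodes = []
for _, instructions in data['instructions'].iteritems():
    opcodes.extend(instr.split()[0] for instr in instructions)
print(len(opcodes))   # total number of instructions over all 30000 rows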
H: How to Keep Missing Values in Ordinal Logistic Regression
I’m using mord package in python to do ordinal logit regression (predict response to movie rating 1-5 stars).
One of my predictor variables is also ordinal but there are some missing values where the viewer skipped a question because it wasn’t applicable due to skip logic from a prior question or because they missed it.
What’s the best way to indicate a value is “missing” and/or “not applicable” while also retaining the ordinal nature of this predictor variable for everyone else? I don’t think I should delete this viewer or try to impute the value.
I get an error if I leave the NaN. I thought about dummy coding so I have something like question5_never, question5_sometimes, question5_always, question5_na, question5_missing, but I am not sure.
AI: This problem refers to different missing data mechanisms.
When it comes to missing data, there are three different types of missing data mechanism:
Missing completely at random
Missing at random
Missing not at random
For the cases you mentioned in your problem are:
(1) missing values where the viewer skipped a question because it wasn’t applicable due to skip logic from a prior question
These values are missing under the Missing not at random mechanism. Removing them can introduce a bias in the model, so you should not delete them. You can instead set an explicit value indicating that they are missing (e.g. "not applicable").
(2) missing values because the viewers missed it
These values are missing under the Missing completely at random mechanism. You can simply delete them without influencing your model.
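A minimal pandas sketch of both recommendations, assuming a column df['question5'] and that the skip-logic rows can be identified from the prior question df['question4'] (both names are hypothetical):
import pandas as pd

skip = df['question4'] == 'no'                    # hypothetical skip-logic condition
df.loc[skip, 'question5'] = 'not_applicable'      # case (1): keep as its own category
df = df.dropna(subset=['question5'])              # case (2): drop the truly missed answers
dummies = pd.get_dummies(df['question5'], prefix='question5')
df = pd.concat([df.drop(columns='question5'), dummies], axis=1)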
H: Deep Learning for non-continuous dataset
I am working with this dataset, which is a record of student academic details, and I want to predict the students' performance.
Since the dataset is non-continuous I cannot apply a CNN to it.
How can I apply deep learning to this kind of (non-continuous) dataset? I searched online but could not find anything relevant.
Thank you!!
AI: Deep Learning excels in problems where the data is relatively unstructured. Stacked layers help find conceptual features that can be used to infer rules.
Your dataset seems very structured at first glance. And, as you pose it, it doesn't look like it needs specialised layers that exploit sequential or spatial relations.
Neural-network wise, this would warrant one or two fully connected layers, connected to an output layer (shaped to your wishes). However, problems like these are typically tackled with more strongly biased approaches (e.g. decision tree learners).
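A hedged sketch of such a small fully connected network in Keras, assuming the tabular features are already numeric/encoded in X_train and that the target is a single performance score (adapt the output layer and loss to your label):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    Dense(32, activation='relu'),
    Dense(1)                       # regression head; use softmax units for classes
])
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=20, batch_size=32, validation_split=0.2)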
H: Genetic algorithms(GAs): to be considered only as optimization algorithms? Are GAs used in machine learning any way?
As a quick question, what are genetic algorithms meant to be used for? I read somewhere else that they should be used as optimization algorithms (similar to the way we use gradient descent to search for the best parameters of a linear regression, a neural network, ...).
If so, why are these GAs not more present in machine learning (or at least I did not see them much in the literature)?
AI: Yes, as you found, they (evolutionary algorithms such as genetic algorithms) are used for optimization tasks. One reason they are not used much in machine learning could be how slowly they converge to the optimum point. Also, implementing GAs for some domains can be problematic, and they cannot be generalized as easily as gradient descent, since a GA involves at least 5 phases (mutation, crossover, ...).
H: Difference between learning_curve and validation_curve
What is the difference between these two curves: learning_curve and validation_curve ?
AI: Both curves show the training and validation scores of an estimator on the y-axis.
A learning curve plots the score over varying numbers of training samples, while a validation curve plots the score over a varying hyper parameter.
The learning curve is a tool for finding out if an estimator would benefit from more data, or if the model is too simple (biased).
Above example shows the training curve for a classifier where training and validation scores converge to a low value. This classifier would hardly benefit from adding more training data; a more expressive model may be more appropriate.
The validation curve is a tool for finding good hyper parameter settings. Some hyper parameters (number of neurons in a neural network, maximum tree depth in a decision tree, amount of regularization, etc.) control the complexity of a model. We want the model to be complex enough to capture relevant information in the training data but not too complex to avoid overfitting.
Above example shows the validation curve over a support vector machine's gamma parameter. A too low value of gamma restricts the model too much; both, training and validation scores are very low. A high value of gamma causes overfitting: very good training score but low validation score. The optimal value is somewhere in the middle, where the curves do not diverge too much.
Image source: scikit-learn documentation |
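A minimal scikit-learn sketch of both curves, assuming X, y and an SVC estimator as in the plots above (hypothetical data):
import numpy as np
from sklearn.model_selection import learning_curve, validation_curve
from sklearn.svm import SVC

sizes, train_sc, val_sc = learning_curve(SVC(), X, y, cv=5,
                                         train_sizes=np.linspace(0.1, 1.0, 5))
train_sc2, val_sc2 = validation_curve(SVC(), X, y, param_name="gamma",
                                      param_range=np.logspace(-4, 1, 6), cv=5)
# plot the mean of each score array over the cv folds to reproduce the figures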
H: How to fine tuning VGG16 with my own layers
I want to keep the first 4 layers of VGG16 and add my own top layer. I have this example:
vgg16_model = VGG16(weights="imagenet", include_top=True)
# (2) remove the top layer
base_model = Model(input=vgg16_model.input,
output=vgg16_model.get_layer("block5_pool").output) #I wanna cut all layers after 'block1_pool'
# (3) attach a new top layer
base_out = base_model.output
base_out = Reshape(25088,)(base_out)
top_fc1 = Dropout(0.5)(base_out)
top_preds = Dense(1, activation="sigmoid")(top_fc1)
# (4) freeze weights until the last but one convolution layer (block4_pool)
for layer in base_model.layers[0:4]:
layer.trainable = False
# (5) create new hybrid model
model = Model(input=base_model.input, output=top_preds)
So in this example the network is cut at 'block5_pool', and I want to cut at 'block1_pool', but if I only change it to block1_pool it throws this error:
data_format = value.lower()
AttributeError: 'int' object has no attribute 'lower'
So how could I change it to cut in block1_pool, and then add my own dense layers?
FULL CODE
#import tensorflow as tf
import cv2
import os
import numpy as np
from keras.layers.core import Flatten, Dense, Dropout, Reshape
from keras.models import Model
from keras.layers import Input, ZeroPadding2D, Dropout
from keras import optimizers
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import EarlyStopping
from keras.applications.vgg16 import VGG16
TRAIN_DIR = 'train/'
TEST_DIR = 'test/'
v = 'v/'
BATCH_SIZE = 32
NUM_EPOCHS = 5
def crop_img(img, h, w):
h_margin = (img.shape[0] - h) // 2 if img.shape[0] > h else 0
w_margin = (img.shape[1] - w) // 2 if img.shape[1] > w else 0
crop_img = img[h_margin:h + h_margin,w_margin:w + w_margin,:]
return crop_img
def subtract_gaussian_blur(img):
return cv2.addWeighted(img, 4, cv2.GaussianBlur(img, (0, 0), 5), -4, 128)
def ReadImages(Path):
LabelList = list()
ImageCV = list()
classes = ["nonPdr", "pdr"]
# Get all subdirectories
FolderList = [f for f in os.listdir(Path) if not f.startswith('.')]
# Loop over each directory
for File in FolderList:
for index, Image in enumerate(os.listdir(os.path.join(Path, File))):
# Convert the path into a file
ImageCV.append(cv2.resize(cv2.imread(os.path.join(Path, File) + os.path.sep + Image), (224,224)))
#ImageCV[index]= np.array(ImageCV[index]) / 255.0
LabelList.append(classes.index(os.path.splitext(File)[0]))
img_crop = crop_img(ImageCV[index].copy(), 224, 224)
ImageCV[index] = subtract_gaussian_blur(img_crop.copy())
return ImageCV, LabelList
data, labels = ReadImages(TRAIN_DIR)
valid, vlabels = ReadImages(TEST_DIR)
vgg16_model = VGG16(weights="imagenet", include_top=True)
# (2) remove the top layer
base_model = Model(input=vgg16_model.input,
output=vgg16_model.get_layer("block1_pool").output)
print(base_model)
# (3) attach a new top layer
base_out = base_model.output
base_out = Reshape(25088,)(base_out)
top_fc1 = Dropout(0.5)(base_out)
# output layer: (None, 5)
top_preds = Dense(1, activation="sigmoid")(top_fc1)
# (4) freeze weights until the last but one convolution layer (block4_pool)
for layer in base_model.layers[0:4]:
layer.trainable = False
# (5) create new hybrid model
model = Model(input=base_model.input, output=top_preds)
# (6) compile and train the model
sgd = SGD(lr=1e-4, momentum=0.9)
model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=["accuracy"])
data = np.asarray(data)
valid = np.asarray(valid)
data = data.astype('float32')
valid = valid.astype('float32')
data /= 255
valid /= 255
labels = np.array(labels)
datagen = ImageDataGenerator(
featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(data)
mean = datagen.mean
std = datagen.std
print(mean, "mean")
print(std, "std")
es = EarlyStopping(monitor='val_loss', verbose=1)
# fits the model on batches with real-time data augmentation:
model.fit_generator(datagen.flow(data, np.array(labels), batch_size=32),
steps_per_epoch=len(data) / 32, epochs=15,
validation_data=(valid, np.array(vlabels)),
nb_val_samples=72, callbacks=[es])
model.save('model.h5')
FULL ERROR
base_out = Reshape(25088,)(base_out)
self.target_shape = tuple(target_shape)
TypeError: 'int' object is not iterable
AI: The solution is to include a Flatten layer in the model:
base_out = base_model.output
top_fc1 = Flatten()(base_out)
top_fc2 = Dropout(0.5)(top_fc1)
top_preds = Dense(1, activation="sigmoid")(top_fc2)
Now it works! |
H: Anomaly detection thresholds issue
I'm working on an anomaly detection development in Python.
More in details, I need to analysed timeseries in order to check if anomalies are present.
An anomalous value is typically a peak, so a value very high or very low compared to other values.
The main idea is to predict timeseries values and, using thresholds, detect anomalies.
Thresholds are calculated using the error, that is the real values minus the predicted ones.
Then, mean and standard deviation of the error are performed.
The upper threshold is equals to mean + (5 * standard deviation).
The lower threshold is equals to mean - (5 * standard deviation).
If the error exceeds thresholds is marked as anomalous.
What doesn't work with this approach is that if I have more than one anomalous value in a day, they are not detected. This is because the error, mean and standard deviation are too strongly influenced by the anomalous values.
How can I fix this problem? Is there another method that I can use to identify thresholds without this issue?
Thank you
AI: Instead of mean and standard deviation, you could estimate the median and mean absolute deviation. The median is immune to outliers, and the MAD should be at least more robust than the standard deviation formula.
You will probably have to change your critical value to something other than 5 to get the same kind of coverage. According to Wikipedia, you'll want the new critical value to be $5\sqrt{\frac{\pi}{2}}$ if your data are iid Gaussian.
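A minimal sketch of such robust thresholds on the prediction errors, assuming errors is the array of residuals described in the question:
import numpy as np

med = np.median(errors)
mad = np.median(np.abs(errors - med))        # median absolute deviation
k = 5 * np.sqrt(np.pi / 2)                   # adjusted critical value (Gaussian case)
upper, lower = med + k * mad, med - k * mad
anomalies = (errors > upper) | (errors < lower)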
An alternative that might be more difficult to implement, but is probably more statistically appropriate, is to use trimmed estimators for the mean and standard deviation. With trimmed estimators, you throw away the most extreme values in your data (the proportion of which is specified beforehand), and estimate your statistics on the remaining data.
The estimator for the mean would be the truncated mean, and the Wikipedia page for trimmed estimators mentions how to get a decent estimator for the standard deviation from the interquartile range.
I hope this helps! |
H: Grid search or gradient descent?
Assume we have a neural network and one of its activation functions is a function of a parameter a. We want to find the weights and the parameter a that lead to the minimum loss on the validation set. Which one is better?:
Treat a as a hyperparameter. Do grid search for a: consider a range for a and evaluate the loss for a few points in the range. Pick the best a in the range along with the weights leading to the minimum loss.
Treat a as a parameter. Since the loss of the network is a function of parameter a, use gradient descent to update weights as well as parameter a at each iteration.
To me, the second option is better since it can lead to the optimal point, whereas the grid cannot: it can only lead to points around the optimum, depending on your selection. But is the second method right?
Is there a better method?
Lastly, in grid search, how do we pick the range?
AI: The main distinction to be made here is between a parameter and a hyperparameter; once we have this clarified, the rest is easy: grid search is not used for tuning the parameters (only the hyperparameters); the parameters are tuned with gradient descent.
Now, roughly speaking, a parameter is something that changes during training; in a neural network, the only parameters are the weights and the biases, and they are tuned with gradient descent.
A hyperparameter can be thought of as something "structural", e.g. the number of layers, the number of nodes for each layer (notice that these two determine indirectly also the number of parameters, i.e. how many weights and biases there are in our model), i.e. things that do not change during training. Hyperparameters are not confined to the model itself, they are also applicable to the learning algorithm used (e.g. optimization algorithm, learning rate, etc). A specific set of hyperparameters defines a family of models, which differ among themselves in the exact values of their parameters; in contrast, a specific parameter set (e.g. weights & biases in a NN) defines a unique model.
Having clarified the above, it should be easy to see that a in your example above is a hyperparameter and not a parameter; as such, it would normally be optimized using grid search.
To me, the second option is better since it can lead to the optimal point
Not so fast; gradient descent is not applicable everywhere: there are mathematical prerequisites for a function in order to be eligible for optimization using gradient descent, i.e. to be continuous & differentiable. This is indeed the case for the loss as a function of the weights & biases, but not clear if it is also the case for the loss as a function of your parameter a. And if it isn't, we simply cannot use gradient descent for tuning a.
On the contrary, techniques like grid search do not have such prerequisites, thus they can be used for a much wider range of cases.
in grid search, how do we pick the range?
As we pick almost everything else in ML & DL: empirically, and with trial & error (there is actually not much theory behind it).
UPDATE (after comment):
Now assume that the prerequisites are met for a so that we can use gradient descent. Which one is better now? Grid search or gradient descent?
Sorry, I didn't realize that this was a "Superman vs. Batman" question... Well, gradient descent is a disciplined mathematical operation, which is guaranteed to find the global minimum for convex functions (although NN loss functions are not convex), while grid search is actually just an ad hoc, quick-and-dirty procedure which does not guarantee anything (as you have already correctly suspected), so...
Beware though, especially if this is part of any homework or exam: the question, despite its phrasing, is not actually asking you which one is "better" in an abstract & hypothetical context, but which one is better in a very specific context, i.e a neural network model; and if something is not even applicable, it cannot be actually better, right?
In other words, if someone is actually testing your ML & NN knowledge here, they are certainly more interested in the exposition above, rather than which one is "better" in general; and choice (2) would be a huge mistake (after all, the assumption that the NN loss is a continuous & differentiable function of a is not part of the question).
H: Back-propagation and stochastic gradient descent
Is backpropagation a learning method or an optimisation method?
How are backpropagation and stochastic gradient descent related to each other?
AI: Stochastic Gradient Descent (SGD) is an optimization method. As the name suggests, it depends on the gradient of the optimization objective.
Let's say you want to train a neural network. Usually, the loss function $L$ is defined as a mean or a sum over some "error" $l_i$ for each individual data point like this
$$ L(\theta) = \frac{1}{N} \sum_{i=1}^N l_i(\theta)$$
where $N$ is the number of data points and $\theta$ the model parameters.
For SGD you would randomly sample $i$ at each time step $t$ and do
$$ \theta_{t+1} = \theta_{t} - \alpha \nabla_{\theta}l_i(\theta_t)$$
with some learning rate $\alpha$ and the gradient with respect to the model parameters $\nabla_{\theta}$.
Backpropagation is now used to compute the gradient $\nabla_{\theta}l_i(\theta_t)$. As $l_i$ depends on a neural network with parameters $\theta$, this is not necessarily straight-forward, but it can be done quite efficiently using the chain rule in a smart way. This involves recursively computing the gradient of parameters in some layer using the gradients from higher layers, i.e. the gradients are computed starting at the network output and moving backwards. Hence the name backpropagation.
The wikipedia article on backpropagation goes through the math in a detailed manner. |
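To make the two roles concrete, here is a minimal NumPy sketch of the SGD update for a simple linear model on toy data; the per-sample gradient computed in the loop is exactly what backpropagation would deliver in a deep network:
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)   # toy data
theta, alpha = np.zeros(3), 0.01

for t in range(1000):
    i = rng.integers(len(X))                  # sample one data point
    grad = 2 * (X[i] @ theta - y[i]) * X[i]   # gradient of the per-sample squared loss
    theta -= alpha * grad                     # SGD step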
H: LSTM fot text classification always returns the same results
Hello fellow Data Scientists,
I'm trying to make a classifier that is supposed to classify sequences of text into some predefined classes, but I always get the same output. Can anyone help me understand why?
The training of the model:
# The maximum number of words to be used. (most frequent)
MAX_NB_WORDS = 100
#2155
# Max number of words in each complaint.
MAX_SEQUENCE_LENGTH = 100
# This is fixed.
EMBEDDING_DIM = 20
cf.go_offline()
cf.set_config_file(offline=False, world_readable=True)
def treina(model_name):
df = pd.read_csv("divididos.csv",sep='§',header=0)
df.info()
max_len = 0
for value in df.Perguntas:
if(len(value)>max_len):
max_len = len(value)
max_words = 0
for value in df.Perguntas:
word_count = len(value.split(" "))
if(word_count>max_words):
max_words = word_count
tokenizer = Tokenizer(num_words=MAX_NB_WORDS, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True)
tokenizer.fit_on_texts(df['Perguntas'].values)
word_index = tokenizer.word_index
X = tokenizer.texts_to_sequences(df['Perguntas'].values)
X = pad_sequences(X, maxlen=MAX_SEQUENCE_LENGTH)
Y = pd.get_dummies(df['Class']).values
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size = 0.05, random_state = 42)
print(X_train.shape,Y_train.shape)
print(X_test.shape,Y_test.shape)
#Balance data
sm = SMOTE(random_state=12)
X_train, Y_train = sm.fit_sample(X_train, Y_train)
print(X_train.shape,Y_train.shape)
#LSTM net
model = Sequential()
model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1]))
model.add(LSTM(20, dropout=0.2, recurrent_dropout=0.2,activation="relu",return_sequences=True))
model.add(LSTM(10, dropout=0.2, recurrent_dropout=0.2,activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(11, activation='softmax'))
opt = adam(lr=0.3)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
epochs = 100
batch_size = 20
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,validation_split=0.1)
accr = model.evaluate(X_test,Y_test)
print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1]))
model.save(model_name)
return model
and the testing:
def corre(modelo):
labels = ["a","b","c","d","e","f","g","h","i","j","k"]
model = load_model(modelo)
a = 0
tokenizer = Tokenizer(num_words=MAX_NB_WORDS, filters='!"#$%&()*+,-./:;<=>?@[\]^_`{|}~', lower=True)
while (a==0):
new_complaint = input()
new_complaint = [new_complaint]
seq = tokenizer.texts_to_sequences(new_complaint)
padded = pad_sequences(seq, maxlen=MAX_SEQUENCE_LENGTH)
pred = model.predict(padded)
print(pred, labels[np.argmax(pred)])
Thank you for your time
AI: You are not using the same Tokenizer at test time that you fitted during training; the new Tokenizer has never been fit on any texts, so texts_to_sequences does not output the required sequences.
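A hedged sketch of persisting the fitted tokenizer at training time and reusing it for prediction (the file name is arbitrary):
import pickle

# in treina(), after tokenizer.fit_on_texts(...):
with open('tokenizer.pkl', 'wb') as f:
    pickle.dump(tokenizer, f)

# in corre(), instead of creating a fresh Tokenizer:
with open('tokenizer.pkl', 'rb') as f:
    tokenizer = pickle.load(f)
seq = tokenizer.texts_to_sequences(new_complaint)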
H: Building an efficient feature vector
I am building a classifier for malware analysis, which predicts whether I have malware by looking at the instructions of an assembly code, such as push, mov, ..., and predicting the optimization method. Note that I am considering a json file.
#pakages
import numpy as np
import pandas as pd
import json as j
import re
import nltk
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import *
from sklearn.metrics import confusion_matrix, classification_report
from sklearn import svm
#for visualizing data
import matplotlib.pyplot as plt
import seaborn as sns; sns.set(font_scale=1.2)
%matplotlib inline
json_data = None;
with open('training_dataset.jsonl') as data_file:
lines = data_file.readlines()
joined_lines = "[" + ",".join(lines)+"]"
json_data = j.loads(joined_lines)
data = pd.DataFrame(json_data)
data.head()
myList = [];
for value in data['instructions'].iteritems():
myList.extend(list(value[1]))
opcodes = [instruction.split()[0] for instruction in myList]
vect = CountVectorizer()
x = vect.fit_transform(opcodes)
a =vect.vocabulary_
X = list(a.values())
X_all = np.array(X).reshape(-1,1)
Y = list(data['opt'])
MlistY = Y[ :395]
y_all = np.array(MlistY)
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all,
test_size=0.2, random_state=15)
from sklearn.svm import SVC
model = SVC()
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
model.score(X_test,y_test)
So, what I did is a feature extraction where I counted the number of times each instruction such as push, mov, ... appears in the training set, and used these counts as feature vectors. After that I had to cut the column data['opt'] so that it has the same number of elements as X_all. Then I split the dataset and used a support vector machine as the model.
My problem is that the accuracy is very low, in fact it is: 0.4810126582278481
I think this method I just used is called bag of words, but it is not very efficient for my case.
I think this is due to the fact that the method I used to extract the features is very inefficient.
My idea is to try to do a vectorization such that I assign to each operator a number, for example:
push -->0
mov -->1
jmp -->2
edx -->3
and so on, and build a feature vector like this. But I would also like to keep track of the order in which the operators occur inside the feature vector.
Is there a way to do this?
I have not found a specific vectorizer that does this, so is there a way for doing this type of vectorization?
Thank's in advance.
[EDIT] To create such feature vector where I keep the order I tried the following:
opcodes_ordered = pd.factorize(opcodes)
opcodes_ordered_true = opcodes_ordered[0]
opcodes_ordered_true
which returns : array([ 0, 0, 0, ..., 22, 3, 5], dtype=int64)
Now I create the feature vector and define a model:
X_all_2 = opcodes_ordered_true.reshape(-1,1)[:30000] #had to cut the vector
#because y has 30000
# elements
y_all_2 = list(data['opt'])
X_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(X_all_2,
y_all_2, test_size=0.2, random_state=15)
model_2 = SVC(kernel = 'sigmoid',gamma = 1.0)
model_2.fit(X_train_2,y_train_2)
y_pred_2 = model_2.predict(X_test_2)
print(confusion_matrix(y_test_2, y_pred_2))
print(classification_report(y_test_2, y_pred_2))
model_2.score(X_test_2,y_test_2)
but accuracy is still very low, in fact I have an accuracy of :
0.4841666666666667
I don't know what to do now.
[EDIT] I also tried to reduce the number of features, but by doing so I only got a small improvement.
[EDIT 2] What also I have tried to do is the following:
opcodes_ordered = pd.factorize(opcodes)
opcodes_ordered_true = opcodes_ordered[0]
opcodes_ordered_true
which gives as output : array([ 0, 0, 0, ..., 22, 3, 5], dtype=int64)
X_all_2 = opcodes_ordered_true.reshape(-1,1)[:1000]
y_all_2 = list(data['opt'])[:1000]
X_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(X_all_2,
y_all_2, test_size=0.2, random_state=15)
model_2 = SVC(kernel = 'linear',gamma = 1.0)
model_2.fit(X_train_2,y_train_2)
y_pred_2 = model_2.predict(X_test_2)
print(confusion_matrix(y_test_2, y_pred_2))
print(classification_report(y_test_2, y_pred_2))
model_2.score(X_test_2,y_test_2)
but I get as accuracy : 0.56
which is still low. Does anybody know how I could get better accuracy? Thanks in advance.
[EDIT 3] I don't know if I am doing it correctly, but to see if the dataset is balanced or not, I looked at how many times in the dataset I have optimization high (H) and optimization low (L), which is also what I would like to predict for new samples.
Sorry if I am not really precise but I just started with machine learning.
What I did is the following:
Y = list(data['opt'])
MlistY = Y
MlistY.count('L')
which returns : 17924
MlistY.count('H')
which returns: 12076
Moreover I have also tried to use TfidfVectorizer and what I did is:
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(smooth_idf=False, sublinear_tf=False,norm=None,
analyzer='word')
x = vectorizer.fit_transform(opcodes)
a = vectorizer.vocabulary_
X = list(a.values())
X_all = np.array(X).reshape(-1,1)
Y = list(data['opt'])[:395]
MlistY = Y
y_all = np.array(MlistY)
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all,
test_size=0.3, random_state=15)
from sklearn.svm import SVC
model = SVC(kernel = 'linear',C= 1)
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
model.score(X_test,y_test)
and in this case the accuracy is : 0.5294117647058824
Moreover, if I print the classification report I find this:
in particular this is for the case of TfidfVectorizer.
AI: J.D., welcome to Data Science Exchange.
First, let's start with the following question: based on what target distribution are you saying that your accuracy is low? Every modelling problem (regression, binary classification, multiclass classification) has a baseline against which we can say that our model has an acceptable performance. Sometimes, even humans can be our baseline. Also, what is your dataset size?
Second, do you have a balanced dataset? What I mean is, after you calculate your baseline, you might have a class, let's say no malware, that dominates your dataset. You should address this by using methods for imbalanced datasets such as oversampling and undersampling. You can read more about it here.
Lastly, let's ignore everything I wrote and focus on the problem. Yes, you are correct, Bag of Words will ignore order in your data since it will just count the appearance of each word. I will list a few things you can try:
You are using SVC class from sklearn. From docs I see the default kernel is rbf, have you tried using linear? Also, you can use LinearSVC.
Try out RandomForest models, they perform really good even in text datasets.
With the CountVectorizer class, you could vary the ngram_range parameter. Basically, it will create features based on n-grams, so let's say you use a 3-gram approach; then for your first row you will have something like push_r12_push_rbp counting as one feature.
Also, you could try the Tf-Idf vectorizer. TF-IDF is based on an algorithm where not only the count of words is taken into account but also their appearance across documents. Putting it in simple terms, if a specific word appears in too many documents of your dataset, its inverse document frequency will be low and decrease its feature value, since this word will not be useful to differentiate your classes.
You can read more about it here: https://www.quora.com/How-does-TF-IDF-work
Lastly, but not less important, you could try using a more advanced technique, WordEmbeddings, for example. It is an algorithm that will create real vectors from your text taking in consideration the enclosing words for each command. It is a little more complicate than that, again, you can learn more here.
Note that for word embeddings to work properly, you should not have a small dataset. As a code example you can use this notebook of mine as a guide.
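As a rough sketch of the linear SVM, n-gram and tf-idf suggestions above (this assumes each sample's instruction list is joined into one string of opcodes, so that one row of the dataframe becomes one document):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

docs = [" ".join(instr.split()[0] for instr in row) for row in data['instructions']]
X_train, X_test, y_train, y_test = train_test_split(docs, data['opt'],
                                                    test_size=0.2, random_state=15)
vect = TfidfVectorizer(ngram_range=(1, 3))
clf = LinearSVC()
clf.fit(vect.fit_transform(X_train), y_train)
print(clf.score(vect.transform(X_test), y_test))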
I hope this helps.
H: Frequency of occurrence - dummy variables
This is not the first time I have thought about this: if I have a variable that I want to convert later into dummy variables (cities in this case), should I delete rows whose value occurs less often than N times?
For example, the value New York has occurred 400+ times, but there are cities that only appeared once or twice.
What should I do with values that have appeared only once or twice?
print(df[cities].value_counts())
Output:
city1 424
city2 107
city3 35
city4 33
city5 28
city6 24
city7 15
city8 7
city9 4
city10 3
city11 2
city12 1
city13 1
city14 1
city15 1
city16 1
city17 1
AI: There's no general rule that can apply to all cases, and there's a lot of context missing in your post to say anything conclusive.
Having said that, I think that a good approach is to treat each of the cities with a lot of occurrences on its own, and then group all others under an 'other' category.
Going further, you could have multiple 'other' groups, grouped by various criteria, for example, geographical criteria, or anything that makes sense in your context.
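A minimal pandas sketch of that grouping, assuming the city column is named 'cities' and a cutoff of N = 5 occurrences (both assumptions):
import pandas as pd

counts = df['cities'].value_counts()
rare = counts[counts < 5].index
df['cities_grouped'] = df['cities'].where(~df['cities'].isin(rare), 'other')
dummies = pd.get_dummies(df['cities_grouped'], prefix='city')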
Hope this helps. |
H: What expresses if/how two variables are dependent on each other?
The accepted answer to question Why is a correlation matrix symmetric? includes this:
The correlation matrix is a measure of linearity. It does not express how two variables are dependent on each other.
My question is : What is there that is related to if/how two variables are dependent on each other?
Edit 1 : When asking this question, I am not considering anything related to correlation; I only want to know if there are ways to see whether two or more variables depend on each other or not, irrespective of whether there is high or no correlation between them.
I used the correlation tag because I could not think of what tag to use, please feel free to edit/correct the tag.
AI: There are two senses in which one might understand the sentence that correlation does not express dependence. One is, as stated in the answer above, that correlation is symmetric: it doesn't have an arrow and can't be used to show that one variable has a causal effect on the other. In general, there isn't any way in which to determine causality merely from probability distributions. A good reading on the subject of causal reasoning and its relation to ML is https://www.inference.vc/untitled/
The other sense might be that correlation only captures linear relations. In this sense there is a more general measure, Mutual Information, but it requires knowledge of the probability distribution and not only a set of observations: https://en.wikipedia.org/wiki/Mutual_information
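In practice there are also sample-based estimators of mutual information; a small sketch with scikit-learn's k-nearest-neighbour estimator on made-up data, where the dependence is nonlinear, so the correlation is near zero but the estimated mutual information is clearly positive:
import numpy as np
from sklearn.feature_selection import mutual_info_regression

x = np.random.uniform(-3, 3, 1000)
y = x ** 2 + np.random.normal(scale=0.1, size=1000)   # nonlinear dependence on x
print(np.corrcoef(x, y)[0, 1])                        # close to 0
print(mutual_info_regression(x.reshape(-1, 1), y))    # clearly greater than 0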
H: Is (manual) feature extraction outdated?
I recently attended a PhD thesis defence in which one committee member claimed that "manual feature extraction is outdated. Nowadays, we have [deep] machine learning models doing that job for us automatically."
Is this statement true? If yes, please provide a reference substantiating this claim.
Edit: Apparently, there seem to be different answers depending on the data type. Thus, please let me know about any references substantiating your claims for images, time series, etc... separately.
AI: In the general case, this is by no means true. Let's break down the case for different data scenarios:
For discriminative image models (e.g. image classification/labeling) this is true for some scenarios. You just throw some convnets (even pretrained models) at your data, and that's it. Nevertheless, convnets themselves profit from the "expert knowledge" that information locality is important and so is hierarchical information processing. For some other scenarios, applying domain knowledge (e.g. specific data transformations) may give the edge to reach the needed level of quality in the results.
For many image processing problems, neural networks work best when infused with some kind of inductive bias, e.g. attention.
For Natural Language Processing (NLP) problems, a good amount of craftsmanship is needed nowadays, especially in the data preprocessing stage.
For "typical data science" problems, it is also crucial to do feature extraction. You can have a look at Kaggle competitions to verify this.
For time series problems, it is also normal to rely on expert knowledge to understand which models fit best based on the nature of the data.
However, I think that the trend of the areas where deep learning is applicable (i.e. tons of available data) is to try to devise systems that are trained end-to-end, with the least possible ad hoc processing. Nevertheless, many times this is achieved by infusing the expert knowledge into the network in the form of inductive biases. |
H: How can I increase my accuracy avoiding overfitting? CNN-Keras-VGG16
As I asked in this question: Why are my predictions bad, if my accuracy in train is roughly 100% (Keras CNN), my problem was overfitting, so I reduced the number of layers, and now I have this model:
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Dense, Reshape
from keras.optimizers import SGD

vgg16_model = VGG16(weights="imagenet", include_top=True)
# (2) remove the top layer
base_model = Model(input=vgg16_model.input,
output=vgg16_model.get_layer("block5_pool").output)
# (3) attach a new top layer
base_out = base_model.output
base_out = Reshape((25088,))(base_out)
# output layer: (None, 5)
top_preds = Dense(1, activation="sigmoid")(base_out)
# (4) freeze weights until the last but one convolution layer (block4_pool)
for layer in base_model.layers[0:14]:
layer.trainable = False
# (5) create new hybrid model
model = Model(input=base_model.input, output=top_preds)
# (6) compile and train the model
sgd = SGD(lr=1e-4, momentum=0.9)
model.compile(optimizer=sgd, loss="binary_crossentropy", metrics=["accuracy"])
But when I predict some images, the class 0 accuracy is roughly 96%, while the accuracy of class 1 is roughly 58%. So how can I increase the accuracy without overfitting?
I've trained my model with 700 images per class, and for testing I have 50 images per class.
AI: Keras has support for image preprocessing. You can apply lateral translations and shears to your images, which alters them, but since the content stays the same you have more labeled data to train on. This is known as data augmentation.
Another thing you can do is introduce dropout in your network. This randomly drops nodes in your network based on the probability you provide, and is done to prevent neurons from depending too strongly on each other.
Both of these were introduced in this paper. You can go through the reducing-overfitting section to get a better idea and understanding.
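A minimal sketch of both ideas in Keras, reusing the base_out and top_preds names from your code (the augmentation parameters and the dropout rate are hypothetical starting points to tune):
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout

# Data augmentation: randomly shift, shear and flip the training images
datagen = ImageDataGenerator(width_shift_range=0.1,
                             height_shift_range=0.1,
                             shear_range=0.1,
                             horizontal_flip=True)
# model.fit_generator(datagen.flow(X_train, y_train, batch_size=32), ...)

# Dropout: e.g. drop 50% of the activations before the final Dense layer
base_out = Dropout(0.5)(base_out)
top_preds = Dense(1, activation="sigmoid")(base_out)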
H: Why is pandas corr() deleting columns?
I'm doing a basic correlation analysis but for some reason pandas corr() is deleting columns, not sure why.
import pandas as pd
data = pd.read_csv("data.csv")
print(len(data.columns))
print(len(data.corr().columns))
Output:
100
64
AI: Pearson's correlation is the default correlation used by the Pandas corr method.
Categorical (non-numerical) features are ignored during this process because they are not continuous. It makes no sense to say that if categorical_var1 increases by one, categorical_var2 also increases by X (where X depends on the correlation between the two variables).
That's why you only see numerical variables! There are other statistical tests you can apply to categorical variables to better understand them.
Note: some columns may appear numerical at first glance, but a string may have slipped in due to an input mistake, or the column type may simply have been set to 'object' when the file was parsed. Make sure to check the values in your supposedly numerical columns and apply astype to convert them back to int or float.
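A quick way to see which columns corr() is dropping and to coerce a mistyped one (a sketch; 'some_col' is a hypothetical column name):
import pandas as pd

print(data.dtypes)  # 'object' columns are the ones excluded by corr()
print(set(data.columns) - set(data.corr().columns))  # exactly which columns were dropped

# Coerce a column that should be numeric; invalid values become NaN
data['some_col'] = pd.to_numeric(data['some_col'], errors='coerce')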
H: What is a latent space vector?
I do not understand this about GANs.
Apparently the Generator is supposed to receive a latent space vector as its input. Yet I couldn't find an example of how I can implement it in Pytorch. This is a problem for me, because different posts suggest different approaches.
Is it simply an image of Gaussian noise which is then served as an input to Generator's Convolutional Neural Network or is it a one-dimensional array passed through a Fully-Connected layer to the Generator?
AI: The latent vector $z$ is just random noise.
The most frequent distributions for that noise are uniform: $z \sim U[-1,+1]$ or Gaussian: $z \sim \mathcal{N}(0, 1)$ . I am not aware of any theoretical study about the properties derived from different priors, so I think it's a practical choice: choose the one that works best in your case.
The dimensionality of the noise depends on the architecture of the generator, but most of the GANs I've seen use a unidimensional vector of length between 100 and 256.
In PyTorch, torch.rand (or torch.Tensor.uniform_) and torch.randn can respectively be used to generate uniform and Gaussian noise; torch.rand samples from $U[0,1)$, so you can rescale it to $[-1,+1]$.
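A minimal sketch (the batch size and the latent dimension of 100 are arbitrary choices):
import torch

batch_size, latent_dim = 64, 100
z_gaussian = torch.randn(batch_size, latent_dim)  # N(0, 1) noise
z_uniform = torch.rand(batch_size, latent_dim) * 2 - 1  # U[-1, +1] noise
# fake_images = generator(z_gaussian)  # 'generator' is your generator network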
H: use .apply() function to change values to a column of the dataframe
I have a dataframe which is the following:
and I would like to consider only the instructions column and keep just the values push, test, mov, test, ..., i.e. just the first word of each string inside each list. What I am doing is the following:
dataFrame['opcodes'] = dataFrame['instructions'].apply(instruction.split()[0] for instruction in dataFrame['instructions'])
but I get the following message:
TypeError: 'generator' object is not callable
so, my objective is to change the values only of the column instructions.
Can somebody please help me? Thanks in advance.
AI: Given you dataframe is data, use the below apply() function:
For column with list of words separated by space:
data['New_instructions'] = data['instructions'].apply(lambda x: [i.split()[0].strip() for i in x])
For column with single list word:
data['New_instructions'] = data['instructions'].apply(lambda x: x.split()[0].strip()) |
H: Structure of LSTM gates
It is my impression that a single-layer LSTM architecture consists of $t$ LSTM cells that are identical duplicates, where $t$ is the number of time steps. Then there are gates within the LSTM cell. I have struggled to find a rigorous explanation of what each “gate” actually consists of. Is each gate simply a feed-forward neural network whose output is squashed through either a sigmoid or a tanh, depending on which gate it is?
Thanks
AI: Is each gate simply a feed-forward neural network whose output is squashed through either a sigmoid or a tanh, depending on which gate it is?
Close. Each gate is an activation function (typically sigmoid for the actual gates, but tanh is used for other functions within the cell) over a weighted sum of all inputs to the cell's layer. Although some readings of the diagrams may imply that LSTM cells process a few single inputs in isolation, recombined later, in fact the inputs to each cell are the whole previous layer and the whole previous timestep's cell states for the whole layer. This is similar to the view of a single neuron in a fully-connected feed-forward network.
Each gate in each cell has its own set of weights - one weight for each input from previous layer plus one weight for each cell state in the same layer, plus a single bias. When you combine these cells into a layer of multiple cells (e.g. when you choose to have an LSTM layer with 64 cells), this results in a separate matrix plus bias for each gate. In descriptions of LSTMs, these different matrices are often named for the type of gate they parametrise, so e.g. there will be a matrix plus bias for the layer's "forget" gate, which might be noted as $\mathbf{W}_f$.
If you have $N_i$ inputs, and $N_c$ LSTM cells in a single LSTM layer, then $\mathbf{W}_f$ will be a $N_c \times (N_i + N_c)$ matrix of weights, and so will all the other parameter matrices describing other gates and value calculations for the combined cells.
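For concreteness, the standard forget-gate computation for the whole layer is $f_t = \sigma(\mathbf{W}_f [\mathbf{h}_{t-1}, \mathbf{x}_t] + \mathbf{b}_f)$, where $[\mathbf{h}_{t-1}, \mathbf{x}_t]$ is the concatenation of the layer's outputs from the previous time step and the current input, and $\sigma$ is the sigmoid. The input and output gates have exactly the same form with their own weight matrices and biases.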
In practice the calculations don't need to be handled per cell, but calculations over a layer of cells can be vectorised, and it is more like having a few parallel fully connected layers that combine in various ways to generate output plus next cell states.
There would be nothing stopping you extending this architecture and making any single gate or value calculation deeper by giving its own hidden layers. I suspect this has been tried by researchers, but cannot find any references. But without this customisation, the gates in standard LSTM are more like logistic regression over concatenated input and cell state. |
H: Finding out which values lead Random tree to a decision
I have a dataset of machines that produce plastic parts. A camera evaluates whether a plastic part was produced correctly or not (binary classification). I'm trying to figure out which factors influence a part being wrongly produced. E.g. I have different temperature values of the machine parts during the production.
I'm using a Random Forest to classify the data. The test dataset is being recognized quite well. The next step is to figure out which values lead to a wrongly produced part (e.g. when temperature > 150K: Part is broken). I've searched the internet but I couldn't find any information about this.
At the moment I'm trying a brute force method where I simply generate a test dataset where I go through different value ranges. But so far everything is classified as wrongly produced part.
Are there other methods I can use to get the values?
Thank you!
AI: If your dataset was separable into a series of neat decisions, then a classification and regression tree (CART) would give you the type of solution you're looking for. The price you pay for a random forest is that by reducing the variance through generating many random trees, you also reduce the interpretability of the model significantly. Methods that can give you local explanations are LIME and SHAP values; you can also compute feature importances, but those are importances in the context of the model and may not be useful for the type of decision you're looking to make.
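A minimal sketch with the shap package (assuming your fitted random forest is called rf and your feature matrix is X):
import shap

explainer = shap.TreeExplainer(rf)  # works for tree ensembles such as random forests
shap_values = explainer.shap_values(X)  # per-sample, per-feature contributions
shap.summary_plot(shap_values, X)  # e.g. which temperature ranges push parts towards "broken"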
H: Get elements from lists in pandas dataframe
I have the following column of a data frame:
I get it by doing dataFrame['opcodes'].
and I would like to consider only the first 20 and the last 20 elements of each list. Is there a way to do this?
I have tried to do the following:
dataFrame['opcodes_modified'] = dataFrame['opcodes'].apply(lambda x: x[:20].append(x[-20:])
but i get the following error message:
SyntaxError: unexpected EOF while parsing
Can somebody help me? Thanks in advance.
AI: Your logic is correct; it's just the lambda function that is slightly wrong (the SyntaxError itself comes from a missing closing parenthesis on that line). Also, .append() adds a single element to the end of the list — so in your case it would add one element which is itself a list of 20 elements — and it returns None, so the lambda would not return the combined list. You could build the list in two steps using .extend(), or you can simply write
lambda x: x[:20] + x[-20:] |
H: Replace entire columns in pandas dataframe
I would like to replace entire columns of a pandas dataframe with other columns, for example:
and I would like to replace the columns A and B. What I did is the following:
df['A']=dataFrame['opcodes'].values
df['B']=dataFrame['opt'].values
or also
df['A']=dataFrame['opcodes']
df['B']=dataFrame['opt']
but it does not work. In particular I get the following error:
KeyError: 'opcodes'
in both cases. Can anyone help me? Thanks in advance.
[EDIT] The original dataframe is the following:
on which I did the following modification:
dataFrame['opcodes'] = dataFrame['instructions'].apply(lambda x: [i.split()[0] for i in x])
Now I would like to define another dataframe in which only the 'opt' column and the 'opcodes' column appear.
AI: You are getting a KeyError because 'opcodes' doesn't exist in dataFrame. Can you show us the contents of dataFrame not just df?
EDIT:
You don't need to create an empty DataFrame and then apply the data across. Simply do:
df = dataFrame[["opt", "opcodes"]].copy() |
H: How do I use multilevel regression models?
I have crime event data rows:
dayofweek1, region1, hour1, crimetype1
dayofweek2, region2, hour2, crimetype2 ...
and I want to use them as factors to model crime rates/probabilities at the region level.
I also want to use the resulting model to be able to input factor values to produce crime probabilities. e.g. on Sunday, in region1 there is a .03 chance of burglary at 3pm.
I think I should use a multilevel model, but everything I have found assumes a y value at the individual data row level which I do not have. All the row data are crimes.
Does anyone have an example of such a model (obviously not necessarily crime data, and preferably using python)?
Can the prediction bit be done?
AI: I think what you're looking for is Survival analysis.
From Wikipedia:
Survival analysis is a branch of statistics for analyzing the expected duration of time until one or more events happen, such as death in biological organisms and failure in mechanical systems.
In your case, you'd like to predict the time a crime will occur in a given region.
Here is a good talk about survival analysis in Python.
This is a python library dedicated to survival analysis and the one used in the video mentioned above. The lib itself has some examples which will also help you understand survival analysis as a whole.
An introduction to the concepts of Survival Analysis and its implementation in lifelines package for Python.
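A minimal sketch with lifelines (the durations and observation flags below are hypothetical; in your case they could be the times between crime events in a region):
from lifelines import KaplanMeierFitter

durations = [5, 6, 6, 2, 4, 4, 3]  # e.g. days between crime events
event_observed = [1, 1, 0, 1, 1, 1, 0]  # 1 = event occurred, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=event_observed)
kmf.plot()  # estimated survival curve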
Hope this helps!
EDIT1:
As of the 'per group prediction': your problem is this:
everything I have found assumes a y value at the individual data row level
There's no such thing as an algorithm to predict a target variable per group. You have to transform your data so it has a 'y' value per group, and then train some model based on that. It may be seen as a regression problem, or maybe as some kind of survival analysis. But you can't escape from the fact that you'll need one target value per group to start doing stuff. |
H: What is the reason behind having low results using the data augmentation technique in NLP?
I used a data augmentation technique on my dataset to have more data to train on. My data is text, so the data augmentation technique is based on random insertion of words, random swaps and synonym replacement.
The algorithm I used performs well on other datasets, but on mine it gives lower accuracy results compared to the original experiment. Are there any logical interpretations?
AI: Text data is at the same time:
very structured, because swapping only a few words in a sentence can make it complete gibberish,
and very flexible, because there are usually many ways to express the same idea in a sentence.
As a consequence, it's very hard to have a text sample which is representative enough of a "population" text, i.e. which covers enough cases for all the possible inputs. But augmentation methods are practically sure to fail, because either they are going to make the text gibberish or just cover minor variations which don't improve the coverage significantly.
That's why a lot of the work in NLP is about experimental design and preprocessing. |
H: Find Phone Numbers in messy data
I hope this isn't too basic of a question, I'm banking on the Data Science site description being true where it says "...and those interested in learning more about the field". I'm not looking for programming help, just validation that machine learning could help me with a problem.
I'm trying to find all customer phone numbers in our databases. One database has a field with free-form comments from our customer service center. Here is an obfuscated snippet:
multiple #'s 123-456-7890 and 2345678901...current account #2233445566
As you can see, this record contains two phone numbers and also a 10 digit account number. One of the phone numbers has dashes while the 2nd doesn't. Looking for parenthesis helps, but only finds a small set. There are also other 10 digit numbers that could look like a phone number but in fact aren't.
If I run a query to return all records with a 10 digit number formatted with dashes, I get thousands of records. If I check for any 10 digit number, I get tens of thousands. So manually scanning these records to validate accurate matches is not practical.
I'm wondering if I could build a machine learning model that I could train to accurately find phone numbers in this mess. When I say "accurately", I don't mean 100%, just better than standard SQL queries. If I can, I would use this going forward to parse new data that is created in this database.
It seems to me this problem could be a good candidate for machine learning. But I'm new to machine learning, and the research I've done so far talks about different scenarios that don't seem quite the same.
AI: In principle this seems close to a NER task, you could try to annotate a sample and train a sequence labeling model on it. However this would require quite a lot of work to get it right: annotation, then probably a good bit of trial and error to tune the right combination of features.
In a case like this I would rather go for a few carefully chosen regular expressions, they are likely to perform at about the same level without requiring as much work. |
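For example, a couple of hedged patterns in Python (the exact patterns, and the check that excludes account numbers prefixed with '#', are illustrative and would need tuning against your data):
import re

text = "multiple #'s 123-456-7890 and 2345678901...current account #2233445566"

# 10-digit numbers with common phone formatting (dashes, dots, spaces, parentheses)
formatted = re.findall(r'\(?\b\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b', text)

# bare 10-digit numbers, excluding those directly preceded by '#' (likely account numbers)
bare = re.findall(r'(?<!#)\b\d{10}\b', text)

print(formatted, bare)  # ['123-456-7890'] ['2345678901']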
H: How do I force specified coefficients in a Linear Regression model to be positive?
Looking for a way to do this in Python. scipy.optimize.nnls forces all coefficients to be positive.
Some additional context: I have a data frame with some explanatory variables and a response variable. When I run a regular linear regression, the coefficients of some explanatory variables become negative. This is okay for some variables, but not for all. I want to prevent the coefficients of specific variables from becoming negative.
I want to force these coefficients to be positive because they should have a positive contribution to the response. The variables I want to have positive coefficients are investment dollars in a few different channels. The response is revenue. I don't want my model to say that investing more dollars in a certain channel lowers revenue (even if that creates a more accurate model).
AI: Sorry, but on the surface, this sounds like a terrible idea to me: if linear regression gives you negative coefficients for some explanatory variables that you think should be positive, then it means that either your data is "wrong" (typically noisy or too small) or your intuition is misguided.
I can't see any good reason why one would use a data-driven approach if the goal is to manually force the model in a particular way. This is the equivalent of breaking the thermometer to hide the fever.
I'd suggest the following instead:
In general an unexpected outcome is arguably a good thing, in the sense that it tells us something we didn't know about the data. That's a cue to investigate what happens in the data. Linear regression is simple enough to analyze: one can look at the correlation, plot the relation between the variables etc.
If there's really something suspicious going on with some variables, maybe some errors in the data which make them behave in a way they shouldn't, then it's much better to discard them altogether from the model rather than fixing their coefficient, because this way the model won't rely on them at all. |
H: Increase accuracy of classification problem
I am trying to build a classifier that predicts the compiler given some operations of assembly code. Here is the pandas dataframe:
What I do is use a TfidfVectorizer and select the features with the most predictive power by doing:
tfidf_vectorizer=TfidfVectorizer(max_features=500)
so using max_features=500 to select the 500 features with the highest idf. The problem is that the accuracy is still low; in fact it is around 0.69. I would like to reach at least 0.9, but I don't know what else to do.
I am using support vector machines and this gives me an accuracy of 0.69. I also tried random forests and I was around 0.75.
My code is the following:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
tfidf_vectorizer=TfidfVectorizer(max_features=1000)
df_x = df['opcodes']
X_all = tfidf_vectorizer_vectors=tfidf_vectorizer.fit_transform(df_x)
y_all = df['compiler']
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all,
test_size=0.2, random_state=15)
from sklearn import svm
model = svm.SVC(kernel='linear', C=1).fit(X_train,y_train)
y_pred = model.predict(X_test)
acc = model.score(X_test, y_test)
print("Accuracy %.3f" %acc)
Moreover, the dataset is balanced; in fact I have 3 compilers with 1000 samples each.
I don't know what other strategy to try to increase the accuracy and get to 0.9.
Can somebody please help me? Thanks in advance.
AI: I'm not familiar with scikit but I'm assuming that TfidfVectorizer represents bag of words features right? By this I mean that it treats all the instructions in an instance as a set, i.e. doesn't take into account their sequential order.
I'm also not familiar with compilers but I'm guessing that the order of the instructions could be a relevant indication? I.e. a compiler may generate particular sequences of instructions.
Based on these remarks I would try to represent instances with n-grams of instructions rather than individual instructions. Then you can still use some kind of bag-of-ngrams representation, possibly with TFIDF, but I would start with simple binary or frequency features. A simple feature selection step with something like information gain might be useful.
[edit] N-grams take order into account locally. In a bag-of-words model, words (or instructions in your case) are considered independently of each other: for instance the sequence push, push, mov is the same as push, mov, push. With bigrams the first sequence would be represented as (push,push), (push,mov) whereas the second one is (push,mov), (mov,push). This means two things:
Higher level of detail about the instance, which can help the model capture the relevant indications
More features so higher risk of overfitting (the model taking some random details as indication, which lead to errors on the test set). |
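A hedged sketch of how this could look with scikit-learn (assuming each row's opcodes are first joined into a single space-separated string; the n-gram range and feature count are just starting points):
from sklearn.feature_extraction.text import TfidfVectorizer

docs = df['opcodes'].apply(' '.join)  # list of opcodes -> "push mov push test ..."
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=2000)
X_all = vectorizer.fit_transform(docs)  # unigram + bigram features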
H: TypeError: 'GridSearchCV' object is not callable - how do I use a pickle of an SVM (Scikit-learn)?
I have created an SVM in Scikit-learn for classification. It works; it prints out either 1 or 0 depending on the class. I converted it to a pickle file and tried to use it, but I am receiving this error:
TypeError: 'GridSearchCV' object is not callable
(occurs during the last line of the program)
How can I overcome this?
Code:
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB, GaussianNB
from sklearn import svm
from sklearn.model_selection import GridSearchCV
import numpy as np
from sklearn.externals import joblib
from joblib import load
import pickle
dataframe = pd.read_csv("emails.csv")
x = dataframe["text"]
y = dataframe["spam"]
x_train,y_train = x[0:5724],y[0:5724]
cv = CountVectorizer()
features = cv.fit_transform(x_train)
tuned_parameters = {'kernel': ['rbf','linear'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]}
model = GridSearchCV(svm.SVC(), tuned_parameters)
file = open("finalized_model.sav",'rb')
model = pickle.load(file)
file.close()
X = pd.read_csv("ExampleSingleEmail.csv")
model(cv.transform(X))
AI: A GridSearchCV object is not callable, which is why model(cv.transform(X)) raises that TypeError; you interact with it through its methods instead.
Like most sklearn estimators, it exposes fit for training and predict for getting predictions - see https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html.
Since the model you load from the pickle file is already fitted, change the last line to model.predict(cv.transform(X)) and it should work.
H: Training a model sample by sample
I'm training a scikit-learn model, but it seems that in all examples the fit method is called on the entire training set. What I want to do, however, is call it per sample (i.e. looping through all samples). This has multiple reasons, the most important being:
MemoryError with my huge training set
Training with new data instead of recompiling entire model
Yet when I loop and call fit per sample, self.gnb = self.gnb.fit(sample.data, labels), and then debug, the gnb model only has one class (namely the last one). So how should I approach this?
AI: Not every model is able to learn sample-by-sample or incrementally. However, in scikit-learn, there're some models which have partial_fit method:
Incremental fit on a batch of samples.
This method is expected to be called several times consecutively on different chunks of a dataset so as to implement out-of-core or online learning.
This is especially useful when the whole dataset is too big to fit in memory at once.
This method has some performance and numerical stability overhead, hence it is better to call partial_fit on chunks of data that are as large as possible (as long as fitting in the memory budget) to hide the overhead.
You can just search for methods name in sklearn's documentation. This method exists, for example, for GaussianNB and Stohastic Gradient Descent, both Classifier and Regressor.
Also, you can use a Random Forest and set the number of samples (or sample ratio) per tree to a small value so each tree fits in memory. Or use Dask and Dask-ML to work with data that doesn't fit in memory.
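A minimal sketch with GaussianNB (note that the full list of possible classes must be passed to partial_fit, since any single chunk may not contain all of them; 'chunks' is your own iterator over the data):
import numpy as np
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()
all_classes = np.array([0, 1])  # every class that can ever appear

for X_chunk, y_chunk in chunks:  # e.g. read the training set piece by piece
    gnb.partial_fit(X_chunk, y_chunk, classes=all_classes)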
H: How reproducible should CNN models be?
I want to train several CNN architectures with Google Colab (GPU), Keras and Tensorflow.
Since the trained models are not reproducible due to GPU support, I would like to train the models several times and determine the mean and the standard deviation of the results.
I'm totally unsure whether I should at least try to make the models minimally reproducible, for example with the following code at the beginning of the program:
import numpy as np
import tensorflow as tf
import random as rn
import os
os.environ['PYTHONHASHSEED']='0'
np.random.seed(1)
rn.seed(1)
tf.set_random_seed(1)
from keras import backend as K
if 'tensorflow' == K.backend():
import tensorflow as tf
tf.set_random_seed(1)
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = "0"
set_session(tf.Session(config=config))
I don't know if this makes sense. Would it be better if I did not use seeds at all?
What do you think?
AI: Getting slightly different results is natural and should not be a problem. How to minimize the instabilities due to several contributing factors is discussed at length in the linked post below, for Keras using different backends including TensorFlow:
https://machinelearningmastery.com/reproducible-results-neural-networks-keras/ |
H: How to represent a dataset as a linked list-like graph?
I would like to visualize this data set using Python and probably a visualization tool like Matplotlib. The data set contains three columns: a user id with a question, a user id with an answer, and time. I would like to visualize this data set as a linked list style graph. That is a tree-like structure which would show relationships based on the connections between questions and answers. I've tried different sets of keywords in an attempt to find out how to visualize a data set in this way, but to no avail. [I'm new to data science or analysis]. So I would appreciate some help with this.
AI: The graph resulting from this kind of dataset is also known as a Network Graph and the kind of analysis you are trying to do is known as Social Network Analysis.
There are many prominent Python libraries for visualization and subsequent analysis of network graphs. The most widely used is NetworkX. It is easy to add nodes and directed edges in a NetworkX graph and visualize them with Matplotlib.
Installing NetworkX is a prerequisite.
pip install networkx
The following code creates a NetworkX DiGraph object, adds edges (and thereby nodes) to it, and plots the graph.
import networkx as nx
from matplotlib import pyplot as plt
G = nx.DiGraph()
G.add_edges_from([(1, 3), (2, 3), (2, 4), (4, 1)])
nx.draw(G, with_labels=True)  # with_labels shows the node ids
plt.show()
Hopefully, this should get you started :) |