Machine Learning, Data Science, Python, Support Vector Machine, Artificial Intelligence.

Overview

An SVM is a supervised machine learning algorithm that uses non-probabilistic linear classification to classify data. The basic idea behind SVMs is to find a hyperplane that best separates the data points into different classes by maximizing the margin between the two classes. SVMs are particularly useful when dealing with complex, high-dimensional datasets and can handle both linearly separable and non-linearly separable data through the use of kernel functions. SVMs have been successfully applied in various fields such as computer vision, natural language processing, and bioinformatics, and have been shown to have high accuracy and generalization capabilities.
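To make the kernel idea concrete, here is a minimal sketch (my illustration, not from the original article) that contrasts a linear kernel with an RBF kernel on a deliberately non-linearly separable dataset generated with sklearn's make_circles; the exact scores will vary with the random seed.

# Minimal sketch: linear vs. RBF kernel on data that is not linearly separable.
# Assumes only scikit-learn is installed; the dataset and scores are illustrative.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=500, factor=0.3, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    # The linear kernel struggles here; the RBF kernel separates the rings easily.
    print(kernel, clf.score(X_test, y_test))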
Understanding SVMs

For this lesson, we'll be using the iris dataset provided by sklearn. Below is the code for loading the dataset, along with its description and a picture illustrating the "petal" and "sepal" attributes it describes.

import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
print(dir(iris))
print(iris.DESCR)

The first thing we want to do is get all of this data into a pandas dataframe to better understand what we're working with. In the code below, we create a dataframe, add a flower_name column that applies a lambda function naming each row based on its value in the target column, and print the head of each of the three slices of the dataframe. We can see that the first 50 rows are for the setosa flower, the next 50 are for the versicolor flower, and the last 50 rows are for the virginica flower.

df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['target'] = iris.target
df['flower_name'] = df.target.apply(lambda x: iris.target_names[x])
print(df[:50].head())
print(df[50:100].head())
print(df[100:].head())
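A quick way to confirm the 50/50/50 split described above (a small sketch of mine, not in the original) is to count the rows per flower name:

# Confirm the class balance described above: 150 rows, 50 per species.
print(df.shape)
print(df['flower_name'].value_counts())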
Let's graph our data to visualize it. I'm going to separate the data into 3 different dataframes, corresponding to each unique flower.

import matplotlib.pyplot as plt

setosa_df = df[:50]
versicolor_df = df[50:100]
virginica_df = df[100:]

plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
plt.scatter(setosa_df['sepal length (cm)'], setosa_df['sepal width (cm)'], color="green", marker='+')
plt.scatter(versicolor_df['sepal length (cm)'], versicolor_df['sepal width (cm)'], color="blue", marker='.')
plt.show()

You could imagine it's possible to draw a line diagonally across the graph from the bottom left to the top right, separating the setosa and versicolor flowers. This imaginary line (our classification boundary) is our support vector machine. There are many ways we could draw the line, though, so how do we know which one is optimal? The line with the highest margin, or distance between the line and the nearest data points, is the solution. In the graph above, each data point has a margin (drawn in red) between it and the classification boundary, and we want the line that maximizes this distance. The data points nearest the line are referred to as "support vectors", hence the name support vector machine. In the case where we have only 2 features, like the graph above, the boundary is a line. With 3 features, the boundary is a plane. Now imagine there are more than 3 features: while it would be impossible to make an image that lets us visualize it, it's still mathematically possible, and the boundary is referred to as a hyperplane.

Final definition of an SVM: support vector machines draw a hyperplane in n-dimensional space such that it maximizes the margin between classification groups.
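As an aside (my addition, not in the original article), scikit-learn exposes the support vectors it found once a model has been fit, which is a nice way to connect the definition above to the code we are about to write. The snippet below is a minimal sketch that trains a linear SVC on just the two sepal columns of the setosa and versicolor rows.

# Minimal sketch: inspect the support vectors of a fitted classifier.
from sklearn.svm import SVC

X_2d = df[['sepal length (cm)', 'sepal width (cm)']][:100]  # setosa + versicolor rows
y_2d = df['target'][:100]
clf = SVC(kernel='linear').fit(X_2d, y_2d)
print(clf.support_vectors_)        # the points that sit closest to the boundary
print(len(clf.support_vectors_))   # usually a small subset of the training data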
Gamma & Regularization

The number of data points we consider when drawing our decision boundary affects where the line ends up. With a high gamma, only the points closest to the line are considered. This makes the line more accurate for our training data, but can potentially over-fit the model and keep it from generalizing. With a low gamma, more data points are considered; even though some training error is accepted, the upside is a potentially more generalized model. Neither choice is inherently more correct; it simply depends on your specific situation. Gamma is a hyperparameter we will have to set when building our model.

C is the hyperparameter for regularization. The C parameter trades off correct classification of training examples against maximization of the decision function's margin. For larger values of C, a smaller margin will be accepted if the decision function is better at classifying all training points correctly. A lower C will encourage a larger margin, and therefore a simpler decision function, at the cost of training accuracy. In other words, C behaves as a regularization parameter in the SVM. A heatmap on the sklearn website demonstrates this relationship between gamma and C. We optimize these values by doing something called grid search, which we will cover below.
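To see this trade-off in code, here is a small illustrative sketch (mine, not from the original article) that trains the same SVC with a few C and gamma combinations and prints train vs. test accuracy; the exact numbers depend on the split.

# Minimal sketch: how C and gamma shift train vs. test accuracy (illustrative values).
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = df.drop(['target', 'flower_name'], axis='columns')
y = df.target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for C, gamma in [(0.1, 0.01), (1, 1), (100, 10)]:
    m = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    # A very large gamma tends to memorize the training set; watch the gap.
    print(f"C={C}, gamma={gamma}: train={m.score(X_tr, y_tr):.2f}, test={m.score(X_te, y_te):.2f}")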
Building Our Model

In the code below, we import the SVM class from sklearn, split our data into X/y and training/test sets, build our model with a C value of 2 and a gamma value of 10 to start, and test our model's score. We end up with a score of .93, which is solid. By the way, sklearn provides default values for C and gamma if we do not explicitly set them.

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

x = df.drop(['target', 'flower_name'], axis='columns')
y = df.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
model = SVC(C=2, gamma=10)
model.fit(x_train, y_train)
model.score(x_test, y_test)

A score of .93 is great, but we can use sklearn's grid search class to calculate even better values for gamma and C.

import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection import GridSearchCV

# logspace: create an array of evenly spaced values between two numbers on the logarithmic scale
C_range = np.logspace(-2, 10, 13)
gamma_range = np.logspace(-9, 3, 13)
param_grid = dict(gamma=gamma_range, C=C_range)
cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
grid = GridSearchCV(SVC(), param_grid=param_grid, cv=cv)
grid.fit(x_train, y_train)
print("The best parameters are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_))
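One natural follow-up (a sketch of my own, assuming the grid object from above) is to take the best estimator found by the search and score it on the held-out test set:

# Evaluate the best model found by the grid search on the test split.
best_model = grid.best_estimator_          # refit on the full training data by default
print(grid.best_params_)                   # the chosen C and gamma
print(best_model.score(x_test, y_test))    # generalization estimate on unseen data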
Scala, Concurrency, API, Project Loom, Java.

Java 19 brought a preview of virtual threads and an API for structured concurrency. Java 20 extends this with ScopedValues, a much-improved version of ThreadLocals. When writing concurrent code in Java, the first option is to work directly with threads (virtual or not). However, this is usually not the best idea: while it's quite easy to do from Java and Scala, it's also too easy to run into deadlocks, race conditions, or heisenbugs. Another approach is the new StructuredTaskScope (described in more detail below). Quite obviously, this API is tailored for Java. Moreover, at times it's pretty low-level and easy to misuse: you need to call the proper methods in the correct order, or you'll get a StructureViolationException. Given Scala's more advanced type system and other features, we should be able to provide a better developer experience. Below is a prototype of what a Loom-based concurrency API for Scala might look like.

Maintaining structure

Java's approach to concurrency, introduced in recent releases, is structured. It might be novel to Java, but it's also present (among others) in some Python libraries and the Kotlin programming language. But what is structured concurrency? The best introduction is probably the "Go considered harmful" article. It's a long but interesting read. If you need a quick refresher: concurrency is structured when the syntactic code structure bounds the lifetime of threads. When new threads are spawned within a syntactic block, we can be sure that these threads have terminated when execution of the block ends. I like to think of structured concurrency as an aspect of purity: a function satisfies the structured concurrency property iff it is pure with respect to threading side-effects.
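As a side-by-side illustration (my addition, not from the original post), Python's standard library now offers a similar construct, asyncio.TaskGroup (Python 3.11+): tasks created inside the async with block cannot outlive it, and a failure in one task cancels its siblings.

# Illustrative sketch of structured concurrency in Python (asyncio.TaskGroup, 3.11+).
# Both tasks are bounded by the async-with block; if one raises, the other is cancelled.
import asyncio

async def find_user() -> str:
    await asyncio.sleep(0.1)
    return "alice"

async def fetch_order() -> int:
    await asyncio.sleep(0.2)
    return 42

async def main() -> None:
    async with asyncio.TaskGroup() as tg:
        user = tg.create_task(find_user())
        order = tg.create_task(fetch_order())
    # Leaving the block guarantees both tasks have finished.
    print(user.result(), order.result())

asyncio.run(main())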
As mentioned, the main API for structured concurrency in Java revolves around the StructuredTaskScope class. It is designed to be used within a try-with-resources block, which delimits the lifetime of any threads spawned within the block. So, for example, here's how to run two computations in parallel. If one computation fails, the other is interrupted, and the whole block throws that exception:

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
  Future<String> user = scope.fork(() -> findUser());
  Future<Integer> order = scope.fork(() -> fetchOrder());
  scope.join();
  scope.throwIfFailed();
  return new Response(user.resultNow(), order.resultNow());
}

The Scala equivalent, the design of which we're exploring here, has the following form:

scoped {
  val user: Fork[String] = fork(findUser())
  val order: Fork[Integer] = fork(fetchOrder())
  Response(user.join(), order.join())
}

Let's examine this in more detail. The scoped and fork functions come from the Ox object (the name comes from cOncurrency eXtensions). As the name suggests, scoped delimits a scope within which new threads can be started. Once the scoped block completes, it is guaranteed that all child threads have finished. This is the same guarantee that StructuredTaskScope provides; indeed, this class is used under the hood. Hence, the scoped construct forms the basis of implementing the structured concurrency approach in Ox.

The fork method, quite unsurprisingly, starts a new thread running the given code (here, findUser() or fetchOrder()). The result is a Fork instance. It has two methods: a blocking .join() to get the result, and a blocking .cancel(), which interrupts the underlying thread and waits until it terminates. In essence, Fork is similar to a (virtual) Thread, a Future or a Fiber (known from functional effect systems), as it represents a running computation. However, the name is different to avoid clashes. Once the code inside the scoped block completes, any threads that are still running (which were not joined or which have not yet completed) will be interrupted, and the whole block will only complete once they complete.
Racing and paralleling

In the Java API, various patterns of running code concurrently can be implemented by providing an appropriate implementation of StructuredTaskScope. For example, using StructuredTaskScope.ShutdownOnFailure we can run some computations in parallel, shutting down the scope when the first error occurs. Similarly, StructuredTaskScope.ShutdownOnSuccess can be used to shut down on the first success, allowing us to implement a race between two computations.

Ox's API diverges from the Java approach and always uses a fixed StructuredTaskScope implementation (which is neither of the above: one which never shuts down the scope on its own). But it also allows some additional flexibility. The fork method, by default, propagates any errors to the scope's main thread; hence, in the example above, if any of the forks fail, the whole scope will short-circuit and fail as well. This is similar to StructuredTaskScope.ShutdownOnFailure. However, there's also another variant: forkHold, which does not propagate any exceptions. Instead, it allows running forks which might fail, and where failures are dealt with within the scope. The failure is thrown as an exception when the fork's join method is called. We can mix and match both of these types of forks within a single scope. fork works best for background tasks that are not joined, where the only way to discover that they failed is by interrupting the main thread. forkHold is designed for situations where we join the computations, inspecting their (successful or failed) results.

What about racing? Ox provides a race method as a primitive. It is implemented using forkHold plus a queue, on which the main thread blocks, awaiting the first forked result to complete. Furthermore, this can be quite trivially used to implement a timeout method, for example:

// In Ox.scala
def timeout[T](duration: FiniteDuration)(t: => T): T =
  raceSuccess(Right(t))({
    Thread.sleep(duration.toMillis)
    Left(())
  }) match
    case Left(_)  => throw new TimeoutException(s"Timed out after $duration")
    case Right(v) => v

// In OxTest.scala
"timeout" should "short-circuit a long computation" in {
  val trail = Trail()
  scoped {
    try timeout(1.second) {
      Thread.sleep(2000)
      trail.add("no timeout")
    }
    catch case _: TimeoutException => trail.add("timeout")
    trail.add("done")
    Thread.sleep(2000)
  }
  trail.trail shouldBe Vector("timeout", "done")
}
Escaping the structure

Structured concurrency is useful, but it might also prove challenging to work with at times. However, there is an escape hatch that allows us some more flexibility. First, what about nesting forks?

scoped {
  val f1 = fork {
    val f2 = fork {
      Thread.sleep(1000)
      println("f2 complete")
    }
    Thread.sleep(500)
    println("f1 complete")
    f2
  }
  f1.join().join()
}

Such nesting is, of course, allowed, and the above code (as you might or might not expect) prints f1 complete followed by f2 complete. Even though f2 is nested within f1, it won't be interrupted when f1 finishes. Only scoped creates a boundary within which the structured concurrency principle must hold. Scopes might be nested as well, with the expected semantics.

Secondly, it is also possible to extract code which includes forks into separate methods or classes. To do that, we need a way to express that we are within a scope. This is done using a given parameter of type Ox. Behind the scenes, the fork and forkHold methods require a given Ox to be in scope. Such an instance is provided by the scoped method, using context functions. For example:

def run()(using Ox): Int = {
  val f1 = fork { Thread.sleep(1000); 5 }
  val f2 = fork { Thread.sleep(2000); 7 }
  f1.join() + f2.join()
}

scoped {
  println(run())
  println(run())
}

In other words, the presence of a using Ox clause in a method's signature represents a requirement for the capability to fork computations. As before, the structural properties are enforced by scoped. Hence, a "normal" threading model, where threads can be spawned at any time without lifetime constraints, can be implemented by wrapping the application's main method in scoped, passing a given Ox to each method/class, and using forkHold at will.

Scoped values
Java 20 introduces scoped values, which aim to replace ThreadLocals, as the latter are broken in many ways, especially when used in conjunction with VirtualThreads. The major improvement is that the value of a ScopedValue is inherited by child threads (forked within a StructuredTaskScope) without sacrificing performance or memory usage. Moreover, the value of a ScopedValue can only be changed structurally, within the code passed to the ScopedValue.where method. However, this Java API also has to be used with care, as we can't change scoped values after starting a scope (otherwise, an exception will be thrown on the next fork). To improve safety in this area, Ox provides its own wrapper, ForkLocal, which differs from ScopedValue in two ways: first, it has a default value (instead of null); second, setting a new value always starts a new scope:

val trail = Trail()
val v = Ox.ForkLocal("a")
scoped {
  forkHold {
    v.scopedWhere("x") {
      trail.add(s"nested1 = ${v.get()}")
      scoped {
        forkHold {
          trail.add(s"nested2 = ${v.get()}")
        }.join()
      }
    }
  }.join()
  trail.add(s"outer = ${v.get()}")
}
trail.trail shouldBe Vector("nested1 = x", "nested2 = x", "outer = a")

The code

The source code of Ox is available on GitHub. The Ox class and the methods in the Ox companion object implement the thin layer on top of Java's structured concurrency tools to provide the functionality described above. There's also a small test suite, covering (in a rather non-comprehensive way) the basic use cases. Feel free to explore and comment!
Interruptions

I think the main problem with the approach described above, which I've encountered so far, is interruptions. These are modeled as exceptions in Java, with all the consequences: they can be caught and ignored. The deficiencies of Java's interruption model would probably merit an article of their own, so we won't explore this subject in more depth here. But as an example: imagine that you have a process running "forever", doing some side effects, and if any exceptions happen, they are logged and processing continues:

scoped {
  fork {
    forever {
      try processSingleItem()
      catch case e: Exception => logger.error("Processing error", e)
    }
  }
  // do something else that keeps the scope busy
}

The fork above will never terminate, even if executing the scope's code ends: the block can only finish its execution when all child threads have terminated, and interrupting the fork will be caught and logged. The way to fix the above is to catch InterruptedException separately and re-throw it, or to use NonFatal:

scoped {
  fork {
    forever {
      try processSingleItem()
      catch case NonFatal(e) => logger.error("Processing error", e)
    }
  }
  // do something else that keeps the scope busy
}

It's a subtle difference, which might cause hard-to-find bugs. But it seems we are stuck with the current Java interruption model for the foreseeable future.

Comparing with functional effects

I think the question that everybody who has used the more functional flavors of Scala has is: how does this compare to functional effect systems, such as cats-effect or ZIO? This, again, could be its own series of articles. I think we're just scratching the surface of how Loom might be leveraged from Scala, so it might be too early to draw conclusions. However, to get a feel for how programming with Ox might look, I ported some of the examples from an article comparing Akka, Monix (cats-effect predecessor), and Scalaz IO (ZIO predecessor) a while back. Hence, in the sources, you can find:

- a rate limiter
- a link crawler
- a supervision example
- interfacing with a simple socket class
Everything is structured

Apart from having to remain vigilant about interruptions, one important conclusion from the above experiments is that structured concurrency strongly influences the code's overall structure. For example, the RateLimiter starts a background process, which executes the rate-limited actions. However, because all concurrency is structured, this dictates a specific way of using that functionality. Therefore, we have to make sure that the RateLimiter is used structurally, so that its lifetime also corresponds to the structure of the source code:

RateLimiter.withRateLimiter(2, 1.second) { rateLimiter =>
  rateLimiter.runLimited {
    println(s"${LocalTime.now()} Running …")
  }
  // other logic
}

In a way, we are back to callbacks: the withRateLimiter method takes as a parameter a function, which is called back with a running rate limiter. Once this function completes, the scope is closed, closing the rate limiter. Scala provides a nice syntax for this, but we might still end up with a long callback chain when allocating all of the "resources" in our application.

But how do they compare?

Make no mistake: I think functional effect systems, such as ZIO and cats-effect, are currently the best way to develop concurrent business applications. Some of their advantages compared to Ox/Loom:

- an asynchronous runtime which works on JVM 8+, as well as on JS and Native platforms
- higher control over thread usage, which impacts fairness
- a safe cancellation model, with out-of-bounds, uncatchable interruptions
- a rich set of concurrency operations for declarative concurrency
- a safe, advanced implementation of fiber-locals
- advanced error handling
- fearless refactoring through referential transparency
- the possibility to control time in tests, which prevents flakiness and allows precision
- composable resources

But they have some disadvantages as well, which are not present in Ox:

- monadic, instead of direct, syntax
- virality of the IO/ZIO data types
- custom control structures, instead of for loops or try-catch

A major problem in the Loom/Ox approach is that it mostly glosses over error handling. A candidate to implement this might be the experimental CanThrow capabilities: we are using a pre-release JDK already, so why not include an upcoming Scala 3 feature? That's an area requiring more research, though.

Summing up

Remember that Ox is only a prototype, and more of an exploration project than a complete solution. If you have some ideas on how to use Loom + Scala differently, or some thoughts on the advantages and disadvantages of this approach, please comment below! Is structured concurrency the way to go? Should every resource be used structurally? What about the forking model? What cannot be expressed using Ox that is easily done using a functional effect system? That's just a handful of questions which might be worth exploring further.

Originally published at https://softwaremill.com on February 3, 2023.
Python, Image Processing, Kernel, Matplotlib, TensorFlow.

In this post, we will examine basic image processing methods in Python, including blurring and sharpening an image using kernels. The goal is to give readers a basic understanding of image filtering methods, which will be crucial when we move on to using powerful frameworks for deep learning problems in later posts. We will use an image of the Eiffel Tower to explore image processing and filtering techniques.

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

We will use the matplotlib, NumPy, and PIL libraries for this code snippet. The PIL module in Python, which stands for "Python Imaging Library", is a powerful library used for opening, manipulating, and saving many different image file formats.

# Load the image
img = Image.open('eiffel.jpg')
# Resize the image to 100x100 pixels
img_resized = img.resize((100, 100))
# Convert the resized image to a numpy array
image_array = np.array(img_resized)

"eiffel.jpg" is loaded using the Image.open() function and assigned to img. Next, the resize() function is used to resize the image to 100x100, and the result is converted into an np.array.

In [4]: image_array.shape
Out[4]: (100, 100, 3)

As can be observed, the shape of image_array is 100x100x3: a 100x100 grid of pixels with three values per pixel, one each for the red, green, and blue channels.

# Initialize a figure
plt.figure(figsize=(20, 5))

# Titles for each subplot
titles = ['Red Channel', 'Green Channel', 'Blue Channel', 'Full Color']

# Display the Red channel
red_image = np.zeros_like(image_array)
red_image[:, :, 0] = image_array[:, :, 0]  # Red channel
plt.subplot(1, 4, 1)
plt.imshow(red_image)
plt.title(titles[0])
plt.axis('off')

# Display the Green channel
green_image = np.zeros_like(image_array)
green_image[:, :, 1] = image_array[:, :, 1]  # Green channel
plt.subplot(1, 4, 2)
plt.imshow(green_image)
plt.title(titles[1])
plt.axis('off')

# Display the Blue channel
blue_image = np.zeros_like(image_array)
blue_image[:, :, 2] = image_array[:, :, 2]  # Blue channel
plt.subplot(1, 4, 3)
plt.imshow(blue_image)
plt.title(titles[2])
plt.axis('off')

# Display the full color image
plt.subplot(1, 4, 4)
plt.imshow(image_array)
plt.title(titles[3])
plt.axis('off')
plt.show()

Combining these three values (RGB) produces the original image. When the average of the three values (RGB) is taken, a grayscale image is created.

## Make it grayscale
## Equal ratio
grayscale_image = (image_array[:,:,0] + image_array[:,:,1] + image_array[:,:,2]) / 3

# Display the grayscale image
plt.figure(figsize=(20,20))
plt.imshow(grayscale_image, cmap='gray')
plt.title('Grayscale Image by Average')
plt.axis('off')
plt.show()

Since the red, green, and blue values are weighted equally, the image does not come out black and white as expected. (Part of the problem is that summing uint8 arrays overflows and wraps around; casting the channels to float before summing avoids that.) We obtain a better grayscale if we adjust the weights based on human perception, which are 0.2989 for the red channel, 0.5870 for the green channel, and 0.1140 for the blue channel.

# The typical weights used are 0.2989 for Red, 0.5870 for Green, and 0.1140 for Blue
grayscale_image = 0.2989 * image_array[:,:,0] + 0.5870 * image_array[:,:,1] + 0.1140 * image_array[:,:,2]

# Display the grayscale image
plt.figure(figsize=(20, 20))
plt.imshow(grayscale_image, cmap='gray')
plt.title('Grayscale Image by Typical Weights')
plt.axis('off')
plt.show()
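As a quick cross-check (my addition, not part of the original post), PIL can produce the same perceptually weighted grayscale directly via convert('L'), which uses the ITU-R 601-2 luma transform with essentially these weights:

# Cross-check: PIL's built-in grayscale conversion uses the same style of weighting.
pil_gray = np.array(img_resized.convert('L'))
print(pil_gray.shape)                              # (100, 100)
print(np.abs(pil_gray - grayscale_image).mean())   # small difference, mostly rounding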
To see the numbers behind the picture, we can repeat the grayscale conversion on a much smaller 25x25 version of the image and print every pixel value.

img_small_resized = img.resize((25, 25))

# Convert the resized image to a numpy array
image_array_small = np.array(img_small_resized)
grayscale_image_small = 0.2989 * image_array_small[:,:,0] + 0.5870 * image_array_small[:,:,1] + 0.1140 * image_array_small[:,:,2]
print(grayscale_image_small)

fig, ax = plt.subplots(figsize=(7, 7))  # Increase the figure size
ax.imshow(grayscale_image_small, cmap='gray', interpolation='nearest')

# Annotate each cell with the numeric value
for (j, i), label in np.ndenumerate(grayscale_image_small):
    ax.text(i, j, int(label), ha='center', va='center', color='black', fontsize=8)  # Larger font size and black color

plt.axis('off')  # Hide the axes
plt.show()

It is visible that the image, once converted into an np.array, is made up of numbers that appear darker as they decrease and whiter as they increase. Grayscale values lie in the interval [0, 255]. We can modify the image's sharpness or blurriness by using filters, and to implement these filters we apply a kernel. In the "workflow" above, a sharpen kernel is applied to the image: each kernel value is multiplied by the corresponding pixel in the image, and the products are summed. You will learn more about the kernel if you closely examine the workflow.

def apply_kernel(image, kernel):
    """Apply a convolution kernel to an image using 'valid' mode."""
    kernel_height, kernel_width = kernel.shape
    image_height, image_width = image.shape
    output_height = image_height - kernel_height + 1
    output_width = image_width - kernel_width + 1
    output = np.zeros((output_height, output_width))
    for y in range(output_height):
        for x in range(output_width):
            output[y, x] = np.sum(image[y:y + kernel_height, x:x + kernel_width] * kernel)
    return output

The apply_kernel() function performs a convolution between an image and a kernel, outputting a new image where each pixel value is the weighted sum of the corresponding neighborhood in the input.
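As a sanity check (my addition; it assumes SciPy is available, which the original post does not use), the same 'valid'-mode result can be obtained with scipy.signal.correlate2d. Strictly speaking, apply_kernel computes a cross-correlation, but for the symmetric kernels used below, correlation and true convolution give identical results.

# Optional sanity check against SciPy (not used in the original post).
from scipy.signal import correlate2d

test_kernel = np.ones((3, 3)) / 9
ours = apply_kernel(grayscale_image, test_kernel)
ref = correlate2d(grayscale_image, test_kernel, mode='valid')
print(np.allclose(ours, ref))  # expected: True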
# Define blur kernel (3x3 average)
blur_kernel = np.ones((3, 3)) / 9

The blur_kernel is a 3x3 matrix filled with ones, then normalized by dividing each element by 9. This kernel is used to average the surrounding pixels, effectively blurring the image.
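A box average is the simplest blur; a common alternative (my addition, not in the original post) is a small Gaussian-style kernel, which weights the centre pixel more heavily and usually looks smoother:

# Illustrative alternative: a 3x3 Gaussian-style blur kernel (weights sum to 1).
gaussian_kernel = np.array([
    [1, 2, 1],
    [2, 4, 2],
    [1, 2, 1],
]) / 16.0
gaussian_blurred = apply_kernel(grayscale_image, gaussian_kernel)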
sharpen_kernel = np.array([
    [-1, -1, -1],
    [-1,  9, -1],
    [-1, -1, -1]
])

A 3x3 matrix with a center value of 9 and all surrounding values set to -1 defines the sharpen_kernel. This layout produces a sharpening effect by enhancing the central pixel and decreasing the impact of the surrounding pixels.

# Apply blur kernel
blurred_image = apply_kernel(grayscale_image, blur_kernel)
# Apply sharpen kernel
sharpened_image = apply_kernel(grayscale_image, sharpen_kernel)

Apply the kernels to the image and visualize the results.

# Plot original, blurred, and sharpened images
plt.figure(figsize=(12, 4))

plt.subplot(1, 3, 1)
plt.imshow(grayscale_image, cmap='gray')
plt.title('Original')
plt.axis('off')

plt.subplot(1, 3, 2)
plt.imshow(blurred_image, cmap='gray')
plt.title('Blurred')
plt.axis('off')

plt.subplot(1, 3, 3)
plt.imshow(sharpened_image, cmap='gray')
plt.title('Sharpened')
plt.axis('off')

plt.show()

Thus, we get the blurred and sharpened images of the Eiffel Tower.
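One practical wrinkle (my note, not from the original post): the sharpen kernel can push pixel values outside the displayable [0, 255] range, and matplotlib then rescales the image to its own min and max, which can exaggerate the contrast. Clipping before display keeps the values in range:

# Keep sharpened values inside the valid grayscale range before displaying.
sharpened_clipped = np.clip(sharpened_image, 0, 255)
plt.imshow(sharpened_clipped, cmap='gray', vmin=0, vmax=255)
plt.title('Sharpened (clipped)')
plt.axis('off')
plt.show()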
This examination of image processing techniques, especially the use of kernels for blurring and sharpening, provides an excellent base for more advanced topics. Understanding these fundamental operations will help us build and fine-tune neural networks for a range of image recognition applications as we work through frameworks such as TensorFlow and Keras. The skills and concepts learned here will make the move to complex deep learning easier. Hope you enjoy :)
Privacy, Vacation, Rental, Hidden Camera, Advocate.

Illustration by Dustin Elliott

Does Diana Rojas have the right to privacy in her vacation rental? Apparently not. When she rented a home on Vrbo recently, she was shocked when her host "invaded" her privacy. "We felt intimidated," she says. Now she wants a refund for her entire stay. But does she deserve to get $1,100 back?

Her problem raises a few questions:

- Do Airbnb and Vrbo rentals allow surveillance cameras?
- If I find a camera, what should I do?
- Am I entitled to privacy in my vacation rental?

The answers aren't as straightforward as you might think. But first, let's point our camera at Rojas for a minute.

"She ruined our stay"

Rojas and her husband rented a lakeside home in Montgomery, Texas, for two days during the New Year's holiday. Rojas reviewed the rental policies, which stated: "The property features an exterior security camera, attached to a tree, facing the back of the house and the dock. It does not look into any interior spaces." Vrbo allows outside cameras as long as they meet several criteria, including that they are primarily used for security purposes. But it turns out this host was monitoring more than the outside of the property. The camera in Rojas' Vrbo rental was pointed directly into her living room.

Rojas arrived at the home in the late afternoon and started setting up for a holiday dinner. Then she received a text message from the owner.

Owner: Hi! This is Heather, the owner of the lake house you're staying at. We noticed on the cameras that the door is open and you have a long table and looks like a possible DJ. We are so glad to have you but don't allow parties at the house. Only the guests staying and that's all.

Rojas: You are welcome to come here to check if we have a D.J. This is a family dinner. You are invading our privacy by