More Pods can be added to improve performance
StatefulSets: No. Since each Pod is in charge of a specific kind of request and has a unique address, we can't add more Pods.
ReplicaSets: Yes. Since Pods are indistinguishable, adding more Pods can't cause problems; it just improves the performance of the whole set.

Pods can store permanent data inside of them
StatefulSets: Yes, they are designed for this. Requests are issued to Pods with the sharding technique.
ReplicaSets: No, because they are designed to be indistinguishable, and storing a specific datum in a specific Pod would make that Pod different from the others in the set.

Table 20.1: StatefulSets versus ReplicaSets

The following subsection describes how to provide stable network addresses to both ReplicaSets and StatefulSets.
Services

Since Pod instances can be moved between nodes, they have no stable IP address attached to them. Services take care of assigning a unique and stable virtual address to a whole ReplicaSet and of load balancing the traffic to all its instances. Services are not software objects created in the cluster, but just an abstraction of the various settings and activities needed to implement their functionalities.

Services work at level 4 of the protocol stack, so they understand protocols such as TCP, but they aren't able to perform, for instance, HTTP-specific actions/transformations, such as ensuring a secure HTTPS connection. Therefore, if you need to install HTTPS certificates on the Kubernetes cluster, you need a more complex object that is capable of interacting at level 7 of the protocol stack. The Ingress object was conceived for this. We will discuss this in the next subsection.

Services also handle assigning a unique virtual address to each instance of a StatefulSet. In fact, there are various kinds of Services; some were conceived for ReplicaSets and others for StatefulSets.
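As a quick sketch of the StatefulSet-oriented kind, the following shows what a headless Service might look like; the name and labels are illustrative placeholders. Setting clusterIP to None tells Kubernetes not to allocate a virtual IP: DNS resolves the Service to the individual Pods instead, which is how each StatefulSet instance can get a stable name of the form <pod name>.<service name>.<service namespace>.svc.cluster.local.

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service     # placeholder name
  namespace: my-namespace
spec:
  clusterIP: None               # "headless": no cluster virtual IP is allocated
  selector:
    my-selector-label: my-selector-value
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80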
A ClusterIP service type is assigned a unique cluster-internal IP address. It specifies the ReplicaSets or Deployments it is connected to through label pattern matching. It uses tables maintained by the Kubernetes infrastructure to load balance the traffic it receives between all the Pod instances to which it is connected.

Therefore, other Pods can communicate with the Pods connected to a Service by interacting with this Service, which is assigned the stable network name <service name>.<service namespace>.svc.cluster.local. Since it is just assigned a cluster-local IP address, a ClusterIP service can't be accessed from outside the Kubernetes cluster.
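As an illustrative sketch (the client Pod, its image, and the BACKEND_URL variable are hypothetical, and my-service refers to the Service defined later in this subsection), another Pod could consume a Service through this stable DNS name, for example by passing it to its container as an environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: my-client-pod           # hypothetical client Pod
  namespace: my-namespace
spec:
  containers:
  - name: client
    image: my-registry/my-client:latest              # placeholder image
    env:
    - name: BACKEND_URL                              # hypothetical variable read by the client code
      value: "http://my-service.my-namespace.svc.cluster.local"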
A ClusterIP is the usual communication choice for Deployments and ReplicaSets that do not communicate with anything outside of their Kubernetes cluster.

Here is the definition of a typical ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  selector:
    my-selector-label: my-selector-value
    ...
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  - name: https
    protocol: TCP
    port: 443
    targetPort: 9377
Each Service can work on several ports and can route any port (port) to the ports exposed by the containers (targetPort). However, it is very often the case that port = targetPort. Ports can be given names, but these names are optional. The specification of the protocol is also optional; when it is not explicitly specified, all supported level 4 protocols are allowed. The spec->selector property specifies the name/value pairs that select the Pods to which the Service routes the communications it receives.
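To make the selector mechanism concrete, here is a minimal sketch of a Deployment whose Pods would be selected by the my-service Service defined above; the Deployment name, container name, and image are placeholders. What matters is that the Pod template carries the same my-selector-label: my-selector-value pair that appears in the Service's spec->selector:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment                                # placeholder name
  namespace: my-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      my-selector-label: my-selector-value
  template:
    metadata:
      labels:
        my-selector-label: my-selector-value         # matched by the Service's selector
    spec:
      containers:
      - name: my-container                           # placeholder container
        image: my-registry/my-image:latest           # placeholder image
        ports:
        - containerPort: 9376                        # receives traffic sent to the Service's port 80
        - containerPort: 9377                        # receives traffic sent to the Service's port 443

Other Pods in the cluster can then reach these replicas through the stable name my-service.my-namespace.svc.cluster.local on ports 80 and 443, as explained previously.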
Since a ClusterIP service can't be accessed from outside the Kubernetes cluster, we need other Service types to expose a Kubernetes application on a public IP address.
NodePort-type Services are the simplest way to expose Pods to the outside world. In order to implement a NodePort service, the same port x is opened on all nodes of the Kubernetes cluster, and each node routes the traffic it receives on this port to a newly created ClusterIP service.

In turn, the ClusterIP service routes its traffic to all Pods selected by the service:

Figure 20.3: NodePort service

Therefore, you can simply communicate with port x through a public IP of any cluster node in order to access the Pods connected to the NodePort service. Of course, the whole process is completely automatic and hidden from the developer, whose only concern is getting the port number x so they know where to forward the external traffic.
The definition of a NodePort service is similar to the definition of a ClusterIP service, the only difference being that it specifies a value of NodePort for the spec->type property:

...
spec:
  type: NodePort
  selector:
    ...
By default, a node port x in the range 30000-32767 is automatically chosen for each targetPort specified by the Service. The port property associated with each targetPort is meaningless for NodePort Services, since all traffic passes through the selected node port x; by convention, it is set to the same value as the targetPort.
The developer can also set the node port x directly through a nodePort property:

...
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
    nodePort: 30020
...
When the Kubernetes cluster is hosted on a cloud, the most convenient way to expose some Pods to the outside world is through a LoadBalancer service, in which case the Kubernetes cluster is exposed to the outside world through a level 4 load balancer of the selected cloud provider.

A LoadBalancer is the usual communication choice for Deployments and ReplicaSets that do communicate outside of their Kubernetes cluster but don't need advanced HTTP features.

The definition of a LoadBalancer service is similar to that of a ClusterIP service, the only difference being that the spec->type property must be set to LoadBalancer:
...
spec:
  type: LoadBalancer
  selector:
    ...
If no further specification is added, a dynamic public IP is assigned automatically. However, if a specific public IP address is required, it can be used as the public IP address of the cluster load balancer by specifying it in the spec->loadBalancerIP property:
...
spec:
  type: LoadBalancer
  loadBalancerIP: <your public ip>
  selector:
    ...