###Instruction: I have created the service with the following config file:

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service",
        "labels": {
            "app": "my-service"
        }
    },
    "spec": {
        "ports": [
            {
                "port": 8080,
                "targetPort": "http-server"
            }
        ],
        "selector": {
            "app": "my-service"
        },
        "type": "LoadBalancer"
    }
}

Executed the following command:

$ kubectl create -f my-service.json
service "my-service" created

Then I want to see the external address of the service:

$ kubectl get services
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
my-service   10.0.203.169                 8080/TCP   3m
kubernetes   10.0.0.1       <none>        443/TCP    32m

In the examples of Kubernetes apps, the "EXTERNAL-IP" column is supposed to be set:

$ kubectl get services
NAME           CLUSTER_IP     EXTERNAL_IP    PORT(S)    SELECTOR                AGE
guestbook      10.0.217.218   146.148.81.8   3000/TCP   app=guestbook           1h
redis-master   10.0.136.3     <none>         6379/TCP   app=redis,role=master   1h
redis-slave    10.0.21.92     <none>         6379/TCP   app-redis,role=slave    1h
...

kubectl get events shows no information about the load balancer or the service. What might be the problem? I am using AWS, if that helps. By the way, if I look at the AWS console I can see the DNS name of the load balancer and can connect to it.

### Response: It can take a few minutes for the external IP to show up, because it takes a while for the cloud provider (AWS in this case) to provision a load balancer for you. On the off chance that it doesn't show up after a few minutes, run kubectl get events to see what's going wrong.
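As a quick check while waiting (an addition to the original answer): on AWS the load balancer is exposed as a DNS hostname rather than a numeric IP, and some kubectl versions don't render that hostname in the EXTERNAL-IP column, so kubectl describe is the more reliable place to look:

$ kubectl describe service my-service
# look for the "LoadBalancer Ingress:" field; on AWS it holds an *.elb.amazonaws.com hostname
$ kubectl get service my-service -o wide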
###Instruction: I have a question about the kubectl --selector flag. The help menu says:

-l, --selector="": Selector (label query) to filter on

However, it doesn't work as I expect. For example, I want to get the RC whose spec contains a selector like:

"spec": {
    "replicas": 2,
    "selector": {
        "app": "tas-core"
    },

When I run kubectl get pod --selector="app:tas-core", the system reports:

the provided selector "app:tas-core" is not valid: unable to parse requirement: label key: invalid value 'app:tas-core', Details: must match regex [a-z0-9?(.a-z0-9?)* / ] a-z0-9?

After checking the regexp [[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* / ] [a-z0-9]([-a-z0-9]*[a-z0-9])? I still can't find any string that passes it! I also tried kubectl get rc -l app/tas-core and nothing was returned. How should I use it? I also have a second question: how do I filter all pods that carry a label like "labels": { "app": "tas-core" }?

### Response: Try kubectl get pods --selector=app=tas-core, as in http://kubernetes.io/docs/user-guide/kubectl-cheatsheet/
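The key point is that label selectors use key=value syntax (or set-based expressions), not key:value. A few equivalent forms of the same query:

# equality-based selector (same as the cheat-sheet form above)
kubectl get pods -l app=tas-core
# the same filter applied to replication controllers
kubectl get rc --selector=app=tas-core
# set-based form, useful when matching one of several values
kubectl get pods -l 'app in (tas-core)'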
###Instruction: I want to create MongoDB replica of the three machines, it needs to specify the IP-addresses of these machines? But they run into a pod's and have a dynamic IP. If you try to specify the DNS-name service MongoDB he says No host described in new configuration XXXXX for replica set app_replica maps to this node How to configure MongoDB replica for c k8s? I use DNS-addons for k8s. And I try to initialize the cluster as follows: var config = { "_id" : "app_replica", "members" : [ { "_id" : 0, "host" : "mongodb-node-01" }, { "_id" : 1, "host" : "mongodb-node-02" }, { "_id" : 2, "host" : "mongodb-node-03", "arbiterOnly" : true } ] } rs.initiate(config) Config Service: apiVersion: v1 kind: Service metadata: name: "mongodb-node-01" labels: app: "mongodb-node-01" spec: ports: - port: 27017 targetPort: 27001 selector: app: "mongodb-node-01" Config Replication Controller: apiVersion: v1 kind: ReplicationController metadata: name: "mongodb-node-01" labels: app: "mongodb-node-01" spec: replicas: 1 selector: app: "mongodb-node-01" template: metadata: labels: app: "mongodb-node-01" spec: containers: - name: "mongodb-node-01" image: 192.168.0.139:5000/db/mongo command: - mongod - "--replSet" - "app_replica" - "--smallfiles" - "--noprealloc" env: - name: ENV value: "prod" ports: - containerPort: 27017 volumeMounts: - name: mongo-persistent-storage mountPath: /data/db readOnly: false volumes: - name: mongo-persistent-storage hostPath: path: /data/mongo/mongodb-node-01 nodeSelector: database: "true" mongodb01: "true" ### Response: Take a look at https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes, there is a great pdf tutorial.
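Beyond the linked tutorial, the usual cause of the "no host ... maps to this node" error is that the member host names do not resolve to (or are not recognized by) the local mongod. One possible direction — a sketch only, assuming the default namespace and the standard cluster-DNS suffix — is to initiate the replica set with the services' fully qualified DNS names so that every member and client resolves the same stable names:

var config = {
    "_id": "app_replica",
    "members": [
        { "_id": 0, "host": "mongodb-node-01.default.svc.cluster.local:27017" },
        { "_id": 1, "host": "mongodb-node-02.default.svc.cluster.local:27017" },
        { "_id": 2, "host": "mongodb-node-03.default.svc.cluster.local:27017", "arbiterOnly": true }
    ]
}
rs.initiate(config)

Each mongod then needs to recognize its own corresponding name, for example by running rs.initiate from the pod behind mongodb-node-01 or by making that name resolve locally inside the pod.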
###Instruction: How do I configure the Google Computing Engine (GKE) L7 load balancer to serve HTTPS? I have made HTTP work, but when I configure for TLS as described in the guide, it does not respond to HTTPS requests. Specifically, the spec.tls section should ensure that the load balancer makes use of HTTPS. Ingress Specification apiVersion: extensions/v1beta1 kind: Ingress metadata: name: l7-ingress spec: tls: - secretName: web-secret backend: serviceName: web servicePort: 8080 Ingress Description under Kubernetes βœ— kubectl describe ing Name: l7-ingress Namespace: default Address: 130.211.11.24 Default backend: web:8080 (10.32.2.5:8080) TLS: web-secret terminates Rules: Host Path Backends ---- ---- -------- Annotations: target-proxy: k8s-tp-default-l7-ingress url-map: k8s-um-default-l7-ingress backends: {"k8s-be-32051":"HEALTHY"} forwarding-rule: k8s-fw-default-l7-ingress No events. L7 Controller Logs βœ— kubectl logs --namespace=kube-system l7-lb-controller-v0.6.0-fbj20 -c l7-lb-controller I0420 13:46:15.089090 1 main.go:159] Starting GLBC image: glbc:0.6.0 I0420 13:46:16.149998 1 gce.go:245] Using existing Token Source &oauth2.reuseTokenSource{new:google.computeSource{account:""}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)} I0420 13:46:16.150399 1 controller.go:190] Starting loadbalancer controller I0420 14:37:02.033271 1 event.go:211] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"l7-ingress", UID:"585651e5-0705-11e6-88d0-42010af0005e", APIVersion:"extensions", ResourceVersion:"2367", FieldPath:""}): type: 'Normal' reason: 'ADD' default/l7-ingress I0420 14:37:02.227796 1 instances.go:56] Creating instance group k8s-ig I0420 14:37:06.166686 1 gce.go:1654] Adding port 32051 to instance group k8s-ig with 0 ports I0420 14:37:06.834215 1 backends.go:116] Creating backend for instance group k8s-ig port 32051 named port &{port32051 32051 []} I0420 14:37:07.036501 1 healthchecks.go:49] Creating health check k8s-be-32051 I0420 14:37:16.305240 1 gce.go:1654] Adding port 30007 to instance group k8s-ig with 1 ports I0420 14:37:16.911701 1 backends.go:116] Creating backend for instance group k8s-ig port 30007 named port &{port30007 30007 []} I0420 14:37:17.108589 1 healthchecks.go:49] Creating health check k8s-be-30007 I0420 14:37:25.213110 1 loadbalancers.go:128] Creating l7 default-l7-ingress I0420 14:37:26.038349 1 loadbalancers.go:288] Creating url map k8s-um-default-l7-ingress for backend k8s-be-30007 I0420 14:37:30.305857 1 loadbalancers.go:304] Creating new http proxy for urlmap k8s-um-default-l7-ingress I0420 14:37:34.643141 1 loadbalancers.go:397] Creating forwarding rule for proxy [k8s-tp-default-l7-ingress] and ip :80-80 I0420 14:37:43.301563 1 controller.go:325] Updating loadbalancer default/l7-ingress with IP 130.211.11.24 I0420 14:37:43.329469 1 event.go:211] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"l7-ingress", UID:"585651e5-0705-11e6-88d0-42010af0005e", APIVersion:"extensions", ResourceVersion:"2367", FieldPath:""}): type: 'Normal' reason: 'CREATE' ip: 130.211.11.24 ### Response: Due to a likely bug in Kubernetes, one must first delete the ingress and then re-create it (rather than doing kubectl replace as I did): kubectl delete -f ingress.yaml kubectl create -f ingress.yaml
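For reference, the web-secret referenced under spec.tls is expected to hold the certificate and key under the tls.crt and tls.key data keys. A minimal sketch (the base64 values are placeholders to fill in yourself):

apiVersion: v1
kind: Secret
metadata:
  name: web-secret
type: Opaque
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>

If the secret is missing or malformed, the GLBC controller typically creates only the HTTP target proxy, which matches the symptom of HTTP working while HTTPS does not.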
###Instruction: I have a pod whose purpose is to take incoming data and write it to the host volume, and I'm running this pod on all the minions. When I set up a NodePort service for these pods, traffic goes to one pod at a time. How do I send a request to all of these pods on the different minions? How do I bypass the load balancing here? I want the data to be available on every minion's host volume.

### Response: A service uses a selector to identify the list of pods to proxy to (if they're in the Ready state). You could simply ask for the same list of pods with a GET request:

$ curl -G "$MASTER/api/v1/namespaces/$NAMESPACE/pods?labelSelector=$KEY=$VALUE"

And then manually send your request to each of the pod ip:port endpoints. If you need to be able to send the request from outside the cluster network, you could create a proxy pod (exposed to the external network through the standard means). The proxy pod could watch for pods with your label (similar to above), and forward any requests it receives to the list of ready pods. A similar effect could be achieved using hostPort and forwarding to nodes, but the use of hostPort is discouraged (see best practices).
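From anywhere with kubectl access, the same fan-out can be scripted with kubectl instead of the raw API. This is only a sketch — the app=collector label, the port 8080 and the /ingest path are made-up placeholders for your own values:

# fetch the pod IPs matching the label and hit each one directly
for ip in $(kubectl get pods -l app=collector -o jsonpath='{.items[*].status.podIP}'); do
    curl "http://${ip}:8080/ingest" --data-binary @payload.json
done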
###Instruction: I need to scale down the number of pods in a replication controller. However, I need a clean scale down: stop sending load to the pods that will be scaled down, wait for each pod to finish handling its in-flight load, and only then delete the pod. I do not want a pod to be deleted while it is still doing work. Is there a way to do that with Kubernetes?

### Response: Check out the Termination of Pods section in the pods user guide. You might wish to implement a preStop hook to ensure traffic is drained before the TERM signal is sent to the processes in the pod.
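A preStop hook plus a sufficiently long grace period is the usual shape of this. A minimal sketch — the container name, image and drain script are placeholders, and the actual draining logic is application-specific:

spec:
  terminationGracePeriodSeconds: 120   # must be long enough for in-flight work to finish
  containers:
  - name: worker                       # placeholder
    image: example/worker:latest       # placeholder
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/drain-and-wait.sh"]   # placeholder drain script

During scale-down, Kubernetes runs the preStop hook, then sends TERM, and only sends KILL after the grace period expires, so the hook is the place to stop accepting traffic and wait for ongoing requests to complete.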
###Instruction: Is there any doc or Go file that shows all the various naming restrictions for Kubernetes entities? I am interested in the character set and length restrictions.

### Response: The identifiers doc on GitHub is the closest thing to what you're looking for.
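As a rough summary (worth double-checking against the doc for your version): most object names must be valid DNS subdomains — lowercase alphanumerics, '-' and '.', up to 253 characters — while names used as DNS labels (service names, for example) are limited to 63 characters and the pattern [a-z0-9]([-a-z0-9]*[a-z0-9])?. A quick shell check against that label pattern:

# prints the name only if it is a valid DNS-1123 label no longer than 63 characters (placeholder name shown)
echo "my-service-1" | grep -E '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$' | awk 'length($0) <= 63'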
###Instruction: I have a Kubernetes cluster running in high-availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an "x509: signed by unknown certificate authority" message for the API server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue by using the default service account tokens available. All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?

### Response: It would be ideal to have the service IP of the API in the SAN field of all your server certificates. If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl. If you are trying to reach the API from within a pod, you could use the secrets mounted via the service account. By default, if you use the default secret, the CA certificate and a signed token are mounted at /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solve the unknown-certificate-authority error and provide an easy way to authenticate against your API servers at the same time.
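From within a pod, using those mounted credentials looks roughly like this (the service IP 10.100.0.1 is taken from the question; substitute your own API address or the kubernetes service DNS name):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert "$CA" -H "Authorization: Bearer $TOKEN" https://10.100.0.1/api/v1/namespaces/default/pods

Because the request is verified against the mounted ca.crt and authenticated with the signed token, this avoids both the unknown-certificate-authority error and the need to skip TLS verification.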
###Instruction: I am adding an environment variable to a Kubernetes replication controller spec, but when I update the running RC from the spec, the environment variable isn't added to it. How come? I update the RC according to the following spec, where the environment variable IRON_PASSWORD gets added since the previous revision, but the running RC isn't updated correspondingly, kubectl replace -f docker/podspecs/web-controller.yaml: apiVersion: v1 kind: ReplicationController metadata: name: web labels: app: web spec: replicas: 1 selector: app: web template: metadata: labels: app: web spec: containers: - name: web image: quay.io/aknuds1/muzhack # Always pull latest version of image imagePullPolicy: Always env: - name: APP_URI value: https://staging.muzhack.com - name: IRON_PASSWORD value: password ports: - name: http-server containerPort: 80 imagePullSecrets: - name: quay.io After updating the RC according to the spec, it looks like this (kubectl get pod web-scpc3 -o yaml), notice that IRON_PASSWORD is missing: apiVersion: v1 kind: Pod metadata: annotations: kubernetes.io/created-by: | {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web","uid":"c1c4185f-0867-11e6-b557-42010af000f7","apiVersion":"v1","resourceVersion":"17714"}} kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container web' creationTimestamp: 2016-04-22T08:54:00Z generateName: web- labels: app: web name: web-scpc3 namespace: default resourceVersion: "17844" selfLink: /api/v1/namespaces/default/pods/web-scpc3 uid: c1c5035f-0867-11e6-b557-42010af000f7 spec: containers: - env: - name: APP_URI value: https://staging.muzhack.com image: quay.io/aknuds1/muzhack imagePullPolicy: Always name: web ports: - containerPort: 80 name: http-server protocol: TCP resources: requests: cpu: 100m terminationMessagePath: /dev/termination-log volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-vfutp readOnly: true dnsPolicy: ClusterFirst imagePullSecrets: - name: quay.io nodeName: gke-staging-default-pool-f98acf11-ba7d restartPolicy: Always securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - name: default-token-vfutp secret: secretName: default-token-vfutp status: conditions: - lastProbeTime: null lastTransitionTime: 2016-04-22T09:00:49Z message: 'containers with unready status: [web]' reason: ContainersNotReady status: "False" type: Ready containerStatuses: - containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a image: quay.io/aknuds1/muzhack imageID: docker://8fef42c3eba5abe59c853e9ba811b3e9f10617a257396f48e564e3206e0e1103 lastState: terminated: containerID: docker://dae22acb9f236433389ac0c51b730423ef9159d0c0e12770a322c70201fb7e2a exitCode: 1 finishedAt: 2016-04-22T09:00:48Z reason: Error startedAt: 2016-04-22T09:00:46Z name: web ready: false restartCount: 6 state: waiting: message: Back-off 5m0s restarting failed container=web pod=web-scpc3_default(c1c5035f-0867-11e6-b557-42010af000f7) reason: CrashLoopBackOff hostIP: 10.132.0.3 phase: Running podIP: 10.32.0.3 startTime: 2016-04-22T08:54:00Z ### Response: Replacing the ReplicationController object does not actually recreate the underlying pods, so the pods keep the spec from the previous configuration of the RC until they need to be recreated. If you delete the running pod, the new one that gets created to replace it will have the new environment variable. 
This is what the kubectl rolling-update command is for, and part of the reason why the Deployment type was added in Kubernetes 1.2.
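The quickest way to confirm the behaviour in this case (pod name taken from the question) is simply to delete the running pod and let the RC recreate it from the updated template:

kubectl delete pod web-scpc3
kubectl get pods -l app=web    # the replacement web-... pod should now have IRON_PASSWORD set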
###Instruction: I am following the hellonode tutorial on kubernetes.io (http://kubernetes.io/docs/hellonode/) and am getting an error when trying to do the 'Create your pod' section. When I run this command (replacing PROJECT_ID with the one I created) I get the following:

$ kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I get a similar error just typing kubectl version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I'm not sure what to do, since I have no experience with Kubernetes other than following the steps of this tutorial.

### Response: I figured out the issue. In the "Create your cluster" section I missed a critical step: "Please ensure that you have configured kubectl to use the cluster you just created." The word "configured" in that sentence links to instructions on how to do this. The steps are as follows:

gcloud config set project PROJECT
gcloud config set compute/zone ZONE
gcloud config set container/cluster CLUSTER_NAME
gcloud container clusters get-credentials CLUSTER_NAME
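To verify that kubectl is now pointed at the new cluster rather than the default localhost:8080, a quick check:

kubectl config current-context   # should name the GKE cluster rather than being empty
kubectl cluster-info             # should show the GKE master address instead of a connection error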
###Instruction: I have a pod with some terrible, buggy software in it. One reason Kubernetes is great is that it'll just restart the software when it crashes, which is awesome. Kubernetes was designed for good software, not terrible software, so it does an exponential backoff while restarting pods. This means I have to wait five minutes between crashes before my pods are restarted. Is there any way to cap the Kubernetes backoff strategy? I'd like it to never wait longer than thirty seconds before starting the pod again.

### Response: Unfortunately, the max backoff time for container restarts is not tunable, for the sake of node reliability (too many container restarts can overwhelm the node). If you absolutely want to change it in your cluster, you will need to modify the max backoff time in the code, compile your own kubelet binary, and distribute it onto your nodes.
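What "modify and rebuild" might look like, very roughly — the grep target and build invocation are assumptions that depend on the Kubernetes version you are building, so treat this as a sketch rather than a recipe:

git clone https://github.com/kubernetes/kubernetes && cd kubernetes
grep -rn "ContainerBackOff" pkg/kubelet/    # locate the max container-restart backoff constant
# edit the constant (e.g. 5 minutes -> 30 seconds), then rebuild only the kubelet
make all WHAT=cmd/kubelet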
###Instruction: I have a K8S cluster setup on openstack using the COREOS guide. I am getting following error while accessing the GRAFANA UI on http://master-ip:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ Error: 'dial tcp 172.17.0.5:3000: i/o timeout' Trying to reach: 'http://172.17.0.5:3000/' I can access the InfluxDB UI at the influxdb-nodeip:8083. I can curl to 172.17.0.5:3000 from within the node. Steps I followed: Created the K8S cluster with 1 master and 1 node. Created namespace Setup DNS Confirmed DNS is working using busybox example. Setup InfluxDB and Grafana. Grafana container log 2016/04/21 14:53:33 [I] Listen: http://0.0.0.0:3000/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana .Grafana is up and running. Creating default influxdb datasource... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 242 100 37 100 205 3274 18143 --:--:-- --:--:-- --:--:-- 18636 HTTP/1.1 200 OK Content-Type: application/json; charset=UTF-8 Set-Cookie: grafana_sess=cd44a6ed54b863df; Path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana; HttpOnly Date: Thu, 21 Apr 2016 14:53:34 GMT Content-Length: 37 {"id":1,"message":"Datasource added"} Importing default dashboards... Importing /dashboards/cluster.json ... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 71639 100 49 100 71590 539 769k --:--:-- --:--:-- --:--:-- 776k HTTP/1.1 100 Continue Cluster-info cluster-info Kubernetes master is running at <master>:8080 Heapster is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/heapster KubeDNS is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns Grafana is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana InfluxDB is running at <master>:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.8", GitCommit:"a8af33dc07ee08defa2d503f81e7deea32dd1d3b", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.8", GitCommit:"a8af33dc07ee08defa2d503f81e7deea32dd1d3b", GitTreeState:"clean"} Node iptables: sudo iptables -n -t nat -L Chain PREROUTING (policy ACCEPT) target prot opt source destination KUBE-PORTALS-CONTAINER all -- 0.0.0.0/0 0.0.0.0/0 /* handle ClusterIPs; NOTE: this must be before the NodePort rul es */ DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL KUBE-NODEPORT-CONTAINER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE : this must be the last rule in the chain */ Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination KUBE-PORTALS-HOST all -- 0.0.0.0/0 0.0.0.0/0 /* handle ClusterIPs; NOTE: this must be before the NodePort rules */ DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL KUBE-NODEPORT-HOST all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: thi s must be the last rule in the chain */ Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0 MASQUERADE tcp -- 172.17.0.5 172.17.0.5 tcp dpt:8086 MASQUERADE tcp -- 172.17.0.5 172.17.0.5 tcp dpt:8083 Chain DOCKER (2 references) target prot opt source destination DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8086 to:172.17.0.5:8086 DNAT tcp -- 
0.0.0.0/0 0.0.0.0/0 tcp dpt:8083 to:172.17.0.5:8083 Chain KUBE-NODEPORT-CONTAINER (1 references) target prot opt source destination Chain KUBE-NODEPORT-HOST (1 references) target prot opt source destination Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- 0.0.0.0/0 10.100.0.1 /* default/kubernetes: */ tcp dpt:443 redir ports 43104 REDIRECT udp -- 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns */ udp dpt:53 redir ports 60423 REDIRECT tcp -- 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns-tcp */ tcp dpt:53 redir ports 35036 REDIRECT tcp -- 0.0.0.0/0 10.100.176.182 /* kube-system/monitoring-grafana: */ tcp dpt:80 redir ports 41454 REDIRECT tcp -- 0.0.0.0/0 10.100.17.81 /* kube-system/heapster: */ tcp dpt:80 redir ports 40296 REDIRECT tcp -- 0.0.0.0/0 10.100.228.184 /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 redir ports 39963 REDIRECT tcp -- 0.0.0.0/0 10.100.228.184 /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 redir ports 40214 Chain KUBE-PORTALS-HOST (1 references) target prot opt source destination DNAT tcp -- 0.0.0.0/0 10.100.0.1 /* default/kubernetes: */ tcp dpt:443 to:10.10.1.84:43104 DNAT udp -- 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns */ udp dpt:53 to:10.10.1.84:60423 DNAT tcp -- 0.0.0.0/0 10.100.0.10 /* kube-system/kube-dns:dns-tcp */ tcp dpt:53 to:10.10.1.84:35036 DNAT tcp -- 0.0.0.0/0 10.100.176.182 /* kube-system/monitoring-grafana: */ tcp dpt:80 to:10.10.1.84:41454 DNAT tcp -- 0.0.0.0/0 10.100.17.81 /* kube-system/heapster: */ tcp dpt:80 to:10.10.1.84:40296 DNAT tcp -- 0.0.0.0/0 10.100.228.184 /* kube-system/monitoring-influxdb:http */ tcp dpt:8083 to:10.10.1.84:39963 DNAT tcp -- 0.0.0.0/0 10.100.228.184 /* kube-system/monitoring-influxdb:api */ tcp dpt:8086 to:10.10.1.84:40214 describe pod --namespace=kube-system monitoring-influxdb-grafana-v3-grbs1 Name: monitoring-influxdb-grafana-v3-grbs1 Namespace: kube-system Image(s): gcr.io/google_containers/heapster_influxdb:v0.5,gcr.io/google_containers/heapster_grafana:v2.6.0-2 Node: 10.10.1.84/10.10.1.84 Start Time: Thu, 21 Apr 2016 14:53:31 +0000 Labels: k8s-app=influxGrafana,kubernetes.io/cluster-service=true,version=v3 Status: Running Reason: Message: IP: 172.17.0.5 Replication Controllers: monitoring-influxdb-grafana-v3 (1/1 replicas created) Containers: influxdb: Container ID: docker://4822dc9e98b5b423cdd1ac8fe15cb516f53ff45f48faf05b067765fdb758c96f Image: gcr.io/google_containers/heapster_influxdb:v0.5 Image ID: docker://eb8e59964b24fd1f565f9c583167864ec003e8ba6cced71f38c0725c4b4246d1 QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: cpu: 100m memory: 500Mi Requests: cpu: 100m memory: 500Mi State: Running Started: Thu, 21 Apr 2016 14:53:32 +0000 Ready: True Restart Count: 0 Environment Variables: grafana: Container ID: docker://46888bd4a4b0c51ab8f03a17db2dbf5bfe329ef7c389b7422b86344a206b3653 Image: gcr.io/google_containers/heapster_grafana:v2.6.0-2 Image ID: docker://7553afcc1ffd82fe359fe7d69a5d0d7fef3020e45542caeaf95e5623ded41fbb QoS Tier: cpu: Guaranteed memory: Guaranteed Limits: cpu: 100m memory: 100Mi Requests: memory: 100Mi cpu: 100m State: Running Started: Thu, 21 Apr 2016 14:53:32 +0000 Ready: True Restart Count: 0 Environment Variables: INFLUXDB_SERVICE_URL: http://monitoring-influxdb:8086 GF_AUTH_BASIC_ENABLED: false GF_AUTH_ANONYMOUS_ENABLED: true GF_AUTH_ANONYMOUS_ORG_ROLE: Admin GF_SERVER_ROOT_URL: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ Conditions: Type Status Ready True Volumes: 
influxdb-persistent-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: grafana-persistent-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-lacal: Type: Secret (a secret that should populate this volume) SecretName: default-token-lacal Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 23m 23m 5 {scheduler } FailedScheduling Failed for reason PodFitsHostPorts and possibly others 22m 22m 1 {kubelet 10.10.1.84} implicitly required container POD Created Created with docker id 97a95bd1f80a 22m 22m 1 {scheduler } Scheduled Successfully assigned monitoring-influxdb-grafana-v3-grbs1 to 10.10.1.84 22m 22m 1 {kubelet 10.10.1.84} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine 22m 22m 1 {kubelet 10.10.1.84} spec.containers{grafana} Pulled Container image "gcr.io/google_containers/heapster_grafana:v2.6.0-2" already present on machine 22m 22m 1 {kubelet 10.10.1.84} spec.containers{grafana} Created Created with docker id 46888bd4a4b0 22m 22m 1 {kubelet 10.10.1.84} spec.containers{grafana} Started Started with docker id 46888bd4a4b0 22m 22m 1 {kubelet 10.10.1.84} spec.containers{influxdb} Pulled Container image "gcr.io/google_containers/heapster_influxdb:v0.5" already present on machine 22m 22m 1 {kubelet 10.10.1.84} implicitly required container POD Started Started with docker id 97a95bd1f80a 22m 22m 1 {kubelet 10.10.1.84} spec.containers{influxdb} Created Created with docker id 4822dc9e98b5 22m 22m 1 {kubelet 10.10.1.84} spec.containers{influxdb} Started Started with docker id 4822dc9e98b5 Don't know what else to share. I can share other information if required. Please help, I couldn't find any solution for this. EDIT The response from the command as suggested in the answer below: kubectl attach -it --namespace=kube-system monitoring-influxdb-grafana-v2-c2tj9 J[04/21/16 23:30:19] [INFO] Loading configuration file /config/config.toml 0+---------------------------------------------+ 0| _____ __ _ _____ ____ | 0| |_ _| / _| | | __ \| _ \ | 0| | | _ __ | |_| |_ ___ _| | | | |_) | | 0| | | | '_ \| _| | | | \ \/ / | | | _ < | 0| _| |_| | | | | | | |_| |> <| |__| | |_) | | 0| |_____|_| |_|_| |_|\__,_/_/\_\_____/|____/ | 0+---------------------------------------------+ Thanks ### Response: To help drill down on what the problem is, I'd recommend seeing if the master is able to reach the pod at all. This'll help determine whether the issue is in your networking setup as a whole or just with the service routing from the master. You should be able to verify whether the apiserver can reach the pod by kubectl attach -it --namespace=kube-system monitoring-influxdb-grafana-v3-grbs1 and seeing whether it's able to connect. If it can connect, then there's something wrong with the service routing. If it can't, then the master is having trouble communicating with the node.
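Two more direct probes, run on the master host itself, can help separate the two cases the answer describes (pod IP and name taken from the question):

curl -m 5 http://172.17.0.5:3000/
# if that works but the proxy URL doesn't, the problem is in the service/proxy routing rather than master-to-pod networking:
curl -m 5 http://localhost:8080/api/v1/proxy/namespaces/kube-system/pods/monitoring-influxdb-grafana-v3-grbs1:3000/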
###Instruction: I'm using kubernetes 1.2 example to run 2 cassandra nodes for testing. https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/cassandra/README.md I use daemonset to have one cassandra node by kubernetes node. Everything work fine till one cassandra node restart. IP address of the POD changes and nodetools status returns Node DOWN > kubectl exec -it cassandra-lnzhj -- nodetool status fruition Datacenter: datacenter1 ======================= Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack UN 10.216.1.4 25.22 MB 256 39.6% 786aede9-ec4f-4942-b52a-135bc3cd68ce rack1 UN 10.216.0.3 2.11 MB 256 40.1% 457f7322-131a-4499-b677-4d50691207ba rack1 DN 10.216.0.2 377.41 KB 256 38.8% aa2ca115-e8ea-4c62-8d57-bfc5b3fabade rack1 Then when i try to send a simple "select * from table;" on a keyspace with a replication factor of 2, I've this error: Traceback (most recent call last): File "/usr/bin/cqlsh", line 1093, in perform_simple_statement rows = self.session.execute(statement, trace=self.tracing_enabled) File "/usr/share/cassandra/lib/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py", line 1602, in execute result = future.result() File "/usr/share/cassandra/lib/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py", line 3347, in result raise self._final_exception Unavailable: code=1000 [Unavailable exception] message="Cannot achieve consistency level ONE" info={'required_replicas': 1, 'alive_replicas': 0, 'consistency': 'ONE'} How to keep POD IP address in order not to have Down node when Kubernetes restart it? Is there a better way to do it with cassandra configuration? ### Response: You can't achieve this with current Kubernetes. You need the implementation of PetSets due in v1.3.
I'm using kubernetes 1.2 example to run 2 cassandra nodes for testing. https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/cassandra/README.md I use daemonset to have one cassandra node by kubernetes node. Everything work fine till one cassandra node restart. IP address of the POD changes and nodetools status returns Node DOWN > kubectl exec -it cassandra-lnzhj -- nodetool status fruition Datacenter: datacenter1 ======================= Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack UN 10.216.1.4 25.22 MB 256 39.6% 786aede9-ec4f-4942-b52a-135bc3cd68ce rack1 UN 10.216.0.3 2.11 MB 256 40.1% 457f7322-131a-4499-b677-4d50691207ba rack1 DN 10.216.0.2 377.41 KB 256 38.8% aa2ca115-e8ea-4c62-8d57-bfc5b3fabade rack1 Then when i try to send a simple "select * from table;" on a keyspace with a replication factor of 2, I've this error: Traceback (most recent call last): File "/usr/bin/cqlsh", line 1093, in perform_simple_statement rows = self.session.execute(statement, trace=self.tracing_enabled) File "/usr/share/cassandra/lib/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py", line 1602, in execute result = future.result() File "/usr/share/cassandra/lib/cassandra-driver-internal-only-2.7.2.zip/cassandra-driver-2.7.2/cassandra/cluster.py", line 3347, in result raise self._final_exception Unavailable: code=1000 [Unavailable exception] message="Cannot achieve consistency level ONE" info={'required_replicas': 1, 'alive_replicas': 0, 'consistency': 'ONE'} How to keep POD IP address in order not to have Down node when Kubernetes restart it? Is there a better way to do it with cassandra configuration?
The idea is to use a different 'service' and 'deployment' for each of the nodes you want to create. As you said, you have to create a custom NODENAME for each, i.e.: RABBITMQ_NODENAME=rabbit@rabbitmq-1 Also rabbitmq-1, rabbitmq-2 and rabbitmq-3 have to be resolvable from each node. For that you can use kubedns. The /etc/resolv.conf will look like: search rmq.svc.cluster.local and /etc/hosts must contain: 127.0.0.1 rabbitmq-1 # or rabbitmq-2 on node 2... The services are there to create a stable network identity for each node: rabbitmq-1.svc.cluster.local rabbitmq-2.svc.cluster.local rabbitmq-3.svc.cluster.local The different deployment resources will allow you to mount a different volume on each node. I'm working on a deployment tool to simplify those actions; I've done a demo on how I scale and deploy rabbitmq from 1 to 3 nodes on kubernetes: https://asciinema.org/a/2ktj7kr2d2m3w25xrpz7mjkbu?speed=1.5 More generally, the complexity you're facing when deploying a clustered application is addressed in the 'petset proposal': https://github.com/kubernetes/kubernetes/pull/18016
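A rough sketch of what one such per-node pair could look like (the namespace, ports and image tag here are illustrative assumptions, not taken from the answer above; the same pair is repeated for rabbitmq-2, rabbitmq-3, and so on):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-1
  namespace: rmq
spec:
  selector:
    app: rabbitmq-1
  ports:
  - name: amqp
    port: 5672
  - name: epmd
    port: 4369
  - name: clustering
    port: 25672
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq-1
  namespace: rmq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: rabbitmq-1
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.6
        env:
        - name: RABBITMQ_NODENAME
          value: rabbit@rabbitmq-1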
###Instruction: I'm trying to run Rabbitmq using Kubernetes on AWS. I'm using the official Rabbitmq docker container. Each time the pod restarts the rabbitmq container gets a new hostname. I've setup a service (of type LoadBalancer) for the pod with a resolvable DNS name. But when I use an EBS to make the rabbit config/messsage/queues persistent between restarts it breaks with: exception exit: {{failed_to_cluster_with, ['rabbitmq@rabbitmq-deployment-2901855891-nord3'], "Mnesia could not connect to any nodes."}, {rabbit,start,[normal,[]]}} in function application_master:init/4 (application_master.erl, line 134) rabbitmq-deployment-2901855891-nord3 is the previous hostname rabbitmq container. It is almost like Mnesia saved the old hostname :-/ The container's info looks like this: Starting broker... =INFO REPORT==== 25-Apr-2016::12:42:42 === node : rabbitmq@rabbitmq-deployment-2770204827-cboj8 home dir : /var/lib/rabbitmq config file(s) : /etc/rabbitmq/rabbitmq.config cookie hash : XXXXXXXXXXXXXXXX log : tty sasl log : tty database dir : /var/lib/rabbitmq/mnesia/rabbitmq I'm only able to set the first part of the node name to rabbitmq using the RABBITMQ_NODENAME environment variable. Setting RABBITMQ_NODENAME to a resolvable DNS name breaks with: Can't set short node name!\nPlease check your configuration\n" Setting RABBITMQ_USE_LONGNAME to true breaks with: Can't set long node name!\nPlease check your configuration\n" Update: Setting RABBITMQ_NODENAME to rabbitmq@localhost works but that negates any possibility to cluster instances. Starting broker... =INFO REPORT==== 26-Apr-2016::11:53:19 === node : rabbitmq@localhost home dir : /var/lib/rabbitmq config file(s) : /etc/rabbitmq/rabbitmq.config cookie hash : 9WtXr5XgK4KXE/soTc6Lag== log : tty sasl log : tty database dir : /var/lib/rabbitmq/mnesia/rabbitmq@localhost Setting RABBITMQ_NODENAME to the service name, in this case rabbitmq-service like so rabbitmq@rabbitmq-service also works since kubernetes service names are internally resolvable via DNS. Starting broker... =INFO REPORT==== 26-Apr-2016::11:53:19 === node : rabbitmq@rabbitmq-service home dir : /var/lib/rabbitmq config file(s) : /etc/rabbitmq/rabbitmq.config cookie hash : 9WtXr5XgK4KXE/soTc6Lag== log : tty sasl log : tty database dir : /var/lib/rabbitmq/mnesia/rabbitmq@rabbitmq-service Is this the right way though? Will I still be able to cluster multiple instances if the node names are the same? ### Response: The idea is to use a different 'service' and 'deployment' for each of the node you want to create. As you said, you have to create a custom NODENAME for each i.e: RABBITMQ_NODENAME=rabbit@rabbitmq-1 Also rabbitmq-1,rabbitmq-2,rabbitmq-3 have to be resolved from each nodes. For that you can use kubedns. The /etc/resolv.conf will look like: search rmq.svc.cluster.local and /etc/hosts must contains: 127.0.0.1 rabbitmq-1 # or rabbitmq-2 on node 2... The services are here to create a stable network identity for each nodes rabbitmq-1.svc.cluster.local rabbitmq-2.svc.cluster.local rabbitmq-3.svc.cluster.local The different deployments resources will allow you to mount a different volume on each node. I'm working on a deployment tool to simplify those actions: I've done a demo on how I scale and deploy rabbitmq from 1 to 3 nodes on kubernetes: https://asciinema.org/a/2ktj7kr2d2m3w25xrpz7mjkbu?speed=1.5 More generally, the complexity your facing to deploy a clustered application is addressed in the 'petset proposal': https://github.com/kubernetes/kubernetes/pull/18016
I'm trying to run Rabbitmq using Kubernetes on AWS. I'm using the official Rabbitmq docker container. Each time the pod restarts the rabbitmq container gets a new hostname. I've setup a service (of type LoadBalancer) for the pod with a resolvable DNS name. But when I use an EBS to make the rabbit config/messsage/queues persistent between restarts it breaks with: exception exit: {{failed_to_cluster_with, ['rabbitmq@rabbitmq-deployment-2901855891-nord3'], "Mnesia could not connect to any nodes."}, {rabbit,start,[normal,[]]}} in function application_master:init/4 (application_master.erl, line 134) rabbitmq-deployment-2901855891-nord3 is the previous hostname rabbitmq container. It is almost like Mnesia saved the old hostname :-/ The container's info looks like this: Starting broker... =INFO REPORT==== 25-Apr-2016::12:42:42 === node : rabbitmq@rabbitmq-deployment-2770204827-cboj8 home dir : /var/lib/rabbitmq config file(s) : /etc/rabbitmq/rabbitmq.config cookie hash : XXXXXXXXXXXXXXXX log : tty sasl log : tty database dir : /var/lib/rabbitmq/mnesia/rabbitmq I'm only able to set the first part of the node name to rabbitmq using the RABBITMQ_NODENAME environment variable. Setting RABBITMQ_NODENAME to a resolvable DNS name breaks with: Can't set short node name!\nPlease check your configuration\n" Setting RABBITMQ_USE_LONGNAME to true breaks with: Can't set long node name!\nPlease check your configuration\n" Update: Setting RABBITMQ_NODENAME to rabbitmq@localhost works but that negates any possibility to cluster instances. Starting broker... =INFO REPORT==== 26-Apr-2016::11:53:19 === node : rabbitmq@localhost home dir : /var/lib/rabbitmq config file(s) : /etc/rabbitmq/rabbitmq.config cookie hash : 9WtXr5XgK4KXE/soTc6Lag== log : tty sasl log : tty database dir : /var/lib/rabbitmq/mnesia/rabbitmq@localhost Setting RABBITMQ_NODENAME to the service name, in this case rabbitmq-service like so rabbitmq@rabbitmq-service also works since kubernetes service names are internally resolvable via DNS. Starting broker... =INFO REPORT==== 26-Apr-2016::11:53:19 === node : rabbitmq@rabbitmq-service home dir : /var/lib/rabbitmq config file(s) : /etc/rabbitmq/rabbitmq.config cookie hash : 9WtXr5XgK4KXE/soTc6Lag== log : tty sasl log : tty database dir : /var/lib/rabbitmq/mnesia/rabbitmq@rabbitmq-service Is this the right way though? Will I still be able to cluster multiple instances if the node names are the same?
In 1.7 you can set the hostname directly in the Deployment spec spec: replicas: 1 template: spec: hostname: myhostname containers: ... Old Answer Now that 1.2 has landed, you can set a static hostname in a Replication Controller or Deployment spec using the pod.beta.kubernetes.io/hostname annotation. spec: replicas: 1 template: metadata: annotations: pod.beta.kubernetes.io/hostname: myhostname labels: ...
###Instruction: I need to set a static hostname in a Kubernetes replication controller. Docker supports it with some runtime flags, however, Kubernetes replication controllers don't appear to support it. The environment: OS - CentOS 6.6 Approach to use sysctl to change the variable kernel.hostname does not work for a K8s replication controller. The host name is not changed. Use: sysctl kernel.hostname to read the current hostname, and sysctl kernel.hostname=NEW_HOSTNAME Is it possible to set a hostname in a Kubernetes replication controller? ### Response: In 1.7 you can set the hostname directly in the Deployment spec spec: replicas: 1 template: spec: hostname: myhostname containers: ... Old Answer Now that 1.2 has landed, you can set a static hostname in a Replication Controller or Deployment spec using the pod.beta.kubernetes.io/hostname annotation. spec: replicas: 1 template: metadata: annotations: pod.beta.kubernetes.io/hostname: myhostname labels: ...
I need to set a static hostname in a Kubernetes replication controller. Docker supports it with some runtime flags, however, Kubernetes replication controllers don't appear to support it. The environment: OS - CentOS 6.6 Approach to use sysctl to change the variable kernel.hostname does not work for a K8s replication controller. The host name is not changed. Use: sysctl kernel.hostname to read the current hostname, and sysctl kernel.hostname=NEW_HOSTNAME Is it possible to set a hostname in a Kubernetes replication controller?
It's important to note that you've set a kubeconfig setting for your client. When you run kubectl version, you're getting the version for both the client and the server, and in your case it is the bogus server entry that is making the version command fail. Updating your config You need to update the setting to the appropriate information. You can use the same command you used to set the server to change it to the correct server. If you want to wipe the slate clean in terms of client config, you should remove the kubeconfig file(s). In my experience with the gcloud setup, this is just ~/.kube/config. If you are running the cluster through Google Container Engine, you can use gcloud to get the kubeconfig set for you as per the container engine quick start guide. The following assumes that you have defaults for the project, zone, and cluster set. gcloud container clusters get-credentials CLUSTER_NAME Removing kubectl - this isn't necessary If your goal is to wholesale get rid of kubectl, you should remove the component rather than resetting gcloud. gcloud components remove kubectl But that won't solve your problem, as it doesn't remove or reset ~/.kube/config (at least when I run it on a Mac), and if you want to keep working with the cluster you'll need to reinstall kubectl.
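For example (assuming test-doc is the cluster entry you want to repair, and substituting your real API server address), correcting or wiping the client config looks roughly like this:

# point the existing cluster entry at the right server
kubectl config set-cluster test-doc --server=https://<correct-api-server-address>

# or start over completely
rm ~/.kube/config
gcloud container clusters get-credentials CLUSTER_NAME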
###Instruction: On my ubuntu laptop I was issuing some kubectl commands including running kubernetes from a local Docker container all was well ... at some point I then issued this command kubectl config set-cluster test-doc --server=https://104.196.108.118 now my local kubectl fails to execute ... looks like the Server side needs to get reset back to default kubectl version Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"} error: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I deleted and reinstalled the gcloud SDK binaries and ran mv ~/.config/gcloud ~/.config/gcloud~ignore gcloud init gcloud components update kubectl How do I delete my local kubectl settings (on ubuntu 16.04) and start fresh ? ### Response: It's important to note that you've set a kubeconfig setting for your client. When you run kubectl version, you're getting the version for client and the server which in your case seems to be the issue with the version command. Updating your config You need to update the setting to the appropriate information. You can use the same command you used to set the server to change it to the correct server. If you want to wipe the slate clean in terms of client config, you should remove the kubeconfig file(s). In my experience with the gcloud setup, this is just ~/.kube/config. If you are running the cluster through google cloud engine, you can use gcloud to get the kubeconfig set for you as per the container engine quick start guide. The following assumes that you have defaults for the project, zone, and cluster set. gcloud container clusters get-credentials CLUSTER_NAME Removing kubectl - this isn't necessary If your goal is to wholesale get rid of kubectl, you should remove the component rather than reseting gcloud. gcloud components remove kubectl But that won't solve your problem as it doesn't remove or reset ~/.kube/config when I run it on Mac and if you want to keep working with it, you'll need to reinstall kubectl.
On my ubuntu laptop I was issuing some kubectl commands including running kubernetes from a local Docker container all was well ... at some point I then issued this command kubectl config set-cluster test-doc --server=https://104.196.108.118 now my local kubectl fails to execute ... looks like the Server side needs to get reset back to default kubectl version Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"} error: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } I deleted and reinstalled the gcloud SDK binaries and ran mv ~/.config/gcloud ~/.config/gcloud~ignore gcloud init gcloud components update kubectl How do I delete my local kubectl settings (on ubuntu 16.04) and start fresh ?
In the end, it seems that the real problem was the fact that I was trying to create the database from my entrypoint script. Things such as creating a db or a user should be done at container creation time so I ended up using the standard Postgres image, which actually provides a simple and easy way to create an user and a db. This is the fully functional configuration file for Postgres. apiVersion: v1 kind: Pod metadata: name: postgres labels: name: postgres spec: containers: - name: postgres image: postgres env: - name: PGDATA value: /var/lib/postgresql/data/pgdata - name: POSTGRES_USER value: myuser - name: POSTGRES_PASSWORD value: mypassword - name: POSTGRES_DB value: mydb ports: - containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql/data name: pg-data volumes: - name: pg-data persistentVolumeClaim: claimName: pg-data-claim Thanks to all those who helped me :)
###Instruction: I am trying to mount a persistent disk on my container which runs a Postgres custom image. I am using Kubernetes and following this tutorial. This is my db_pod.yaml file: apiVersion: v1 kind: Pod metadata: name: lp-db labels: name: lp-db spec: containers: - image: my_username/my-db name: my-db ports: - containerPort: 5432 name: my-db volumeMounts: - name: pg-data mountPath: /var/lib/postgresql/data volumes: - name: pg-data gcePersistentDisk: pdName: my-db-disk fsType: ext4 I create the disk using the command gcloud compute disks create --size 200GB my-db-disk. However, when I run the pod, delete it, and then run it again (like in the tutorial) my data is not persisted. I tried multiple versions of this file, including with PersistentVolumes and PersistentVolumeClaims, I tried changing the mountPath, but to no success. Edit Dockerfile for creating the Postgres image: FROM ubuntu:trusty RUN rm /bin/sh && \ ln -s /bin/bash /bin/sh # Get Postgres RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" >> /etc/apt/sources.list.d/pgdg.list RUN apt-get update && \ apt-get install -y wget RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - # Install virtualenv (will be needed later) RUN apt-get update && \ apt-get install -y \ libjpeg-dev \ libpq-dev \ postgresql-9.4 \ python-dev \ python-pip \ python-virtualenv \ strace \ supervisor # Grab gosu for easy step-down from root RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \ && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \ && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \ && gpg --verify /usr/local/bin/gosu.asc \ && rm /usr/local/bin/gosu.asc \ && chmod +x /usr/local/bin/gosu \ && apt-get purge -y --auto-remove ca-certificates wget # make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \ && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 ENV LANG en_US.utf8 # Adjust PostgreSQL configuration so that remote connections to the database are possible. RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.4/main/pg_hba.conf # And add ``listen_addresses`` to ``/etc/postgresql/9.4/main/postgresql.conf`` RUN echo "listen_addresses='*'" >> /etc/postgresql/9.4/main/postgresql.conf RUN echo "log_directory='/var/log/postgresql'" >> /etc/postgresql/9.4/main/postgresql.conf # Add all code from the project and all config files WORKDIR /home/projects/my-project COPY . . # Add VOLUMEs to allow backup of config, logs and databases ENV PGDATA /var/lib/postgresql/data VOLUME /var/lib/postgresql/data # Expose an entrypoint and a port RUN chmod +x scripts/sh/* EXPOSE 5432 ENTRYPOINT ["scripts/sh/entrypoint-postgres.sh"] And entrypoint script: echo " I am " && gosu postgres whoami gosu postgres /etc/init.d/postgresql start && echo 'Started postgres' gosu postgres psql --command "CREATE USER myuser WITH SUPERUSER PASSWORD 'mypassword';" && echo 'Created user' gosu postgres createdb -O myuser mydb && echo 'Created db' # This just keeps the container alive. 
tail -F /var/log/postgresql/postgresql-9.4-main.log ### Response: In the end, it seems that the real problem was the fact that I was trying to create the database from my entrypoint script. Things such as creating a db or a user should be done at container creation time so I ended up using the standard Postgres image, which actually provides a simple and easy way to create an user and a db. This is the fully functional configuration file for Postgres. apiVersion: v1 kind: Pod metadata: name: postgres labels: name: postgres spec: containers: - name: postgres image: postgres env: - name: PGDATA value: /var/lib/postgresql/data/pgdata - name: POSTGRES_USER value: myuser - name: POSTGRES_PASSWORD value: mypassword - name: POSTGRES_DB value: mydb ports: - containerPort: 5432 volumeMounts: - mountPath: /var/lib/postgresql/data name: pg-data volumes: - name: pg-data persistentVolumeClaim: claimName: pg-data-claim Thanks to all those who helped me :)
I am trying to mount a persistent disk on my container which runs a Postgres custom image. I am using Kubernetes and following this tutorial. This is my db_pod.yaml file: apiVersion: v1 kind: Pod metadata: name: lp-db labels: name: lp-db spec: containers: - image: my_username/my-db name: my-db ports: - containerPort: 5432 name: my-db volumeMounts: - name: pg-data mountPath: /var/lib/postgresql/data volumes: - name: pg-data gcePersistentDisk: pdName: my-db-disk fsType: ext4 I create the disk using the command gcloud compute disks create --size 200GB my-db-disk. However, when I run the pod, delete it, and then run it again (like in the tutorial) my data is not persisted. I tried multiple versions of this file, including with PersistentVolumes and PersistentVolumeClaims, I tried changing the mountPath, but to no success. Edit Dockerfile for creating the Postgres image: FROM ubuntu:trusty RUN rm /bin/sh && \ ln -s /bin/bash /bin/sh # Get Postgres RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" >> /etc/apt/sources.list.d/pgdg.list RUN apt-get update && \ apt-get install -y wget RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - # Install virtualenv (will be needed later) RUN apt-get update && \ apt-get install -y \ libjpeg-dev \ libpq-dev \ postgresql-9.4 \ python-dev \ python-pip \ python-virtualenv \ strace \ supervisor # Grab gosu for easy step-down from root RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* \ && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \ && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \ && gpg --verify /usr/local/bin/gosu.asc \ && rm /usr/local/bin/gosu.asc \ && chmod +x /usr/local/bin/gosu \ && apt-get purge -y --auto-remove ca-certificates wget # make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default RUN apt-get update && apt-get install -y locales && rm -rf /var/lib/apt/lists/* \ && localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 ENV LANG en_US.utf8 # Adjust PostgreSQL configuration so that remote connections to the database are possible. RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.4/main/pg_hba.conf # And add ``listen_addresses`` to ``/etc/postgresql/9.4/main/postgresql.conf`` RUN echo "listen_addresses='*'" >> /etc/postgresql/9.4/main/postgresql.conf RUN echo "log_directory='/var/log/postgresql'" >> /etc/postgresql/9.4/main/postgresql.conf # Add all code from the project and all config files WORKDIR /home/projects/my-project COPY . . # Add VOLUMEs to allow backup of config, logs and databases ENV PGDATA /var/lib/postgresql/data VOLUME /var/lib/postgresql/data # Expose an entrypoint and a port RUN chmod +x scripts/sh/* EXPOSE 5432 ENTRYPOINT ["scripts/sh/entrypoint-postgres.sh"] And entrypoint script: echo " I am " && gosu postgres whoami gosu postgres /etc/init.d/postgresql start && echo 'Started postgres' gosu postgres psql --command "CREATE USER myuser WITH SUPERUSER PASSWORD 'mypassword';" && echo 'Created user' gosu postgres createdb -O myuser mydb && echo 'Created db' # This just keeps the container alive. tail -F /var/log/postgresql/postgresql-9.4-main.log
I don't think that is possible today, but it seems like a potentially useful feature. I filed a feature request.
###Instruction: I have several different Deployments. Deployment A: export port 3333 Deployment B: export port 4444 I want to use a single Service(with LoadBalancer type) to export them. Service Main: export port 4545 -> Route to Deployment A's port 3333 export port 5555 -> Route to Deployment B's port 4444 The documentation say that you can export multiple ports on one services, but it doesn't say whether it works for multiple Deployments. Since Services use selector to select Deployments but in my case, there will be more than one Deployment comes from the selecting result. ### Response: I don't think that is possible today, but it seems like a potentially useful feature. I filed a feature request.
I have several different Deployments. Deployment A: export port 3333 Deployment B: export port 4444 I want to use a single Service(with LoadBalancer type) to export them. Service Main: export port 4545 -> Route to Deployment A's port 3333 export port 5555 -> Route to Deployment B's port 4444 The documentation say that you can export multiple ports on one services, but it doesn't say whether it works for multiple Deployments. Since Services use selector to select Deployments but in my case, there will be more than one Deployment comes from the selecting result.
This is what I ended up doing: I exposed the service I was interested in via a LoadBalancer and then accessed it through the VPN via "privateNodeIp":"nodePort". See the publishing services section of the Kubernetes guide.
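For illustration (the service name, selector and ports are placeholders), a Service of type LoadBalancer also gets a NodePort allocated, and that node port is what you can hit over the VPN on any node's private IP:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

kubectl describe service my-service    # note the NodePort value
curl http://<private-node-ip>:<nodePort>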
###Instruction: I have a kubernetes service exposed via a load balancer, works fine but I want access this service via a VPN service. The ip range of the service doesn't seem to match my VPC (does not know if that is a problem) I have successfully setup a VPN tunnel between my VPC and the router. Kindly, provide more info on what needs to be done. ### Response: This is what I ended up doing... I exposed the service I was interested in via loadbalancer and then accessed it through the VPN via "privateNodeIp":"nodePort" see the publishing services section of kubernetes guide
I have a kubernetes service exposed via a load balancer, works fine but I want access this service via a VPN service. The ip range of the service doesn't seem to match my VPC (does not know if that is a problem) I have successfully setup a VPN tunnel between my VPC and the router. Kindly, provide more info on what needs to be done.
You can kubectl config unset contexts.testorz to remove that entry. See http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_unset/
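The same pattern works for cluster and user entries you no longer want (the entry names below are just examples; check kubectl config view for yours):

kubectl config unset contexts.testorz
kubectl config unset clusters.local
kubectl config unset users.someuser
kubectl config view    # confirm the entries are gone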
###Instruction: I was following a tutorial on kubernetes and it told me to run the following commands: kubectl config set-cluster --server=http://127.0.0.1:8080 kubectl config set-context local --cluster=local kubectl config use-context local Now when I run kubectl config view I see an entry for local: contexts: - context: cluster: local user: "" name: local I was kind of curious about this so I ran the following command: kubectl config set-context testorz --cluster=local Now I see that when I run kubectl config view How do I remove my "testorz" context? ### Response: You can kubectl config unset contexts.testorz to remove that entry. See http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_unset/
I was following a tutorial on kubernetes and it told me to run the following commands: kubectl config set-cluster --server=http://127.0.0.1:8080 kubectl config set-context local --cluster=local kubectl config use-context local Now when I run kubectl config view I see an entry for local: contexts: - context: cluster: local user: "" name: local I was kind of curious about this so I ran the following command: kubectl config set-context testorz --cluster=local Now I see that when I run kubectl config view How do I remove my "testorz" context?
Secrets are stored encoded and over time will become more protected (e.g. limited access, encrypted at rest, etc). Secrets existed before ConfigMap was created, so until recently it was common to store configuration data in secrets (e.g. conf2kube). You should use secrets for sensitive data (database passwords, private keys) and ConfigMaps for non-sensitive configuration data.
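For example (names and values are placeholders), both are created in much the same way, so the difference is mostly about intent and how the data will be protected over time:

kubectl create secret generic db-credentials --from-literal=password=S3cr3tP4ss
kubectl create configmap app-config --from-literal=log.level=info --from-literal=feature.x=enabled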
###Instruction: What are the differences between secrets and configmap in term of security? In which cases would I want to use secret instead of configmap? ### Response: Secrets are stored encoded and over time will become more protected (e.g. limited access, encrypted at rest, etc). Secrets existed before ConfigMap was created, so until recently it was common to store configuration data in secrets (e.g. conf2kube). You should use secrets for sensitive data (database passwords, private keys) and ConfigMaps for non-sensitive configuration data.
What are the differences between secrets and configmap in term of security? In which cases would I want to use secret instead of configmap?
I'm the author of both of these features. The idea is that you should: Use Secrets for things which are actually secret like API keys, credentials, etc Use ConfigMaps for not-secret configuration data In the future, there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc. In general, we like intent-based APIs, and the intent is definitely different for secret data vs. plain old configs.
###Instruction: Have been using Kubernetes secrets up to date. Now we have ConfigMaps as well. What is the preferred way forward - secrets or config maps? P.S. After a few iterations we have stabilised at the following rule: configMaps are per solution domain (can be shared across microservices within the domain, but ultimately are single purpose config entries) secrets are shared across solution domains, usually represent third party systems or databases ### Response: I'm the author of both of these features. The idea is that you should: Use Secrets for things which are actually secret like API keys, credentials, etc Use ConfigMaps for not-secret configuration data In the future, there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc. In general, we like intent-based APIs, and the intent is definitely different for secret data vs. plain old configs.
Have been using Kubernetes secrets up to date. Now we have ConfigMaps as well. What is the preferred way forward - secrets or config maps? P.S. After a few iterations we have stabilised at the following rule: configMaps are per solution domain (can be shared across microservices within the domain, but ultimately are single purpose config entries) secrets are shared across solution domains, usually represent third party systems or databases
To share network interfaces between the pod and the host, you need to start the pod in host network mode, which means the pod will share the same network namespace with the host node. In the pod configuration file, you need to specify "hostNetwork: true". And for Kubernetes versions before 1.1, you need to use the --host-network-sources="*" option when starting the kubelet. Following is an example pod configuration file: apiVersion: v1 kind: Pod metadata: name: test spec: hostNetwork: true restartPolicy: Never containers: - name: test image: ubuntu command: ["bash", "-c", "sleep 1000"] Following is an example kubelet command: kubelet --api_servers=http://$MASTER_IP:8080 --address=0.0.0.0 --cluster_dns=10.0.0.10 --cluster_domain="kubernetes.local" --host-network-sources="*" --pod-infra-container-image="kiwenlau/pause:0.8.0" References: kubernetes specified host networking, but is disallowed? kubernetes network performance issue: moving service from physical machine to kubernetes get half rps drop
###Instruction: How can one create a second network interface for a pod. In particular, I actually have a use case where second network interface should be ideally shared network interface to the host('s 2nd network interface) because the host has second network domain? The external service (on a separate subnet as the host) is not controlled within the same cluster(and controlled by some other vendoring). It is limiting its access by whitelisting client ips. Please advice. Thanks. ### Response: To share network interfaces between the pod and the host, you need to start the pod in host network mode, which means the pod will share the same network namespace with the host node. In the pod configuration file, you need to specify "hostNetwork: true". And for Kubernetes versions before 1.1, you need to use the --host-network-sources="*" option when starting the kubelet. Following is an example pod configuration file: apiVersion: v1 kind: Pod metadata: name: test spec: hostNetwork: true restartPolicy: Never containers: - name: test image: ubuntu command: ["bash", "-c", "sleep 1000"] Following is an example kubelet command: kubelet --api_servers=http://$MASTER_IP:8080 --address=0.0.0.0 --cluster_dns=10.0.0.10 --cluster_domain="kubernetes.local" --host-network-sources="*" --pod-infra-container-image="kiwenlau/pause:0.8.0" References: kubernetes specified host networking, but is disallowed? kubernetes network performance issue: moving service from physical machine to kubernetes get half rps drop
How can one create a second network interface for a pod. In particular, I actually have a use case where second network interface should be ideally shared network interface to the host('s 2nd network interface) because the host has second network domain? The external service (on a separate subnet as the host) is not controlled within the same cluster(and controlled by some other vendoring). It is limiting its access by whitelisting client ips. Please advice. Thanks.
4 of the metrics you mentioned can be visualized using Grafana, if you create custom dashboards and make good use of Grafana templates: for total memory requested/limits per namespace and total cpu requested/limits per namespace, sort by "pod_namespace" and aggregate by "cpu/usage", "cpu/limit" or "cpu/request"; for total memory requested/limits per node and total cpu requested/limits per node, sort by "hostname" and aggregate by "memory/limit", "memory/request", "cpu/usage", "cpu/limit" or "cpu/request". See: Heapster storage schema. As soon as you understand the InfluxDB schema you can do magic. The other 2 are not collected by Heapster, which means you have to come up with your own queries against the Kubernetes API if you want to push that data somewhere, probably also InfluxDB in your case.
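For the two that Heapster does not collect, a crude but workable starting point is to query the Kubernetes API yourself and push the result wherever you like (the commands below are illustrative; column positions depend on your kubectl version):

kubectl get pods --namespace=default --no-headers | wc -l                              # pods in a namespace
kubectl get pods --all-namespaces --no-headers | awk '{print $4}' | sort | uniq -c     # pods per state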
###Instruction: I'm running multiple clusters of Kubernetes in GKE. I'm using a heapster + influxdb to get metrics on pods performances. What is the recommended way to get and store cluster wide metrics like: number of pods in namespace total memory requested/limits per namespace total cpu requested/limits per namespace total memory requested/limits per node total cpu requested/limits per node number of pod per state (running, pending, etc) AFAIK this is out of the scope of heapster but we need to keep an eye on this to understand the state of the cluster. Thanks! ### Response: 4 of the metrics you mentioned can be visualized using Grafana, if you create custom dashboards and make good use of Grafana templates: total memory requested/limits per namespace total cpu requested/limits per namespace Sort by "pod_namespace" and aggregate by "cpu/usage" or "cpu/limit" or "cpu/request" total memory requested/limits per node total cpu requested/limits per node Sort by "hostname" and aggregate by "memory/limit" or "memory/request" or "cpu/usage" or "cpu/limit" or "cpu/request" See: Heapster storage schema . As soon as you understand the InfluxDB schema you can do magic. The other 2 are not collected by Heapster, it means you have to come up with you own queries on the Kubernetes API if you want to push that data somewhere, probably also InfluxDB in your case.
I'm running multiple clusters of Kubernetes in GKE. I'm using a heapster + influxdb to get metrics on pods performances. What is the recommended way to get and store cluster wide metrics like: number of pods in namespace total memory requested/limits per namespace total cpu requested/limits per namespace total memory requested/limits per node total cpu requested/limits per node number of pod per state (running, pending, etc) AFAIK this is out of the scope of heapster but we need to keep an eye on this to understand the state of the cluster. Thanks!
Google Container Engine is a Kubernetes-backed cluster manager. It makes managing simple or complex Docker-based applications easy in terms of configuration, updating and scaling. The container-optimized Compute Engine image lets you run Docker containers on a single node. Note that you can create your own containerized cluster with this image if you wish, but if you're going down this path you should really reconsider Container Engine. It's worth noting that the container-optimized image also ships pieces of Kubernetes in the form of a kubelet.
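To make the contrast concrete (the cluster and instance names are made up, and the exact image family/project for the container-optimized image has changed over time, so verify with gcloud compute images list):

# Container Engine: a managed Kubernetes cluster
gcloud container clusters create my-cluster --num-nodes=3
gcloud container clusters get-credentials my-cluster

# Container-optimized image: a single Docker-ready VM you manage yourself
gcloud compute instances create my-docker-host --image-family=gci-stable --image-project=google-containers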
###Instruction: I have a little bit of an idea about their differences, but it would be great to have expert opinions. Container-Optimized Google Compute Engine Images Google Container Engine Thanks in advance :) ### Response: Google Container Engine is a Kubernetes-backed cluster manager. It makes managing simple or complex Docker-based applications easy in terms of configuration, updating and scaling. The container-optimized Compute Engine image lets you run Docker containers on a single node. Note that you can create your own containerized cluster with this image if you wish, but if you're going down this path you should really reconsider Container Engine. It's worth noting that the container-optimized image also ships pieces of Kubernetes in the form of a kubelet.
I have a little bit of an idea about their differences, but it would be great to have expert opinions. Container-Optimized Google Compute Engine Images Google Container Engine Thanks in advance :)
To pull a private DockerHub hosted image from a Kubernetes YAML: Run these commands: DOCKER_REGISTRY_SERVER=docker.io DOCKER_USER=Type your dockerhub username, same as when you `docker login` DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login` DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login` kubectl create secret docker-registry myregistrykey \ --docker-server=$DOCKER_REGISTRY_SERVER \ --docker-username=$DOCKER_USER \ --docker-password=$DOCKER_PASSWORD \ --docker-email=$DOCKER_EMAIL If your username on DockerHub is DOCKER_USER, and your private repo is called PRIVATE_REPO_NAME, and the image you want to pull is tagged as latest, create this example.yaml file: apiVersion: v1 kind: Pod metadata: name: whatever spec: containers: - name: whatever image: DOCKER_USER/PRIVATE_REPO_NAME:latest imagePullPolicy: Always command: [ "echo", "SUCCESS" ] imagePullSecrets: - name: myregistrykey Then run: kubectl create -f example.yaml
###Instruction: I am failing to pull from my private Docker Hub repository into my local Kubernetes setup running on Vagrant: Container "hellonode" in pod "hellonode-n1hox" is waiting to start: image can't be pulled Failed to pull image "username/hellonode": Error: image username/hellonode:latest not found I have set up Kubernetes locally via Vagrant as described here and created a secret named "dockerhub" with kubectl create secret docker-registry dockerhub --docker-server=https://registry.hub.docker.com/ --docker-username=username --docker-password=... --docker-email=... which I supplied as the image pull secret. I am running Kubernetes 1.2.0. ### Response: To pull a private DockerHub hosted image from a Kubernetes YAML: Run these commands: DOCKER_REGISTRY_SERVER=docker.io DOCKER_USER=Type your dockerhub username, same as when you `docker login` DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login` DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login` kubectl create secret docker-registry myregistrykey \ --docker-server=$DOCKER_REGISTRY_SERVER \ --docker-username=$DOCKER_USER \ --docker-password=$DOCKER_PASSWORD \ --docker-email=$DOCKER_EMAIL If your username on DockerHub is DOCKER_USER, and your private repo is called PRIVATE_REPO_NAME, and the image you want to pull is tagged as latest, create this example.yaml file: apiVersion: v1 kind: Pod metadata: name: whatever spec: containers: - name: whatever image: DOCKER_USER/PRIVATE_REPO_NAME:latest imagePullPolicy: Always command: [ "echo", "SUCCESS" ] imagePullSecrets: - name: myregistrykey Then run: kubectl create -f example.yaml
I am failing to pull from my private Docker Hub repository into my local Kubernetes setup running on Vagrant: Container "hellonode" in pod "hellonode-n1hox" is waiting to start: image can't be pulled Failed to pull image "username/hellonode": Error: image username/hellonode:latest not found I have set up Kubernetes locally via Vagrant as described here and created a secret named "dockerhub" with kubectl create secret docker-registry dockerhub --docker-server=https://registry.hub.docker.com/ --docker-username=username --docker-password=... --docker-email=... which I supplied as the image pull secret. I am running Kubernetes 1.2.0.
You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag (note the image name is myapp_img, matching the image in your Deployment): kubectl patch deployment myapp-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp_img:3.0"}]}}}}' According to the documentation, YAML format should be accepted as well. See Kubernetes issue #458 though (and in particular this comment) which may hint at a problem.
###Instruction: I have defined a Deployment for my app: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: myapp-deployment spec: replicas: 2 template: metadata: labels: app: myapp spec: containers: - name: myapp image: 172.20.34.206:5000/myapp_img:2.0 ports: - containerPort: 8080 Now, if I want update my app's image 2.0 to 3.0, I do this: $ kubectl edit deployment/myapp-deployment vim is open. I change the image version from 2.0 to 3.0 and save. How can it be automated? Is there a way to do it just running a command? Something like: $ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp:img:3.0 I thought using Kubernetes API REST but I don't understand the documentation. ### Response: You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag (note the image name is myapp_img, matching the image in your Deployment): kubectl patch deployment myapp-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp_img:3.0"}]}}}}' According to the documentation, YAML format should be accepted as well. See Kubernetes issue #458 though (and in particular this comment) which may hint at a problem.
I have defined a Deployment for my app: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: myapp-deployment spec: replicas: 2 template: metadata: labels: app: myapp spec: containers: - name: myapp image: 172.20.34.206:5000/myapp_img:2.0 ports: - containerPort: 8080 Now, if I want update my app's image 2.0 to 3.0, I do this: $ kubectl edit deployment/myapp-deployment vim is open. I change the image version from 2.0 to 3.0 and save. How can it be automated? Is there a way to do it just running a command? Something like: $ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp:img:3.0 I thought using Kubernetes API REST but I don't understand the documentation.
The kubectl command line tool is just a wrapper to execute remote HTTPS API REST calls on the kubernetes cluster. If you want to be able to do so from your own machine you need to open the correct port (443) on your master node and pass along some parameters to the kubectl tool as specified in this tutorial: https://coreos.com/kubernetes/docs/latest/configure-kubectl.html
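The steps in that tutorial boil down to something like the following (the context, user and file names here are placeholders; the certificates are the ones generated for your cluster, and kube-00's public address goes in --server):

kubectl config set-cluster azure-kube --server=https://<kube-00-address>:443 --certificate-authority=/path/to/ca.pem
kubectl config set-credentials azure-admin --client-certificate=/path/to/admin.pem --client-key=/path/to/admin-key.pem
kubectl config set-context azure --cluster=azure-kube --user=azure-admin
kubectl config use-context azure
kubectl get nodes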
###Instruction: I followed the guide to getting Kubernetes running in Azure here: http://kubernetes.io/docs/getting-started-guides/coreos/azure/ In order to create pods, etc., the guide has you ssh into the master node kube-00 in the cloud service and run kubectl commands there: ssh -F ./output/kube_randomid_ssh_conf kube-00 Once in you can run the following: kubectl get nodes kubectl create -f ~/guestbook-example/ Is it possible to run these kubectl commands without logging to the master node, e.g., how can I set up kubectl to connect to the cluster hosted in Azure from my development machine instead of ssh'ing into the node this way? I tried creating a context, user and cluster in the config but the values I tried using did not work. Edit For some more background the tutorial creates the azure cluster using a script using the Azure CLI. It ends up looking like this: Resource Group: kube-randomid - Cloud Service: kube-randomid - VM: etcd-00 - VM: etcd-01 - VM: etcd-02 - VM: kube-00 - VM: kube-01 - VM: kube-02 It creates a Virtual Network that all of these VM's live in. As far as I can tell all of the machines in the cloud service share a single virtual IP. ### Response: The kubectl command line tool is just a wrapper to execute remote HTTPS API REST calls on the kubernetes cluster. If you want to be able to do so from your own machine you need to open the correct port (443) on your master node and pass along some parameters to the kubectl tool as specified in this tutorial: https://coreos.com/kubernetes/docs/latest/configure-kubectl.html
I followed the guide to getting Kubernetes running in Azure here: http://kubernetes.io/docs/getting-started-guides/coreos/azure/ In order to create pods, etc., the guide has you ssh into the master node kube-00 in the cloud service and run kubectl commands there: ssh -F ./output/kube_randomid_ssh_conf kube-00 Once in you can run the following: kubectl get nodes kubectl create -f ~/guestbook-example/ Is it possible to run these kubectl commands without logging to the master node, e.g., how can I set up kubectl to connect to the cluster hosted in Azure from my development machine instead of ssh'ing into the node this way? I tried creating a context, user and cluster in the config but the values I tried using did not work. Edit For some more background the tutorial creates the azure cluster using a script using the Azure CLI. It ends up looking like this: Resource Group: kube-randomid - Cloud Service: kube-randomid - VM: etcd-00 - VM: etcd-01 - VM: etcd-02 - VM: kube-00 - VM: kube-01 - VM: kube-02 It creates a Virtual Network that all of these VM's live in. As far as I can tell all of the machines in the cloud service share a single virtual IP.
TLDR; In your usecase kubernetes is only giving you overhead. You are running 1 pod (docker container) on each instance in your instance group. You could also have your Docker container be deployed to App Engine flexible (former Managed VM's) https://cloud.google.com/appengine/docs/flexible/custom-runtimes/ and let the autoscaling of your instance group handle it. Longer answer It is not possible (yet) to link the instance scaling to the pod scaling in k8s. This is because they are two separate problems. The HPA of k8s is meant to have (small) pods scale to spread load over your cluster (big machines) so they will be scaling because of increased load. If you do not define any limits (1 pod per machine) you could set the max amount of pods to the max scaling of your cluster effectively setting all these pods in a pending state until another instance spins up. If you want your pods to let your nodes scale then the best way (we found out) is to have them 'overcrowd' an instance so the instance-group scaling will kick in. We did this by setting pretty low memory/cpu requirements for our pods and high limits, effectively allowing them to burst over the total available CPU/memory of the instance. resources: requests: cpu: 400m memory: 100Mi limits: cpu: 1000m memory: 1000Mi
###Instruction: I have a simple wordpress site defined by the ReplicationController and Service below. Once the app is deployed and running happily, I enabled autoscaling on the instance group created by Kubernetes by going to the GCE console and enabling autoscaling with the same settings (max 5, cpu 10). Autoscaling the instances and the pods seem to work decent enough except that they keep going out of sync with each other. The RC autoscaling removes the pods from the CE instances but nothing happens with the instances so they start failing requests until the LB health check fails and removes them. Is there a way to make kubernetes scale the pods AS WELL as scale the instances that they run on so this doesn't happen? Or is there a way to keep them in sync? My process is as follows: Create the cluster $ gcloud container clusters create wordpress -z us-central1-c -m f1-micro Create the rc $ kubectl create -f rc.yml Create the service $ kubectl create -f service.yml Autoscale the rc $ kubectl autoscale rc frontend --max 5 --cpu-percent=10 Then I enabled the autoscaling in the console and gave the servers load to make them scale. rc.yml apiVersion: v1 kind: ReplicationController metadata: name: frontend spec: replicas: 1 template: metadata: labels: app: wordpress spec: containers: - image: custom-wordpress-image name: wordpress ports: - containerPort: 80 hostPort: 80 service.yml apiVersion: v1 kind: Service metadata: labels: name: frontend name: frontend spec: type: LoadBalancer ports: - port: 80 targetPort: 80 protocol: TCP selector: name: wordpress Update for more information If I don't use kubernetes autoscaler and instead set the replicas to the same number as the instance group autoscaler max instance count, I seem to get the desired result. As instances are added to the instance group, kubernetes provisions them, as they are removed kubernetes updates accordingly. At this point I wonder what the purpose of the Kubernetes autoscaler is for. ### Response: TLDR; In your usecase kubernetes is only giving you overhead. You are running 1 pod (docker container) on each instance in your instance group. You could also have your Docker container be deployed to App Engine flexible (former Managed VM's) https://cloud.google.com/appengine/docs/flexible/custom-runtimes/ and let the autoscaling of your instance group handle it. Longer answer It is not possible (yet) to link the instance scaling to the pod scaling in k8s. This is because they are two separate problems. The HPA of k8s is meant to have (small) pods scale to spread load over your cluster (big machines) so they will be scaling because of increased load. If you do not define any limits (1 pod per machine) you could set the max amount of pods to the max scaling of your cluster effectively setting all these pods in a pending state until another instance spins up. If you want your pods to let your nodes scale then the best way (we found out) is to have them 'overcrowd' an instance so the instance-group scaling will kick in. We did this by setting pretty low memory/cpu requirements for our pods and high limits, effectively allowing them to burst over the total available CPU/memory of the instance. resources: requests: cpu: 400m memory: 100Mi limits: cpu: 1000m memory: 1000Mi
I have a simple wordpress site defined by the ReplicationController and Service below. Once the app is deployed and running happily, I enabled autoscaling on the instance group created by Kubernetes by going to the GCE console and enabling autoscaling with the same settings (max 5, cpu 10). Autoscaling the instances and the pods seem to work decent enough except that they keep going out of sync with each other. The RC autoscaling removes the pods from the CE instances but nothing happens with the instances so they start failing requests until the LB health check fails and removes them. Is there a way to make kubernetes scale the pods AS WELL as scale the instances that they run on so this doesn't happen? Or is there a way to keep them in sync? My process is as follows: Create the cluster $ gcloud container clusters create wordpress -z us-central1-c -m f1-micro Create the rc $ kubectl create -f rc.yml Create the service $ kubectl create -f service.yml Autoscale the rc $ kubectl autoscale rc frontend --max 5 --cpu-percent=10 Then I enabled the autoscaling in the console and gave the servers load to make them scale. rc.yml apiVersion: v1 kind: ReplicationController metadata: name: frontend spec: replicas: 1 template: metadata: labels: app: wordpress spec: containers: - image: custom-wordpress-image name: wordpress ports: - containerPort: 80 hostPort: 80 service.yml apiVersion: v1 kind: Service metadata: labels: name: frontend name: frontend spec: type: LoadBalancer ports: - port: 80 targetPort: 80 protocol: TCP selector: name: wordpress Update for more information If I don't use kubernetes autoscaler and instead set the replicas to the same number as the instance group autoscaler max instance count, I seem to get the desired result. As instances are added to the instance group, kubernetes provisions them, as they are removed kubernetes updates accordingly. At this point I wonder what the purpose of the Kubernetes autoscaler is for.
You're not doing anything wrong, unfortunately. It's an artifact of how packets are proxied from the machine that receives them to the destination container. There's been a bunch of discussion around the problem in a very long Github issue, but no solution has been found yet other than running your front-end load balancer outside of the Kubernetes cluster (like using a cloud load balancer, which attaches the X-FORWARDED-FOR header).
###Instruction: I've installed kubernetes 1.2.0 with the following configuration export nodes="user@10.0.0.30 user@10.0.0.32" export role="ai i" export NUM_NODES=2 export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24 export FLANNEL_NET=172.16.0.0/16 export KUBE_PROXY_EXTRA_OPTS="--proxy-mode=iptables" I've created a nginx pod and expose with load balancer and external IP address kubectl expose pod my-nginx-3800858182-6qhap --external-ip=10.0.0.50 --port=80 --target-port=80 I'm using kubernetes on bare metal so i've assigned 10.0.0.50 ip to master node. If i try curl 10.0.0.50 (from outside kubernetes) and use tcpdump on nginx pod i see traffic, the source ip is always from the kubernetes master node 17:30:55.470230 IP 172.16.60.1.43030 > 172.16.60.2.80: ... 17:30:55.470343 IP 172.16.60.2.80 > 172.16.60.1.43030: ... i'm using mode-proxy=iptables. and need to get the actual source ip. what am i doing wrong ? ### Response: You're not doing anything wrong, unfortunately. It's an artifact of how packets are proxied from the machine that receives them to the destination container. There's been a bunch of discussion around the problem in a very long Github issue, but no solutions found yet other than running your front-end load balancer outside of the Kubernetes cluster (like using a cloud load balancer, which attach the X-FORWARDED-FOR header).
I've installed kubernetes 1.2.0 with the following configuration export nodes="user@10.0.0.30 user@10.0.0.32" export role="ai i" export NUM_NODES=2 export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24 export FLANNEL_NET=172.16.0.0/16 export KUBE_PROXY_EXTRA_OPTS="--proxy-mode=iptables" I've created a nginx pod and expose with load balancer and external IP address kubectl expose pod my-nginx-3800858182-6qhap --external-ip=10.0.0.50 --port=80 --target-port=80 I'm using kubernetes on bare metal so i've assigned 10.0.0.50 ip to master node. If i try curl 10.0.0.50 (from outside kubernetes) and use tcpdump on nginx pod i see traffic, the source ip is always from the kubernetes master node 17:30:55.470230 IP 172.16.60.1.43030 > 172.16.60.2.80: ... 17:30:55.470343 IP 172.16.60.2.80 > 172.16.60.1.43030: ... i'm using mode-proxy=iptables. and need to get the actual source ip. what am i doing wrong ?
On some cloud providers, nodes will be automatically removed from the node list when the VM is deleted (because the master components can verify with an authoritative API that the VM is in fact not coming back). On bare metal (or cloud providers where that extra hook has not been implemented), the node will exist in a NotReady state indefinitely until a cluster administrator manually removes it.
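Removing the dead entry manually is a one-liner (using the node name from your listing):

kubectl delete node 192.168.1.10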
###Instruction: When i do kubectl get nodes it gives me Name Status 192.168.1.10 NotReady 192.168.1.11 Ready 192.168.1.12 Ready Here node 192.168.1.10 as been completed deleted, but still this is listed in nodes list. Because of this when im trying to run kubernetes ui, it is looking for GET http://<Master-Node>/api/v1/proxy/nodes/192.168.1.10:4194/api/v1.0/machine And this is giving 503 (Service unavailable) error Looks like some sort of cached data, How do i delete this dead minion from the list? ### Response: On some cloud providers, nodes will be automatically removed from the node list when the VM is deleted (because the master components can verify with an authoritative API that the VM is in fact not coming back). On bare metal (or cloud providers where that extra hook has not been implemented), the node will exist in a NotReady state indefinitely until a cluster administrator manually removes it.
When i do kubectl get nodes it gives me Name Status 192.168.1.10 NotReady 192.168.1.11 Ready 192.168.1.12 Ready Here node 192.168.1.10 as been completed deleted, but still this is listed in nodes list. Because of this when im trying to run kubernetes ui, it is looking for GET http://<Master-Node>/api/v1/proxy/nodes/192.168.1.10:4194/api/v1.0/machine And this is giving 503 (Service unavailable) error Looks like some sort of cached data, How do i delete this dead minion from the list?
I am using latest version of kubernetes version 1.0.1 FYI, the latest version is v1.2.3. ... it says kube-system not found You can create the kube-system namespace by running kubectl create namespace kube-system. Hopefully once you've created the kube-system namespace the rest of the instructions will work.
###Instruction: enter image description hereI tried to used the instructions from this link https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md but I was not able to install it. specifically I dont know what this instruction means "Ensure that kubecfg.sh is exported." I dont even know where I can find this I did this sudo find / -name "kubecfg.sh" and I found no results. moving on to the next step "kubectl create -f deploy/kube-config/influxdb/" when I did this it says kube-system not found I am using latest version of kubernetes version 1.0.1 These instructions are broken can any one provide some instructions on how to install this? I have kubernetes cluster up and running I was able to create and delete pods and so on and default is the only namespace I have when i do kubectl get pods,svc,rc --all-namespaces Changing kube-system to default in the yaml files is just getting me one step further but I am unable to access the UI and so on. so installing kube-system makes more sense however I dont know how to do it and any instructions on installing influxdb and grafana to get it up and running will be very helpful ### Response: I am using latest version of kubernetes version 1.0.1 FYI, the latest version is v1.2.3. ... it says kube-system not found You can create the kube-system namespace by running kubectl create namespace kube-system. Hopefully once you've created the kube-system namespace the rest of the instructions will work.
enter image description hereI tried to used the instructions from this link https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md but I was not able to install it. specifically I dont know what this instruction means "Ensure that kubecfg.sh is exported." I dont even know where I can find this I did this sudo find / -name "kubecfg.sh" and I found no results. moving on to the next step "kubectl create -f deploy/kube-config/influxdb/" when I did this it says kube-system not found I am using latest version of kubernetes version 1.0.1 These instructions are broken can any one provide some instructions on how to install this? I have kubernetes cluster up and running I was able to create and delete pods and so on and default is the only namespace I have when i do kubectl get pods,svc,rc --all-namespaces Changing kube-system to default in the yaml files is just getting me one step further but I am unable to access the UI and so on. so installing kube-system makes more sense however I dont know how to do it and any instructions on installing influxdb and grafana to get it up and running will be very helpful
I am not sure if there is any fancy way to re-trigger a completed job, but one way to do it is to delete and recreate the job. Re: rolling-update: that is meant for long-running pods, which is what RCs control. For jobs: you can update the podTemplateSpec in the jobSpec, which ensures that any new pod created by the job after the update uses the updated podTemplateSpec (note: already-running pods will not be affected). Hope this helps!
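A minimal sketch of the delete-and-recreate approach as a Jenkins step (my-job.yaml is a hypothetical manifest holding your Job spec, including its podTemplateSpec):
$ kubectl delete job my-job
$ kubectl create -f my-job.yaml
Deleting the job should also clean up its completed pods, and the create call effectively re-triggers the work on whatever schedule Jenkins runs the step.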
###Instruction: I am eagerly awaiting the release of Kubernetes v1.3 in mid to late June, so that I can access cron scheduling for jobs. In the meantime, what I plan to do is the following: Deploy a job on my Kubernetes cluster Use jenkins as a cron tool to trigger the job in defined intervals (e.g. 1 hour). I have two questions: How do I update a job? For replication controllers, I would simply do a rolling update, but in the jobs API spec (http://kubernetes.io/docs/user-guide/jobs/) there are no details about how to do this. For example, lets say that I want to use my jenkins deploy system to update the job whenever I do a git commit. Is it possible to use the kubernetes API to trigger jobs? For example, I have a job that runs and then the pod is terminated on completion. Then, 1 hour later, I want to use jenkins to trigger the job again. Thanks so much! ### Response: I am not sure if there is any fancy way to trigger a completed job, but one way to do it can be to delete and recreate the job. Re: rolling-update: that is required for long running pods, which is what RCs control. For jobs: You can update the podTemplateSpec in jobSpec and that will ensure that any new pod created by the job after the update will have the updated podTemplateSpec (note: already running pods will not be affected). Hope this helps!
I am eagerly awaiting the release of Kubernetes v1.3 in mid to late June, so that I can access cron scheduling for jobs. In the meantime, what I plan to do is the following: Deploy a job on my Kubernetes cluster Use jenkins as a cron tool to trigger the job in defined intervals (e.g. 1 hour). I have two questions: How do I update a job? For replication controllers, I would simply do a rolling update, but in the jobs API spec (http://kubernetes.io/docs/user-guide/jobs/) there are no details about how to do this. For example, lets say that I want to use my jenkins deploy system to update the job whenever I do a git commit. Is it possible to use the kubernetes API to trigger jobs? For example, I have a job that runs and then the pod is terminated on completion. Then, 1 hour later, I want to use jenkins to trigger the job again. Thanks so much!
kubectl config unset takes a dot-delimited path. You can delete cluster/context/user entries by name. E.g.
kubectl config unset users.gke_project_zone_name
kubectl config unset contexts.aws_cluster1-kubernetes
kubectl config unset clusters.foobar-baz
Side note, if you tear down your cluster using cluster/kube-down.sh (or gcloud if you use Container Engine), it will delete the associated kubeconfig entries. There is also a planned kubectl config rework for a future release to make the commands more intuitive/usable/consistent.
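Before unsetting, it can help to list what is actually in the file so the names match exactly:
$ kubectl config view
$ kubectl config get-contexts
The names passed to kubectl config unset must match the name: fields shown in the view output.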
###Instruction: kubectl config view shows contexts and clusters corresponding to clusters that I have deleted. How can I remove those entries? The command kubectl config unset clusters appears to delete all clusters. Is there a way to selectively delete cluster entries? What about contexts? ### Response: kubectl config unset takes a dot-delimited path. You can delete cluster/context/user entries by name. E.g. kubectl config unset users.gke_project_zone_name kubectl config unset contexts.aws_cluster1-kubernetes kubectl config unset clusters.foobar-baz Side note, if you teardown your cluster using cluster/kube-down.sh (or gcloud if you use Container Engine), it will delete the associated kubeconfig entries. There is also a planned kubectl config rework for a future release to make the commands more intuitive/usable/consistent.
kubectl config view shows contexts and clusters corresponding to clusters that I have deleted. How can I remove those entries? The command kubectl config unset clusters appears to delete all clusters. Is there a way to selectively delete cluster entries? What about contexts?
I had some iptables rules set which were blocking connections on the docker0 interface, where the kubernetes services including skydns were running. After flushing the rules it worked. Thus it appears that the first problem was with my local setup rather than skydns. However, the problem reappeared after installing a local docker registry, and I got this error: I0427 20:30:45.183419 1 kube2sky.go:627] Ignoring error while waiting for service default/kubernetes: Get https://10.0.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: certificate signed by unknown authority. Sleeping 1s before retrying. As a workaround I deleted the secret using kubectl delete secrets/default-token-q4siz, then restarted skydns and it started working again.
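For reference, a rough sketch of the commands involved; the secret name default-token-q4siz is specific to my cluster, flushing every rule is heavy-handed (prefer deleting just the offending rule), and the k8s-app=kube-dns label and kube-system namespace are assumptions based on the standard skydns template:
$ sudo iptables -L -n -v        # look for rules dropping traffic on docker0
$ sudo iptables -F              # coarse: flushes all rules, adjust to your setup
$ kubectl delete secrets/default-token-q4siz
$ kubectl delete pods -l k8s-app=kube-dns --namespace=kube-system   # force skydns to restart and pick up the new token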
###Instruction: I am setting up skydns for kubernetes following this template http://kubernetes.io/docs/getting-started-guides/docker-multinode/skydns.yaml.in. But it is not able to resolve dns. After looking for solutions also added -kube_master_url in kube2sky arguments but still the same issue. Here is the skydns logs: 2016/04/23 02:49:26 skydns: falling back to default configuration, could not read from etcd: 501: All the given peers are not reachable (failed to propose on members [http://127.0.0.1:4001] twice [last error: Get http://127.0.0.1:4001/v2/keys/skydns/config?quorum=false&recursive=false&sorted=false: dial tcp 127.0.0.1:4001: connection refused]) [0] 2016/04/23 02:49:26 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/04/23 02:49:26 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/04/23 02:49:33 skydns: error from backend: 501: All the given peers are not reachable (failed to propose on members [http://127.0.0.1:4001] twice [last error: Get http://127.0.0.1:4001/v2/keys/skydns/local/cluster/svc/default/kubernetes?quorum=false&recursive=true&sorted=false: dial tcp 127.0.0.1:4001: connection refused]) [0] Any pointers? Kube2sky logs: I0423 02:49:39.286489 1 kube2sky.go:436] Etcd server found: http://127.0.0.1:4001 I0423 02:49:40.295909 1 kube2sky.go:503] Using http://172.17.0.1:8080 for kubernetes master I0423 02:49:40.296183 1 kube2sky.go:504] Using kubernetes API v1 ### Response: I had some iptables rules set which were blocking the connection on docker0 interface where kubernetes services including skydns were running. After flushing the rule it worked. Thus it appears that the first problem was with my local setup rather than skydns. However the problem reappeared after installing local docker registry and I got this error: I0427 20:30:45.183419 1 kube2sky.go:627] Ignoring error while waiting for service default/kubernetes: Get https://10.0.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: certificate signed by unknown authority. Sleeping 1s before retrying. As a workaround I deleted the secrets using kubectl delete secrets/default-token-q4siz then restarted skydns and it started working again.
I am setting up skydns for kubernetes following this template http://kubernetes.io/docs/getting-started-guides/docker-multinode/skydns.yaml.in. But it is not able to resolve dns. After looking for solutions also added -kube_master_url in kube2sky arguments but still the same issue. Here is the skydns logs: 2016/04/23 02:49:26 skydns: falling back to default configuration, could not read from etcd: 501: All the given peers are not reachable (failed to propose on members [http://127.0.0.1:4001] twice [last error: Get http://127.0.0.1:4001/v2/keys/skydns/config?quorum=false&recursive=false&sorted=false: dial tcp 127.0.0.1:4001: connection refused]) [0] 2016/04/23 02:49:26 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0] 2016/04/23 02:49:26 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/04/23 02:49:33 skydns: error from backend: 501: All the given peers are not reachable (failed to propose on members [http://127.0.0.1:4001] twice [last error: Get http://127.0.0.1:4001/v2/keys/skydns/local/cluster/svc/default/kubernetes?quorum=false&recursive=true&sorted=false: dial tcp 127.0.0.1:4001: connection refused]) [0] Any pointers? Kube2sky logs: I0423 02:49:39.286489 1 kube2sky.go:436] Etcd server found: http://127.0.0.1:4001 I0423 02:49:40.295909 1 kube2sky.go:503] Using http://172.17.0.1:8080 for kubernetes master I0423 02:49:40.296183 1 kube2sky.go:504] Using kubernetes API v1
Seems likely to be a mismatch between client and server version. Read more at this GitHub issue.
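A quick way to check for such a mismatch:
$ kubectl version
If the client's Major/Minor versions differ from the server's (e.g. a v1.1 kubectl talking to a v1.2 apiserver), downloading a kubectl binary that matches the cluster version typically resolves this kind of Deployment unmarshalling error.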
###Instruction: I am unable to install cluster/addons/cluster-monitoring/google/heapster-controller.yaml with Kubernetes 1.2.0 on CoreOS 991.1.0/GCE due to the following error: Error from server: error when creating "/tmp/heapster-controller.yaml": Deployment in version extensions/v1beta1 cannot be handled as a Deployment: json: cannot unmarshal object into Go value of type string What is going wrong here? My heapster-controller.yaml looks like this (expanded from template): apiVersion: extensions/v1beta1 kind: Deployment metadata: name: heapster-v1.0.2 namespace: kube-system labels: k8s-app: heapster kubernetes.io/cluster-service: "true" version: v1.0.2 spec: replicas: 1 selector: matchLabels: k8s-app: heapster version: v1.0.2 template: metadata: labels: k8s-app: heapster version: v1.0.2 spec: containers: - image: gcr.io/google_containers/heapster:v1.0.2 name: heapster resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 200Mi command: - /heapster - --source=kubernetes.summary_api:'' - --sink=gcm - --metric_resolution=60s volumeMounts: - name: ssl-certs mountPath: /etc/ssl/certs readOnly: true - image: gcr.io/google_containers/heapster:v1.0.2 name: eventer resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 200Mi command: - /eventer - --source=kubernetes:'' - --sink=gcl volumeMounts: - name: ssl-certs mountPath: /etc/ssl/certs readOnly: true - image: gcr.io/google_containers/addon-resizer:1.0 name: heapster-nanny resources: limits: cpu: 50m memory: 100Mi requests: cpu: 50m memory: 100Mi env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace command: - /pod_nanny - --cpu=100m - --extra-cpu=0m - --memory=200Mi - --extra-memory=4Mi - --threshold=5 - --deployment=heapster-v1.0.2 - --container=heapster - --poll-period=300000 - image: gcr.io/google_containers/addon-resizer:1.0 name: eventer-nanny resources: limits: cpu: 50m memory: 100Mi requests: cpu: 50m memory: 100Mi env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace command: - /pod_nanny - --cpu=100m - --extra-cpu=0m - --memory=200Mi - --extra-memory=500Ki - --threshold=5 - --deployment=heapster-v1.0.2 - --container=eventer - --poll-period=300000 volumes: - name: ssl-certs hostPath: path: "/etc/ssl/certs" ### Response: Seems likely to be a mismatch between client and server version. Read more at this GitHub issue.
I am unable to install cluster/addons/cluster-monitoring/google/heapster-controller.yaml with Kubernetes 1.2.0 on CoreOS 991.1.0/GCE due to the following error: Error from server: error when creating "/tmp/heapster-controller.yaml": Deployment in version extensions/v1beta1 cannot be handled as a Deployment: json: cannot unmarshal object into Go value of type string What is going wrong here? My heapster-controller.yaml looks like this (expanded from template): apiVersion: extensions/v1beta1 kind: Deployment metadata: name: heapster-v1.0.2 namespace: kube-system labels: k8s-app: heapster kubernetes.io/cluster-service: "true" version: v1.0.2 spec: replicas: 1 selector: matchLabels: k8s-app: heapster version: v1.0.2 template: metadata: labels: k8s-app: heapster version: v1.0.2 spec: containers: - image: gcr.io/google_containers/heapster:v1.0.2 name: heapster resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 200Mi command: - /heapster - --source=kubernetes.summary_api:'' - --sink=gcm - --metric_resolution=60s volumeMounts: - name: ssl-certs mountPath: /etc/ssl/certs readOnly: true - image: gcr.io/google_containers/heapster:v1.0.2 name: eventer resources: # keep request = limit to keep this container in guaranteed class limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 200Mi command: - /eventer - --source=kubernetes:'' - --sink=gcl volumeMounts: - name: ssl-certs mountPath: /etc/ssl/certs readOnly: true - image: gcr.io/google_containers/addon-resizer:1.0 name: heapster-nanny resources: limits: cpu: 50m memory: 100Mi requests: cpu: 50m memory: 100Mi env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace command: - /pod_nanny - --cpu=100m - --extra-cpu=0m - --memory=200Mi - --extra-memory=4Mi - --threshold=5 - --deployment=heapster-v1.0.2 - --container=heapster - --poll-period=300000 - image: gcr.io/google_containers/addon-resizer:1.0 name: eventer-nanny resources: limits: cpu: 50m memory: 100Mi requests: cpu: 50m memory: 100Mi env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace command: - /pod_nanny - --cpu=100m - --extra-cpu=0m - --memory=200Mi - --extra-memory=500Ki - --threshold=5 - --deployment=heapster-v1.0.2 - --container=eventer - --poll-period=300000 volumes: - name: ssl-certs hostPath: path: "/etc/ssl/certs"
SkyDNS defaults its forwarding nameservers to the one listed in /etc/resolv.conf. Since SkyDNS runs inside the kube-dns pod as a cluster addon, it inherits its /etc/resolv.conf from its host as described in the kube-dns doc. From your question, it looks like your host's /etc/resolv.conf is configured to use 192.168.122.1 as its nameserver and hence that becomes the forwarding server in your SkyDNS config. I believe 192.168.122.1 is not routable from your Kubernetes cluster and that's why you are seeing "failure to forward request" errors in the kube-dns logs. The simplest solution to this problem is to supply a reachable DNS server as a flag to SkyDNS in your RC config. Here is an example (it's just your RC config, but adds the -nameservers flag in the SkyDNS container spec): apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v11 namespace: kube-system labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v11 template: metadata: labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. # limits: # cpu: 100m # memory: 500Mi # requests: # cpu: 100m # memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.14 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. # limits: # cpu: 100m # # Kube2sky watches all pods. # memory: 200Mi # requests: # cpu: 100m # memory: 50Mi livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 # successThreshold: 1 # failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 30 timeoutSeconds: 5 args: # command = "/kube2sky" - --domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local - -nameservers=8.8.8.8:53,8.8.4.4:53 # Adding this flag. Dont use double quotes. 
ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - name: healthz image: gcr.io/google_containers/exechealthz:1.0 # resources: # # keep request = limit to keep this container in guaranteed class # limits: # cpu: 10m # memory: 20Mi # requests: # cpu: 10m # memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS.
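After editing the RC, a sketch of rolling it out and checking resolution (skydns-rc.yaml is just a hypothetical filename for the manifest above, and <some-running-pod> is any already-running pod whose image includes nslookup, e.g. a busybox pod):
$ kubectl delete rc kube-dns-v11 --namespace=kube-system
$ kubectl create -f skydns-rc.yaml
$ kubectl exec <some-running-pod> -- nslookup kubernetes.default.svc.cluster.local
If the lookup succeeds and the "failure to forward request" lines stop appearing in the skydns logs, the nameserver change took effect.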
###Instruction: I am creating a cluster of 1 master 2 nodes kubernetes. I am trying to create the skydns based on the following: apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v11 namespace: kube-system labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v11 template: metadata: labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. # limits: # cpu: 100m # memory: 500Mi # requests: # cpu: 100m # memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.14 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. # limits: # cpu: 100m # # Kube2sky watches all pods. # memory: 200Mi # requests: # cpu: 100m # memory: 50Mi livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 # successThreshold: 1 # failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 30 timeoutSeconds: 5 args: # command = "/kube2sky" - --domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - name: healthz image: gcr.io/google_containers/exechealthz:1.0 # resources: # # keep request = limit to keep this container in guaranteed class # limits: # cpu: 10m # memory: 20Mi # requests: # cpu: 10m # memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS. However, skydns is spitting out the following: > $kubectl logs kube-dns-v11-k07j9 --namespace=kube-system skydns > 2016/04/18 12:47:05 skydns: falling back to default configuration, > could not read from etcd: 100: Key not found (/skydns) [1] 2016/04/18 > 12:47:05 skydns: ready for queries on cluster.local. 
for > tcp://0.0.0.0:53 [rcache 0] 2016/04/18 12:47:05 skydns: ready for > queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/04/18 > 12:47:11 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:15 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:19 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:23 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:27 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:31 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:35 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:39 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:43 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:47 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:51 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:55 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:59 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:48:03 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" After looking further, I just realized what is a 192.168.122.1? It is virtual switch on kvm. Why is skydns trying to hit my virtual switch or dns server of virtual machine? ### Response: SkyDNS defaults its forwarding nameservers to the one listed in /etc/resolv.conf. Since SkyDNS runs inside the kube-dns pod as a cluster addon, it inherits its /etc/resolv.conf from its host as described in the kube-dns doc. From your question, it looks like your host's /etc/resolv.conf is configured to use 192.168.122.1 as its nameserver and hence that becomes the forwarding server in your SkyDNS config. I believe 192.168.122.1 is not routable from your Kubernetes cluster and that's why you are seeing "failure to forward request" errors in the kube-dns logs. The simplest solution to this problem is to supply a reachable DNS server as a flag to SkyDNS in your RC config. Here is an example (it's just your RC config, but adds the -nameservers flag in the SkyDNS container spec): apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v11 namespace: kube-system labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v11 template: metadata: labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. 
# limits: # cpu: 100m # memory: 500Mi # requests: # cpu: 100m # memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.14 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. # limits: # cpu: 100m # # Kube2sky watches all pods. # memory: 200Mi # requests: # cpu: 100m # memory: 50Mi livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 # successThreshold: 1 # failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 30 timeoutSeconds: 5 args: # command = "/kube2sky" - --domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local - -nameservers=8.8.8.8:53,8.8.4.4:53 # Adding this flag. Dont use double quotes. ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - name: healthz image: gcr.io/google_containers/exechealthz:1.0 # resources: # # keep request = limit to keep this container in guaranteed class # limits: # cpu: 10m # memory: 20Mi # requests: # cpu: 10m # memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS.
I am creating a cluster of 1 master 2 nodes kubernetes. I am trying to create the skydns based on the following: apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v11 namespace: kube-system labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v11 template: metadata: labels: k8s-app: kube-dns version: v11 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd-amd64:2.2.1 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. # limits: # cpu: 100m # memory: 500Mi # requests: # cpu: 100m # memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.14 # resources: # # TODO: Set memory limits when we've profiled the container for large # # clusters, then set request = limit to keep this container in # # guaranteed class. Currently, this container falls into the # # "burstable" category so the kubelet doesn't backoff from restarting it. # limits: # cpu: 100m # # Kube2sky watches all pods. # memory: 200Mi # requests: # cpu: 100m # memory: 50Mi livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 # successThreshold: 1 # failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 30 timeoutSeconds: 5 args: # command = "/kube2sky" - --domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: cpu: 100m memory: 200Mi requests: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://127.0.0.1:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - name: healthz image: gcr.io/google_containers/exechealthz:1.0 # resources: # # keep request = limit to keep this container in guaranteed class # limits: # cpu: 10m # memory: 20Mi # requests: # cpu: 10m # memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS. However, skydns is spitting out the following: > $kubectl logs kube-dns-v11-k07j9 --namespace=kube-system skydns > 2016/04/18 12:47:05 skydns: falling back to default configuration, > could not read from etcd: 100: Key not found (/skydns) [1] 2016/04/18 > 12:47:05 skydns: ready for queries on cluster.local. 
for > tcp://0.0.0.0:53 [rcache 0] 2016/04/18 12:47:05 skydns: ready for > queries on cluster.local. for udp://0.0.0.0:53 [rcache 0] 2016/04/18 > 12:47:11 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:15 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:19 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:23 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:27 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:31 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:35 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:39 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:43 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:47 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:51 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:47:55 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" 2016/04/18 > 12:47:59 skydns: failure to forward request "read udp > 192.168.122.1:53: i/o timeout" 2016/04/18 12:48:03 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout" After looking further, I just realized what is a 192.168.122.1? It is virtual switch on kvm. Why is skydns trying to hit my virtual switch or dns server of virtual machine?
The custom metrics proposal is out of date. Please refer to the user guide, which is currently under review.
###Instruction: I am trying to set up autoscaling on a Kubernetes 1.2.3 (beta) cluster based on custom metrics. (I already tried CPU-based autoscaling on the cluster, and it worked fine.) I tried to follow their custom metrics proposal, but I'm having problems in creating the necessary set-up. This is what I have done so far: Added a custom metrics annotation to the pod spec being deployed (similar to the configuration provided in their proposal): apiVersion: v1 kind: ReplicationController metadata: name: metrix namespace: "default" spec: replicas: 1 template: metadata: labels: app: metrix annotations: metrics.alpha.kubernetes.io/custom-endpoints: > [ { "api": "prometheus", "path": "/status", "port": "9090", "names": ["test1"] }, { "api": "prometheus", "path": "/metrics", "port": "9090" "names": ["test2"] } ] spec: containers: - name: metrix image: janaka/prometheus-ep:v1 resources: requests: cpu: 400m Created a Docker container tagged janaka/prometheus-ep:v1 (local) running a Prometheus-compatible server on port 9090, with /status and /metrics endpoints Enabled custom metrics on the kubelet by appending --enable-custom-metrics=true to KUBELET_OPTS at /etc/default/kubelet (based on the kubelet CLI reference) and restarted the kubelet All pods (in default and kube-system namespaces) are running, and the heapster pod log doesn't contain any 'anomalous' outputs either (except for a small glitch at startup, due to temporary unavailability of InfluxDB): $ kubesys logs -f heapster-daftr I0427 05:07:45.807277 1 heapster.go:60] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb:8086 I0427 05:07:45.807359 1 heapster.go:61] Heapster version 1.1.0-beta1 I0427 05:07:45.807638 1 configs.go:60] Using Kubernetes client with master "https://kubernetes.default" and version "v1" I0427 05:07:45.807661 1 configs.go:61] Using kubelet port 10255 E0427 05:08:15.847319 1 influxdb.go:185] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp xxx.xxx.xxx.xxx:8086: i/o timeout, will retry on use I0427 05:08:15.847376 1 influxdb.go:199] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s I0427 05:08:15.847412 1 heapster.go:87] Starting with InfluxDB Sink I0427 05:08:15.847427 1 heapster.go:87] Starting with Metric Sink I0427 05:08:15.877349 1 heapster.go:166] Starting heapster on port 8082 I0427 05:08:35.000342 1 manager.go:79] Scraping metrics start: 2016-04-27 05:08:00 +0000 UTC, end: 2016-04-27 05:08:30 +0000 UTC I0427 05:08:35.035800 1 manager.go:152] ScrapeMetrics: time: 35.209696ms size: 24 I0427 05:08:35.044674 1 influxdb.go:177] Created database "k8s" on influxDB server at "monitoring-influxdb:8086" I0427 05:09:05.000441 1 manager.go:79] Scraping metrics start: 2016-04-27 05:08:30 +0000 UTC, end: 2016-04-27 05:09:00 +0000 UTC I0427 05:09:06.682941 1 manager.go:152] ScrapeMetrics: time: 1.682157776s size: 24 I0427 06:43:38.767146 1 manager.go:79] Scraping metrics start: 2016-04-27 05:09:00 +0000 UTC, end: 2016-04-27 05:09:30 +0000 UTC I0427 06:43:38.810243 1 manager.go:152] ScrapeMetrics: time: 42.940682ms size: 1 I0427 06:44:05.012989 1 manager.go:79] Scraping metrics start: 2016-04-27 06:43:30 +0000 UTC, end: 2016-04-27 06:44:00 +0000 UTC I0427 06:44:05.063583 1 manager.go:152] ScrapeMetrics: time: 50.368106ms size: 24 I0427 06:44:35.002038 1 manager.go:79] Scraping metrics start: 2016-04-27 06:44:00 +0000 UTC, end: 2016-04-27 
06:44:30 +0000 UTC However, the custom endpoints are not being scraped. (I verified it by adding stderr logs for startup and endpoint handlers of my server; only the server initialization logs are displayed on kubectl logs of the pod.) As I'm a newbie to Kubernetes, any help is appreciated. (From what I understood from the proposal as well as this issue, we don't have to run a separate Prometheus collector in the cluster as cAdvisor should already pull data from the endpoints defined in the pod spec. Is this true, or do I need a separate Prometheus collector as well?) ### Response: The custom metrics proposal is out of date. Please refer to the user guide, which is currently under review.
I am trying to set up autoscaling on a Kubernetes 1.2.3 (beta) cluster based on custom metrics. (I already tried CPU-based autoscaling on the cluster, and it worked fine.) I tried to follow their custom metrics proposal, but I'm having problems in creating the necessary set-up. This is what I have done so far: Added a custom metrics annotation to the pod spec being deployed (similar to the configuration provided in their proposal): apiVersion: v1 kind: ReplicationController metadata: name: metrix namespace: "default" spec: replicas: 1 template: metadata: labels: app: metrix annotations: metrics.alpha.kubernetes.io/custom-endpoints: > [ { "api": "prometheus", "path": "/status", "port": "9090", "names": ["test1"] }, { "api": "prometheus", "path": "/metrics", "port": "9090" "names": ["test2"] } ] spec: containers: - name: metrix image: janaka/prometheus-ep:v1 resources: requests: cpu: 400m Created a Docker container tagged janaka/prometheus-ep:v1 (local) running a Prometheus-compatible server on port 9090, with /status and /metrics endpoints Enabled custom metrics on the kubelet by appending --enable-custom-metrics=true to KUBELET_OPTS at /etc/default/kubelet (based on the kubelet CLI reference) and restarted the kubelet All pods (in default and kube-system namespaces) are running, and the heapster pod log doesn't contain any 'anomalous' outputs either (except for a small glitch at startup, due to temporary unavailability of InfluxDB): $ kubesys logs -f heapster-daftr I0427 05:07:45.807277 1 heapster.go:60] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb:8086 I0427 05:07:45.807359 1 heapster.go:61] Heapster version 1.1.0-beta1 I0427 05:07:45.807638 1 configs.go:60] Using Kubernetes client with master "https://kubernetes.default" and version "v1" I0427 05:07:45.807661 1 configs.go:61] Using kubelet port 10255 E0427 05:08:15.847319 1 influxdb.go:185] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp xxx.xxx.xxx.xxx:8086: i/o timeout, will retry on use I0427 05:08:15.847376 1 influxdb.go:199] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s I0427 05:08:15.847412 1 heapster.go:87] Starting with InfluxDB Sink I0427 05:08:15.847427 1 heapster.go:87] Starting with Metric Sink I0427 05:08:15.877349 1 heapster.go:166] Starting heapster on port 8082 I0427 05:08:35.000342 1 manager.go:79] Scraping metrics start: 2016-04-27 05:08:00 +0000 UTC, end: 2016-04-27 05:08:30 +0000 UTC I0427 05:08:35.035800 1 manager.go:152] ScrapeMetrics: time: 35.209696ms size: 24 I0427 05:08:35.044674 1 influxdb.go:177] Created database "k8s" on influxDB server at "monitoring-influxdb:8086" I0427 05:09:05.000441 1 manager.go:79] Scraping metrics start: 2016-04-27 05:08:30 +0000 UTC, end: 2016-04-27 05:09:00 +0000 UTC I0427 05:09:06.682941 1 manager.go:152] ScrapeMetrics: time: 1.682157776s size: 24 I0427 06:43:38.767146 1 manager.go:79] Scraping metrics start: 2016-04-27 05:09:00 +0000 UTC, end: 2016-04-27 05:09:30 +0000 UTC I0427 06:43:38.810243 1 manager.go:152] ScrapeMetrics: time: 42.940682ms size: 1 I0427 06:44:05.012989 1 manager.go:79] Scraping metrics start: 2016-04-27 06:43:30 +0000 UTC, end: 2016-04-27 06:44:00 +0000 UTC I0427 06:44:05.063583 1 manager.go:152] ScrapeMetrics: time: 50.368106ms size: 24 I0427 06:44:35.002038 1 manager.go:79] Scraping metrics start: 2016-04-27 06:44:00 +0000 UTC, end: 2016-04-27 06:44:30 +0000 UTC 
However, the custom endpoints are not being scraped. (I verified it by adding stderr logs for startup and endpoint handlers of my server; only the server initialization logs are displayed on kubectl logs of the pod.) As I'm a newbie to Kubernetes, any help is appreciated. (From what I understood from the proposal as well as this issue, we don't have to run a separate Prometheus collector in the cluster as cAdvisor should already pull data from the endpoints defined in the pod spec. Is this true, or do I need a separate Prometheus collector as well?)
I'd suggest you have a look at events; see this topic for some guidance. Generally, each object generates events that you can watch in order to be notified of such errors.
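Concretely, the same information surfaces on the command line (junk and podboy are the namespace and pod name from the question), and a programmatic watcher receives the same stream:
$ kubectl get events --namespace=junk
$ kubectl describe pod podboy --namespace=junk
The failedSync / "Failed to pull image" entries shown by describe are Event objects, so watching events in that namespace is the supported way to be told about them rather than parsing reason strings out of Pod status.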
###Instruction: I am using fabric8 to develop a cluster management layer on top of Kubernetes, and I am confused as to what the 'official' API is to obtain notifications of errors when things go wrong when instantiating pods/rep controllers & services etc. In the section "Pod Deployment Code" I have a stripped down version of what we do for pods. In the event that everything goes correctly, our code is fine. We rely on setting 'watches' as you can see in the method deployPodWithWatch. All I do in the given eventReceived callback is to print the event, but our real code will break apart a notification like this: got action: MODIFIED / Pod(apiVersion=v1, kind=Pod, metadata=...etc etc status=PodStatus( conditions=[ and pick out the 'status' element of the Pod and when we get PodCondition(status=True, type=Ready), we know that our pod has been successfully deployed. In the happy path case this works great. And you can actually run the code supplied with variable k8sUrl set to the proper url for your site (hopefully your k8s installation does not require auth which is site specific so i didn't provide code for that). However, suppose you change the variable imageName to "nginBoo". There is no public docker image of that name, so after you run the code, set your kubernetes context to the namespace "junk", and do a describe pod podboy you will see two status messages at the end with the following values for Reason / Message Reason message failedSync Error syncing pod, skipping... failed Failed to pull image "nginBoo": API error (500): Error parsing reference: "nginBoo" is not a valid repository/tag I would like to implement a watch callback so that it catches these types of errors. However, the only thing that I see are 'MODIFIED' events wherein the Pod has a field like this: state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting( reason=API error (500): Error parsing reference: "nginBoo" is not a valid repository/tag I suppose I could look for a reason code that contained the string 'API error' but this seems to be very much an implementation-dependent hack -- it might not cover all cases, and maybe it will change under my feet with future versions. I'd like some more 'official' way of figuring out if there was an error, but my searches have come up dry -- so I humbly request guidance from all of you k8s experts out there. Thanks ! Pod Deployment Code import com.fasterxml.jackson.databind.ObjectMapper import scala.collection.JavaConverters._ import com.ning.http.client.ws.WebSocket import com.typesafe.scalalogging.StrictLogging import io.fabric8.kubernetes.api.model.{DoneableNamespace, Namespace, Pod, ReplicationController} import io.fabric8.kubernetes.client.DefaultKubernetesClient.ConfigBuilder import io.fabric8.kubernetes.client.Watcher.Action import io.fabric8.kubernetes.client.dsl.Resource import io.fabric8.kubernetes.client.{DefaultKubernetesClient, Watcher} object ErrorTest extends App with StrictLogging { // corresponds to --insecure-skip-tls-verify=true, according to io.fabric8.kubernetes.api.model.Cluster val trustCerts = true val k8sUrl = "http://localhost:8080" val namespaceName = "junk" // replace this with name of a namespace that you know exists val imageName: String = "nginx" def go(): Unit = { val kube = getConnection dumpNamespaces(kube) deployPodWithWatch(kube, getPod(image = imageName)) } def deployPodWithWatch(kube: DefaultKubernetesClient, pod: Pod): Unit = { kube.pods().inNamespace(namespaceName).create(pod) /* create the pod ! 
*/ val podWatchWebSocket: WebSocket = /* create watch on the pod */ kube.pods().inNamespace(namespaceName).withName(pod.getMetadata.getName).watch(getPodWatch) } def getPod(image: String): Pod = { val jsonTemplate = """ |{ | "kind": "Pod", | "apiVersion": "v1", | "metadata": { | "name": "podboy", | "labels": { | "app": "nginx" | } | }, | "spec": { | "containers": [ | { | "name": "podboy", | "image": "<image>", | "ports": [ | { | "containerPort": 80, | "protocol": "TCP" | } | ] | } | ] | } |} """. stripMargin val replacement: String = "image\": \"" + image val json = jsonTemplate.replaceAll("image\": \"<image>", replacement) System.out.println("json:" + json); new ObjectMapper().readValue(json, classOf[Pod]) } def dumpNamespaces(kube: DefaultKubernetesClient): Unit = { val namespaceNames = kube.namespaces().list().getItems.asScala.map { (ns: Namespace) => { ns.getMetadata.getName } } System.out.println("namespaces are:" + namespaceNames); } def getConnection = { val configBuilder = new ConfigBuilder() val config = configBuilder. trustCerts(trustCerts). masterUrl(k8sUrl). build() new DefaultKubernetesClient(config) } def getPodWatch: Watcher[Pod] = { new Watcher[Pod]() { def eventReceived(action: Action, watchedPod: Pod) { System.out.println("got action: " + action + " / " + watchedPod) } } } go() } ### Response: I'd suggest you to have a look at events, see this topic for some guidance. Generally each object should generate events you can watch and be notified of such errors.
I am using fabric8 to develop a cluster management layer on top of Kubernetes, and I am confused as to what the 'official' API is to obtain notifications of errors when things go wrong when instantiating pods/rep controllers & services etc. In the section "Pod Deployment Code" I have a stripped down version of what we do for pods. In the event that everything goes correctly, our code is fine. We rely on setting 'watches' as you can see in the method deployPodWithWatch. All I do in the given eventReceived callback is to print the event, but our real code will break apart a notification like this: got action: MODIFIED / Pod(apiVersion=v1, kind=Pod, metadata=...etc etc status=PodStatus( conditions=[ and pick out the 'status' element of the Pod and when we get PodCondition(status=True, type=Ready), we know that our pod has been successfully deployed. In the happy path case this works great. And you can actually run the code supplied with variable k8sUrl set to the proper url for your site (hopefully your k8s installation does not require auth which is site specific so i didn't provide code for that). However, suppose you change the variable imageName to "nginBoo". There is no public docker image of that name, so after you run the code, set your kubernetes context to the namespace "junk", and do a describe pod podboy you will see two status messages at the end with the following values for Reason / Message Reason message failedSync Error syncing pod, skipping... failed Failed to pull image "nginBoo": API error (500): Error parsing reference: "nginBoo" is not a valid repository/tag I would like to implement a watch callback so that it catches these types of errors. However, the only thing that I see are 'MODIFIED' events wherein the Pod has a field like this: state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting( reason=API error (500): Error parsing reference: "nginBoo" is not a valid repository/tag I suppose I could look for a reason code that contained the string 'API error' but this seems to be very much an implementation-dependent hack -- it might not cover all cases, and maybe it will change under my feet with future versions. I'd like some more 'official' way of figuring out if there was an error, but my searches have come up dry -- so I humbly request guidance from all of you k8s experts out there. Thanks ! Pod Deployment Code import com.fasterxml.jackson.databind.ObjectMapper import scala.collection.JavaConverters._ import com.ning.http.client.ws.WebSocket import com.typesafe.scalalogging.StrictLogging import io.fabric8.kubernetes.api.model.{DoneableNamespace, Namespace, Pod, ReplicationController} import io.fabric8.kubernetes.client.DefaultKubernetesClient.ConfigBuilder import io.fabric8.kubernetes.client.Watcher.Action import io.fabric8.kubernetes.client.dsl.Resource import io.fabric8.kubernetes.client.{DefaultKubernetesClient, Watcher} object ErrorTest extends App with StrictLogging { // corresponds to --insecure-skip-tls-verify=true, according to io.fabric8.kubernetes.api.model.Cluster val trustCerts = true val k8sUrl = "http://localhost:8080" val namespaceName = "junk" // replace this with name of a namespace that you know exists val imageName: String = "nginx" def go(): Unit = { val kube = getConnection dumpNamespaces(kube) deployPodWithWatch(kube, getPod(image = imageName)) } def deployPodWithWatch(kube: DefaultKubernetesClient, pod: Pod): Unit = { kube.pods().inNamespace(namespaceName).create(pod) /* create the pod ! 
*/ val podWatchWebSocket: WebSocket = /* create watch on the pod */ kube.pods().inNamespace(namespaceName).withName(pod.getMetadata.getName).watch(getPodWatch) } def getPod(image: String): Pod = { val jsonTemplate = """ |{ | "kind": "Pod", | "apiVersion": "v1", | "metadata": { | "name": "podboy", | "labels": { | "app": "nginx" | } | }, | "spec": { | "containers": [ | { | "name": "podboy", | "image": "<image>", | "ports": [ | { | "containerPort": 80, | "protocol": "TCP" | } | ] | } | ] | } |} """. stripMargin val replacement: String = "image\": \"" + image val json = jsonTemplate.replaceAll("image\": \"<image>", replacement) System.out.println("json:" + json); new ObjectMapper().readValue(json, classOf[Pod]) } def dumpNamespaces(kube: DefaultKubernetesClient): Unit = { val namespaceNames = kube.namespaces().list().getItems.asScala.map { (ns: Namespace) => { ns.getMetadata.getName } } System.out.println("namespaces are:" + namespaceNames); } def getConnection = { val configBuilder = new ConfigBuilder() val config = configBuilder. trustCerts(trustCerts). masterUrl(k8sUrl). build() new DefaultKubernetesClient(config) } def getPodWatch: Watcher[Pod] = { new Watcher[Pod]() { def eventReceived(action: Action, watchedPod: Pod) { System.out.println("got action: " + action + " / " + watchedPod) } } } go() }
1. Update the firewall rules for: IP 130.211.0.0/22, tcp:30000-32767.
2. Create a NodePort type service:
apiVersion: v1
kind: Service
metadata:
  name: yourservicenodeport
  labels:
    name: your-service-node-port
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: yourpods
3. Create a health check for the nodePort, which in this case is 30001.
4. Create an ingress service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: youTheking-ingress-service
spec:
  backend:
    serviceName: yourservice
    servicePort: 80
5. Wait a few minutes; be patient.
6. Change the health check on the HTTP load balancer:
   a. Go to Load Balancing on the Networking tab.
   b. Click the Advanced menu.
   c. Go to Backend Services and click Edit.
   d. Update the health check option and use the one created for the NodePort service.
7. Repeat step 5 until the instance group is recognized as healthy.
8. If SSL is needed, go back to the load balancer, click Edit, then Frontend Configuration, and add HTTPS with your certificate.
You are ready to roll.
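For step 1, a sketch of the firewall rule with the gcloud CLI (the rule name is a placeholder; double-check the flags against your gcloud version):
$ gcloud compute firewall-rules create allow-lb-health-checks \
    --source-ranges 130.211.0.0/22 \
    --allow tcp:30000-32767
Once the ingress from step 4 is created, kubectl get ingress shows the external address assigned to the HTTP load balancer.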
###Instruction: I am running a restfull service behind self signed cert thru NGINX in google cloud kubernetes infrastructure. Kubernetes service loader exposes 443 and routes the traffic those containers. All is working just fine as expected other than asking internal clients to ignore the self sign cert warning! It is time for to move to CA cert thus only option as far as I see is https loader but I couldnt figure out how we can reroute the traffic to service loader or directly to pods as service loader(http loader) Any help apprecaited ### Response: Update Firewall Rules for: IP: 130.211.0.0/22 tcp:30000-32767 Create NodePort type service: apiVersion: v1 kind: Service metadata: name: yourservicenodeport labels: name: your-service-node-port spec: type: NodePort ports: - port: 80 nodePort: 30001 selector: name: yourpods Create health check. For the nodeport which is in this case: 30001 Create an ingress service: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: youTheking-ingress-service spec: backend: serviceName: yourservice servicePort: 80 Wait for few minutes, be patient. Change the health check on http load balancer. a. Go to the Load Balancing on Networking Tab. b. Click Advance menu. c. Go Backend Services and Edit. d. Update health check option and use the one created for nodeport service. Repeat step 5 for instance group to be recognized health. SSL is needed, go back to the load balancer, edit, click Frontend Configuration, then add https with cert. You are ready to roll.
I am running a restfull service behind self signed cert thru NGINX in google cloud kubernetes infrastructure. Kubernetes service loader exposes 443 and routes the traffic those containers. All is working just fine as expected other than asking internal clients to ignore the self sign cert warning! It is time for to move to CA cert thus only option as far as I see is https loader but I couldnt figure out how we can reroute the traffic to service loader or directly to pods as service loader(http loader) Any help apprecaited
I was stuck on this for a while too. I found the solution was surprisingly simple. If you already have a master up, start the following processes on an unconnected minion in the order they appear. (I imagine you can start a similar network overlay first if you're using a different one.)
/opt/bin/flanneld --etcd-endpoints=http://<masterip>:4001
/opt/bin/kubelet --address=0.0.0.0 --port=10250 --hostname-override=<localhostip> --api-servers=http://<masterip>:8080 --logtostderr=true --cluster-domain=cluster.local --cluster-dns=192.168.3.10
/opt/bin/kube-proxy --master=http://<masterip> --logtostderr=true
If in doubt, just look at one of your connected minions' processes to see what flags each process is running with.
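Once those processes are up, a quick check from the master that the new minion registered (using whatever value you passed to --hostname-override):
$ kubectl get nodes
$ kubectl describe node <localhostip>
The new entry should move to Ready once its kubelet has reported in.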
###Instruction: I have a Kubernetes cluster running on 3 servers, a master and 2 minions. I would like to add another minion. Is it possible to add a minion without having to do the complete installation again? So far when searching for guides to do this, I can only find excellent guides on getting the whole cluster up. ### Response: I was stuck on this for awhile too. I found the solution was surprisingly simple. If you already have a master up, start the following processes on an unconnected minion in the order they appear. (I imagine you can start a similiar network overlay first if you're using a different one). /opt/bin/flanneld --etcd-endpoints=http://<masterip>:4001 /opt/bin/kubelet --address=0.0.0.0 --port=10250 --hostname-override=<localhostip> --api-servers=http://<masterip>:8080 --logtostderr=true --cluster-domain=cluster.local --cluster-dns=192.168.3.10 /opt/bin/kube-proxy --master=http://<masterip> --logtostderr=true If in doubt, just look at one of your connected minon's processes to see what flags that each process is running with.
I have a Kubernetes cluster running on 3 servers, a master and 2 minions. I would like to add another minion. Is it possible to add a minion without having to do the complete installation again? So far when searching for guides to do this, I can only find excellent guides on getting the whole cluster up.
First we have to find out which rc is running : kubectl get rc --namespace=kube-system We have to delete Running rc : kubectl delete rc above_running_rc_name --namespace=kube-system Then cluster down script "KUBERNETES_PROVIDER=ubuntu ./kube-down.sh", will execute without Error "waiting for tearing down pods" EXAMPLE :: root@ubuntu:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh Bringing down cluster using provider: ubuntu Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa) No resources found No resources found service "kubernetes" deleted No resources found waiting for tearing down pods waiting for tearing down pods ^C root@ubuntu:~/kubernetes/cluster# kubectl get rc --namespace=kube-system CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE kubernetes-dashboard-v1.0.1 kubernetes-dashboard gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1 k8s-app=kubernetes-dashboard 1 44m root@ubuntu:~/kubernetes/cluster# root@ubuntu:~/kubernetes/cluster# kubectl delete rc kubernetes-dashboard-v1.0.1 --namespace=kube-system replicationcontroller "kubernetes-dashboard-v1.0.1" deleted root@ubuntu:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh Bringing down cluster using provider: ubuntu Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa) No resources found No resources found service "kubernetes" deleted No resources found Cleaning on master 172.27.59.208 26979 etcd stop/waiting Connection to 172.27.59.208 closed. Connection to 172.27.59.208 closed. Connection to 172.27.59.208 closed. Cleaning on node 172.27.59.233 2165 flanneld stop/waiting Connection to 172.27.59.233 closed. Connection to 172.27.59.233 closed. Done
###Instruction: I have a Kubernetes cluster installed in my Ubuntu machines. It consists of three machines: one master/node and two nodes. When I turn down the cluster, it never stops printing "waiting for tearing down pods": root@kubernetes01:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh Bringing down cluster using provider: ubuntu Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa) No resources found No resources found service "kubernetes" deleted No resources found waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods There is no pods nor services running when I turn it down. Finally, I have to force stop by killing processes and stoping services. ### Response: First we have to find out which rc is running : kubectl get rc --namespace=kube-system We have to delete Running rc : kubectl delete rc above_running_rc_name --namespace=kube-system Then cluster down script "KUBERNETES_PROVIDER=ubuntu ./kube-down.sh", will execute without Error "waiting for tearing down pods" EXAMPLE :: root@ubuntu:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh Bringing down cluster using provider: ubuntu Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa) No resources found No resources found service "kubernetes" deleted No resources found waiting for tearing down pods waiting for tearing down pods ^C root@ubuntu:~/kubernetes/cluster# kubectl get rc --namespace=kube-system CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE kubernetes-dashboard-v1.0.1 kubernetes-dashboard gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1 k8s-app=kubernetes-dashboard 1 44m root@ubuntu:~/kubernetes/cluster# root@ubuntu:~/kubernetes/cluster# kubectl delete rc kubernetes-dashboard-v1.0.1 --namespace=kube-system replicationcontroller "kubernetes-dashboard-v1.0.1" deleted root@ubuntu:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh Bringing down cluster using provider: ubuntu Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa) No resources found No resources found service "kubernetes" deleted No resources found Cleaning on master 172.27.59.208 26979 etcd stop/waiting Connection to 172.27.59.208 closed. Connection to 172.27.59.208 closed. Connection to 172.27.59.208 closed. Cleaning on node 172.27.59.233 2165 flanneld stop/waiting Connection to 172.27.59.233 closed. Connection to 172.27.59.233 closed. Done
I have a Kubernetes cluster installed in my Ubuntu machines. It consists of three machines: one master/node and two nodes. When I turn down the cluster, it never stops printing "waiting for tearing down pods": root@kubernetes01:~/kubernetes/cluster# KUBERNETES_PROVIDER=ubuntu ./kube-down.sh Bringing down cluster using provider: ubuntu Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa) No resources found No resources found service "kubernetes" deleted No resources found waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods waiting for tearing down pods There is no pods nor services running when I turn it down. Finally, I have to force stop by killing processes and stoping services.
Sounds like you would need an extra service running either in your nsqd container or in a separate container in the same pod. The role of that service would be to poll the API regularly in order to fetch the list of endpoints.
Assuming that you enabled Service Accounts (enabled by default), here is a proof of concept on the shell using curl and jq from inside a pod:
# Read token and CA cert from Service Account
CACERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Replace the namespace ("kube-system") and service name ("kube-dns")
ENDPOINTS=$(curl -s --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
   https://kubernetes.default.svc/api/v1/namespaces/kube-system/endpoints/kube-dns \
)

# Filter the JSON output
echo "$ENDPOINTS" | jq -r .subsets[].addresses[].ip
# output:
# 10.100.42.3
# 10.100.67.3
Take a look at the source code of Kube2sky for a good implementation of that kind of service in Go.
###Instruction: I have a set of pods providing nsqlookupd service. Now I need each nsqd container to have a list of nsqlookupd servers to connect to (while service will point to different every time) simultaneously. Something similar I get with kubectl describe service nsqlookupd
...
Endpoints: ....
but I want to have it in a variable within my deployment definition or somehow from within nsqd container ### Response: Sounds like you would need an extra service running either in your nsqd container or in a separate container in the same pod. The role of that service would be to poll the API regularly in order to fetch the list of endpoints.
Assuming that you enabled Service Accounts (enabled by default), here is a proof of concept on the shell using curl and jq from inside a pod:
# Read token and CA cert from Service Account
CACERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Replace the namespace ("kube-system") and service name ("kube-dns")
ENDPOINTS=$(curl -s --cacert "$CACERT" -H "Authorization: Bearer $TOKEN" \
   https://kubernetes.default.svc/api/v1/namespaces/kube-system/endpoints/kube-dns \
)

# Filter the JSON output
echo "$ENDPOINTS" | jq -r .subsets[].addresses[].ip
# output:
# 10.100.42.3
# 10.100.67.3
Take a look at the source code of Kube2sky for a good implementation of that kind of service in Go.
I have a set of pods providing nsqlookupd service. Now I need each nsqd container to have a list of nsqlookupd servers to connect to (while service will point to different every time) simultaneously. Something similar I get with kubectl describe service nsqlookupd ... Endpoints: .... but I want to have it in a variable within my deployment definition or somehow from within nsqd container
So it seems that there is no support for this on kube-aws currently, quoting one of the authors:

We are currently working on a kube-aws distribution for this approach that includes Kibana for visualizing the Elasticsearch data.

A suggested workaround also appears in this issue page, including extra details regarding its status: https://github.com/coreos/coreos-kubernetes/issues/320
###Instruction: I'm following the k8s logging instructions on how to configure cluster level logging. I'm using kube-aws cli Tool to configure the cluster, and I can't seem to find a way to make it work. I've tried setting the env vars as they mentioned in the k8s logging guide (KUBE_ENABLE_NODE_LOGGING and KUBE_LOGGING_DESTINATION) before running kube-aws up but that didn't seem to change anything. After that, I've tried running the es and kibana rc's and services manually by taking them from the cluster/addons/fluentd-elasticsearch directory on k8s github repo, but that ran only those specific services and not the fluentd-elasticsearch service which supposed to run also by the tutorial example. running kubectl get pods --namespace=kube-system returns the following: where we can see that the fluentd-elasticsearch-kubernetes-node is missing. Also tried connecting to the cluster but failed with: unauthorized following the k8s logging instructions and running the command kubectl config view didn't return any username and password, and when tried accessing the es url, I didn't get any dialog with asking for username and password. Not sure if it related to the first issue. Not sure what I'm missing here. Thanks. ### Response: So it seems that there is no support for this on kube-aws currently, quoting one of the authors: We are currently working on a kube-was distribution for this approach that includes Kibana for visualizing the elastic search data. Also a suggested workaround appears in this issue page including extra details regarding it's status: https://github.com/coreos/coreos-kubernetes/issues/320
I'm following the k8s logging instructions on how to configure cluster level logging. I'm using kube-aws cli Tool to configure the cluster, and I can't seem to find a way to make it work. I've tried setting the env vars as they mentioned in the k8s logging guide (KUBE_ENABLE_NODE_LOGGING and KUBE_LOGGING_DESTINATION) before running kube-aws up but that didn't seem to change anything. After that, I've tried running the es and kibana rc's and services manually by taking them from the cluster/addons/fluentd-elasticsearch directory on k8s github repo, but that ran only those specific services and not the fluentd-elasticsearch service which supposed to run also by the tutorial example. running kubectl get pods --namespace=kube-system returns the following: where we can see that the fluentd-elasticsearch-kubernetes-node is missing. Also tried connecting to the cluster but failed with: unauthorized following the k8s logging instructions and running the command kubectl config view didn't return any username and password, and when tried accessing the es url, I didn't get any dialog with asking for username and password. Not sure if it related to the first issue. Not sure what I'm missing here. Thanks.
The short answer is yes. kube-proxy generates IPtables rules for each service so that it can be accessed from both hosts and pods. When in doubt, execute iptables -t nat -n -v on a host where kube-proxy is running and search for the IP of the service. You can post the output here as a comment if you need help.
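For example, a quick way to check on a node (a sketch, using the mysql ClusterIP from the question) is to list the KUBE-SERVICES chain in the NAT table and grep for the address:
# List the NAT rules kube-proxy generated and look for the service IP
sudo iptables -t nat -n -v -L KUBE-SERVICES | grep 192.168.4.191
If a matching KUBE-SVC-... jump rule shows up with non-zero packet counters, traffic from the host is hitting the service rules as expected.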
###Instruction: I am using k8s 1.2 on ubuntu 14.04.4. Here is some info on my one k8s minion node: # cat /etc/os-release NAME="Ubuntu" VERSION="14.04.4 LTS, Trusty Tahr" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 14.04.4 LTS" VERSION_ID="14.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" # uname -a Linux k8s-010 3.19.0-47-generic #53~14.04.1-Ubuntu SMP Mon Jan 18 16:09:14 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux You see, I upgraded the linux kernel to 3.19.0-47. here are kube-proxy log on this node: # cat /var/log/upstart/kube-proxy.log.1 I0429 17:55:11.397842 985 server.go:200] Using iptables Proxier. I0429 17:55:11.397941 985 server.go:213] Tearing down userspace rules. I0429 17:55:12.408962 985 conntrack.go:36] Setting nf_conntrack_max to 262144 I0429 17:55:12.409050 985 conntrack.go:41] Setting conntrack hashsize to 65536 I0429 17:55:12.409288 985 conntrack.go:46] Setting nf_conntrack_tcp_timeout_established to 86400 I have some cluster-ips on my k8s: $ kubectl --namespace=enlighten get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mongo 192.168.15.118 <none> 27017/TCP 1d mysql 192.168.4.191 <none> 3306/TCP 23h redis 192.168.193.134 <none> 6379/TCP 1d I found my pod in this cluster can access this CLusterIP. But If I tried to access this clusterIP from the node host directly, It failed. I used iptables-save to dump the rules. Since there are a lot of rules, I filter the relevance in: # iptables-save |grep -e mysql -e 192.168.4.191 -e "-A KUBE-MARK-MASQ" -e KUBE-SEP-HLHD6IM4G76NUUUM :KUBE-SEP-HLHD6IM4G76NUUUM - [0:0] -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-SEP-HLHD6IM4G76NUUUM -s 172.17.45.2/32 -m comment --comment "enlighten/mysql:" -j KUBE-MARK-MASQ -A KUBE-SEP-HLHD6IM4G76NUUUM -p tcp -m comment --comment "enlighten/mysql:" -m tcp -j DNAT --to-destination 172.17.45.2:3306 -A KUBE-SERVICES -d 192.168.4.191/32 -p tcp -m comment --comment "enlighten/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-FIZ3LEJD226YKZ4B -A KUBE-SVC-FIZ3LEJD226YKZ4B -m comment --comment "enlighten/mysql:" -j KUBE-SEP-HLHD6IM4G76NUUUM Is it WAD or an issue? ### Response: The short answer is yes. kube-proxy generates IPtables rules for each service so that it can be accessed from both hosts and pods. When in doubt, execute iptables -t nat -n -v on a host where kube-proxy is running and search for the IP of the service. You can post the output here as a comment if you need help.
I am using k8s 1.2 on ubuntu 14.04.4. Here is some info on my one k8s minion node: # cat /etc/os-release NAME="Ubuntu" VERSION="14.04.4 LTS, Trusty Tahr" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 14.04.4 LTS" VERSION_ID="14.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" # uname -a Linux k8s-010 3.19.0-47-generic #53~14.04.1-Ubuntu SMP Mon Jan 18 16:09:14 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux You see, I upgraded the linux kernel to 3.19.0-47. here are kube-proxy log on this node: # cat /var/log/upstart/kube-proxy.log.1 I0429 17:55:11.397842 985 server.go:200] Using iptables Proxier. I0429 17:55:11.397941 985 server.go:213] Tearing down userspace rules. I0429 17:55:12.408962 985 conntrack.go:36] Setting nf_conntrack_max to 262144 I0429 17:55:12.409050 985 conntrack.go:41] Setting conntrack hashsize to 65536 I0429 17:55:12.409288 985 conntrack.go:46] Setting nf_conntrack_tcp_timeout_established to 86400 I have some cluster-ips on my k8s: $ kubectl --namespace=enlighten get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mongo 192.168.15.118 <none> 27017/TCP 1d mysql 192.168.4.191 <none> 3306/TCP 23h redis 192.168.193.134 <none> 6379/TCP 1d I found my pod in this cluster can access this CLusterIP. But If I tried to access this clusterIP from the node host directly, It failed. I used iptables-save to dump the rules. Since there are a lot of rules, I filter the relevance in: # iptables-save |grep -e mysql -e 192.168.4.191 -e "-A KUBE-MARK-MASQ" -e KUBE-SEP-HLHD6IM4G76NUUUM :KUBE-SEP-HLHD6IM4G76NUUUM - [0:0] -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000 -A KUBE-SEP-HLHD6IM4G76NUUUM -s 172.17.45.2/32 -m comment --comment "enlighten/mysql:" -j KUBE-MARK-MASQ -A KUBE-SEP-HLHD6IM4G76NUUUM -p tcp -m comment --comment "enlighten/mysql:" -m tcp -j DNAT --to-destination 172.17.45.2:3306 -A KUBE-SERVICES -d 192.168.4.191/32 -p tcp -m comment --comment "enlighten/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-FIZ3LEJD226YKZ4B -A KUBE-SVC-FIZ3LEJD226YKZ4B -m comment --comment "enlighten/mysql:" -j KUBE-SEP-HLHD6IM4G76NUUUM Is it WAD or an issue?
The kube-aws tool utilizes an autoscaling group for the worker machines. You could resize this based on CloudWatch metrics like CPU/RAM, although this isn't set up by default. To scale it manually up or down, you can use the AWS console.
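If you prefer not to click through the console, the same resize can be done with the AWS CLI; this is only a sketch and the group name is a placeholder you would look up first:
# Find the worker autoscaling group created by kube-aws
aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'
# Resize it (min/max/desired values are examples)
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name <your-worker-asg-name> \
  --min-size 2 --max-size 10 --desired-capacity 5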
###Instruction: While trying to install Kubernetes on AWS, I have come across two major ways of doing things. The first one is to use the deploy scripts that come packaged with Kubernetes to create a Kubernetes cluster on AWS. Another one is to use CoreOS based kube-aws tool to run a cluster. What are the major advantages and disadvantages of the two approaches? However, I have not been able to understand how one can resize an existing live cluster without downtime in either of the two systems. Is there any way to deploy a resizable Kubernetes cluster on AWS using tools or scripts? Alternatively, is it even possible to resize a live cluster? If yes, how can we do that without using a tool or a script (assuming none is available). ### Response: The kube-aws tool utilizes an autoscaling group for the worker machines. You could resize this based on CloudWatch metrics like CPU/RAM, although this isn't set up by default. To scale it manually up or down, you can use the AWS console.
While trying to install Kubernetes on AWS, I have come across two major ways of doing things. The first one is to use the deploy scripts that come packaged with Kubernetes to create a Kubernetes cluster on AWS. Another one is to use CoreOS based kube-aws tool to run a cluster. What are the major advantages and disadvantages of the two approaches? However, I have not been able to understand how one can resize an existing live cluster without downtime in either of the two systems. Is there any way to deploy a resizable Kubernetes cluster on AWS using tools or scripts? Alternatively, is it even possible to resize a live cluster? If yes, how can we do that without using a tool or a script (assuming none is available).
Your service's selector is wrong. It should be selecting a label from the pod template, not a label on the RC itself. If you change the following in your service: "selector": { "app": "foo-frontend-rc" }, to: "selector": { "app": "foo-frontend" }, It should fix it. Update Change your service definition to { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "foo-frontend-service" }, "spec": { "selector": { "app": "foo-frontend" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3009, "nodePort": 30009 } ], "type": "LoadBalancer" } }
###Instruction: I can do a deploy like this, but cannot do it via command line. I was looking at doing it like this kubectl create -f kubernetes-rc.json { "kind": "ReplicationController", "apiVersion": "v1", "metadata": { "name": "foo-frontend-rc", "labels": { "www": true }, "namespace": "foo" }, "spec": { "replicas": 1, "template": { "metadata": { "labels": { "app": "foo-frontend" } }, "spec": { "containers": [ { "name": "foo-frontend", "image": "gcr.io/atomic-griffin-130023/foo-frontend:b3fc862", "ports": [ { "containerPort": 3009, "protocol": "TCP" } ], "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "dnsPolicy": "ClusterFirst" } } } } and kubectl create -f kubernetes-service.json { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "foo-frontend-service" }, "spec": { "selector": { "app": "foo-frontend-rc" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3009 } ] } } to no avail. It creates the rc, but it won’t expose the service externally. ### Response: Your service's selector is wrong. It should be selecting a label from the pod template, not a label on the RC itself. If you change the following in your service: "selector": { "app": "foo-frontend-rc" }, to: "selector": { "app": "foo-frontend" }, It should fix it. Update Change your service definition to { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "foo-frontend-service" }, "spec": { "selector": { "app": "foo-frontend" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3009, "nodePort": 30009 } ], "type": "LoadBalancer" } }
I can do a deploy like this, but cannot do it via command line. I was looking at doing it like this kubectl create -f kubernetes-rc.json { "kind": "ReplicationController", "apiVersion": "v1", "metadata": { "name": "foo-frontend-rc", "labels": { "www": true }, "namespace": "foo" }, "spec": { "replicas": 1, "template": { "metadata": { "labels": { "app": "foo-frontend" } }, "spec": { "containers": [ { "name": "foo-frontend", "image": "gcr.io/atomic-griffin-130023/foo-frontend:b3fc862", "ports": [ { "containerPort": 3009, "protocol": "TCP" } ], "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Always", "dnsPolicy": "ClusterFirst" } } } } and kubectl create -f kubernetes-service.json { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "foo-frontend-service" }, "spec": { "selector": { "app": "foo-frontend-rc" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3009 } ] } } to no avail. It creates the rc, but it won’t expose the service externally.
The Kubernetes API doesn't currently expose a way to change the logging behavior. It'll rotate the log files as appropriate to avoid filling up the disk, but if you need more control, you'll have to modify the docker daemon on each node to change its logging driver. Or if you want to do it for a specific application, change the command in your x.yaml file that you're using to start the app to redirect stdout and stderr to /dev/null inside the container.
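As a rough sketch of that last option (the image and binary names here are placeholders), the container spec in x.yaml could wrap the app in a shell so its output is discarded:
containers:
- name: myapp
  image: myimage:latest
  # Redirect stdout/stderr inside the container so nothing reaches the docker logs
  command: ["/bin/sh", "-c", "exec /usr/local/bin/myapp > /dev/null 2>&1"]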
###Instruction: I have tried kubectl create -f x.yaml --logtostderr=true but it didn't work. ### Response: The Kubernetes API doesn't currently expose a way to change the logging behavior. It'll rotate the log files as appropriate to avoid filling up the disk, but if you need more control, you'll have to modify the docker daemon on each node to change its logging driver. Or if you want to do it for a specific application, change the command in your x.yaml file that you're using to start the app to redirect stdout and stderr to /dev/null inside the container.
I have tried kubectl create -f x.yaml --logtostderr=true but it didn't work.
If you want a way to do it using pure python, with no libraries, here's one using multiline strings and format: def writeConfig(**kwargs): template = """ apiVersion: v1 kind: pod metadata: name: {name} spec: replicas: {replicas} template: metadata: labels: run: {name} spec: containers: - name: {name} image: {image} ports: - containerPort: 80""" with open('somefile.yaml', 'w') as yfile: yfile.write(template.format(**kwargs)) # usage: writeConfig(name="someName", image="myImg", replicas="many")
###Instruction: Actually, i have kubernetes cluster set up. I want to generate yaml config file dynamically based on a template using python. template.yaml apiVersion: v1 kind: pod metadata: name: $name spec: replicas: $replicas template: metadata: labels: run: $name spec: containers: - name: $name image: $image ports: - containerPort: 80 Placeholders name, replicas and image are the input of my python method. Any help will be appreciated. ### Response: If you want a way to do it using pure python, with no libraries, here's one using multiline strings and format: def writeConfig(**kwargs): template = """ apiVersion: v1 kind: pod metadata: name: {name} spec: replicas: {replicas} template: metadata: labels: run: {name} spec: containers: - name: {name} image: {image} ports: - containerPort: 80""" with open('somefile.yaml', 'w') as yfile: yfile.write(template.format(**kwargs)) # usage: writeConfig(name="someName", image="myImg", replicas="many")
Actually, i have kubernetes cluster set up. I want to generate yaml config file dynamically based on a template using python. template.yaml apiVersion: v1 kind: pod metadata: name: $name spec: replicas: $replicas template: metadata: labels: run: $name spec: containers: - name: $name image: $image ports: - containerPort: 80 Placeholders name, replicas and image are the input of my python method. Any help will be appreciated.
It's not really what it was designed for, but you could run kubectl attach $POD. It'll show you the output of the pod while it's running and automatically terminate once the pod is done running. Of course, you'll have to handle the error that it prints if the pod is already done running, since it's only really meant for use on pods that are currently running.
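If attach doesn't fit, a simple polling loop over the pod phase also works; this is only a sketch and assumes the pod name is in $POD in the current namespace:
# Poll until the pod reaches a terminal phase
while true; do
  phase=$(kubectl get pod "$POD" -o jsonpath='{.status.phase}')
  if [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ]; then
    break
  fi
  sleep 5
done
echo "Pod $POD finished with phase $phase"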
###Instruction: What's the best way in script to wait for a job or pod to complete in Kubernetes or Google Container Engine? In particular, it would be better to be notified rather than polling for status in kubectl, but I'd be happy with a fairly efficient loop without any slips between the cracks. Essentially, I'd like the equivalent of a plain docker run since that blocks until command termination, but I don't want to use docker directly in this case. I looked at Github Issue #1899 but it looks unresolved as yet. ### Response: It's not really what it was designed for, but you could run kubectl attach $POD. It'll show you the output of the pod while it's running and automatically terminate once the pod is done running. Of course, you'll have to handle the error that it prints if the pod is already done running, since it's only really meant for use on pods that are currently running.
What's the best way in script to wait for a job or pod to complete in Kubernetes or Google Container Engine? In particular, it would be better to be notified rather than polling for status in kubectl, but I'd be happy with a fairly efficient loop without any slips between the cracks. Essentially, I'd like the equivalent of a plain docker run since that blocks until command termination, but I don't want to use docker directly in this case. I looked at Github Issue #1899 but it looks unresolved as yet.
K8s doesn't have special TLS support for the ordinary services. You need to use one of the following methods:

using Ingress: see http://kubernetes.io/docs/user-guide/ingress/#tls. You need to choose an Ingress controller that implements the Ingress functionality; you can use GLBC if you are on GCE, or you can use the nginx one. Both of them support TLS. Please note that the Ingress is still a beta feature with limitations.
The service-loadbalancer in the contrib repo also supports TLS: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer#ssl-termination
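For the Ingress route, a minimal TLS sketch looks like the following; the Secret name is an assumption (it would hold your tls.crt and tls.key), and the backend matches the service from the question:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-frontend-ingress
spec:
  tls:
  # Secret containing tls.crt and tls.key
  - secretName: foo-tls-secret
  backend:
    serviceName: foo-frontend-service
    servicePort: 80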
###Instruction: Can’t find any resources that simply say here’s where your cert goes and here’s how to enable it. I have the cert there when I run gcloud compute ssl-certificates list. I have a cluster with kubernetes running and exposing http traffic via this service: { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "foo-frontend-service" }, "spec": { "selector": { "app": "foo-frontend-rc" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3009 } ] } } Need to know how to put the cert in the right place to be utilized Need to know how to reconfigure my service Need to know what my new SSL endpoint will be. Is it the same? ### Response: K8s doesn't have special TLS support for the ordinary services. You need to use one of the following methods: using Ingress: see http://kubernetes.io/docs/user-guide/ingress/#tls. You need to choose a Ingress controller which implements the Ingress functionalities, you can use GLBC if you are on GCE, or you can use the nginx one. Both of them supports TLS. Please note that the Ingress is still beta feature with limitations. The service-loadbalancer in the contrib repo also supports tls: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer#ssl-termination
Can’t find any resources that simply say here’s where your cert goes and here’s how to enable it. I have the cert there when I run gcloud compute ssl-certificates list. I have a cluster with kubernetes running and exposing http traffic via this service: { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "foo-frontend-service" }, "spec": { "selector": { "app": "foo-frontend-rc" }, "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3009 } ] } } Need to know how to put the cert in the right place to be utilized Need to know how to reconfigure my service Need to know what my new SSL endpoint will be. Is it the same?
I believe we have working steps to remove fluentd from a cluster that already has it installed.
Delete the fluentd-elasticsearch addon (on master)
rm (or mv) /etc/kubernetes/addons/fluentd-elasticsearch/
Delete remnant ReplicationControllers:
kubectl --namespace=kube-system delete rc elasticsearch-logging-v1 kibana-logging-v1
Disable logging in salt-stack (on master). This will prevent nodes spawned in the future from having the fluentd static pod. In /srv/pillar/cluster-params.sls change existing settings to
enable_node_logging: 'false'
logging_destination: 'none'
salt '*' saltutil.clear_cache
salt '*' saltutil.sync_all
On existing nodes, manually remove the fluentd static pod
rm /etc/kubernetes/manifests/fluentd-es.yaml
###Instruction: We have a Kubernetes 1.1 cluster on AWS provisioned using kube-up.sh. Part of the base installation includes fluentd-elastisearch. We want to uninstall it. Specifically, we have been unsuccessful in removing the static pods running one-per-node. We do not use the Kubernetes-hosted fluentd-elastisearch, but instead use an externally hosted instance. As far as I can tell, fluentd-elastisearch is not required to run Kubernetes, and so I have been trying to remove it from our cluster. There seem to be two parts to the elastisearch setup. The first is the addon defined on the master in /etc/kubernetes/addons/fluentd-elasticsearch. We moved this file out of the addons directory and manually deleted the associated Replication Controllers. This leaves the static pods: kube-ac --namespace=kube-system get pods NAME READY STATUS RESTARTS AGE fluentd-elasticsearch-ip-10-0-5-105.us-west-2.compute.internal 1/1 Running 1 6d fluentd-elasticsearch-ip-10-0-5-124.us-west-2.compute.internal 1/1 Running 0 6d fluentd-elasticsearch-ip-10-0-5-180.us-west-2.compute.internal 1/1 Running 0 6d fluentd-elasticsearch-ip-10-0-5-231.us-west-2.compute.internal 1/1 Running 0 6d We believe the static pods are launched on each node due to the presence on each node of /etc/kubernetes/manifests/fluentd-es.yaml. This file appears to be placed by salt configuration /srv/pillar/cluster-params.sls which contains enable_node_logging: 'true'. We flipped the flag to 'false', killed the existing nodes, allowing new ones be provisioned via the Auto Scaling Group. Unfortunately the newly spawned hosts still have the static fluentd-elasticsearch pods. There are a couple of other possible files we think may be involved, on the master host: /var/cache/kubernetes-install/kubernetes/saltbase/salt/fluentd-es/fluentd-es.yaml /var/cache/salt/minion/files/base/fluentd-es/fluentd-es.yaml We are hitting a wall with our lack of salt experience. Pointers most welcome. ### Response: I believe we have working steps to remove fluentd from a cluster which already has it installed. Delete fluentd-elastisearch addon (on master) rm (or mv) /etc/kubernetes/addons/fluentd-elasticsearch/ Delete remnant ReplicationControllers: kubectl --namespace=kube-system delete rc elasticsearch-logging-v1 kibana-logging-v1 Disable logging in salt-stack (on master). This will prevent nodes spawned in the future from having the fluentd static pod. In /srv/pillar/cluster-params.sls change existing settings to enable_node_logging: 'false' logging_destination: 'none' salt '*' saltutil.clear_cache salt '*' saltutil.sync_all On existing nodes, manually remove the fluentd static pod rm /etc/kubernetes/manifests/fluentd-es.yaml
We have a Kubernetes 1.1 cluster on AWS provisioned using kube-up.sh. Part of the base installation includes fluentd-elastisearch. We want to uninstall it. Specifically, we have been unsuccessful in removing the static pods running one-per-node. We do not use the Kubernetes-hosted fluentd-elastisearch, but instead use an externally hosted instance. As far as I can tell, fluentd-elastisearch is not required to run Kubernetes, and so I have been trying to remove it from our cluster. There seem to be two parts to the elastisearch setup. The first is the addon defined on the master in /etc/kubernetes/addons/fluentd-elasticsearch. We moved this file out of the addons directory and manually deleted the associated Replication Controllers. This leaves the static pods: kube-ac --namespace=kube-system get pods NAME READY STATUS RESTARTS AGE fluentd-elasticsearch-ip-10-0-5-105.us-west-2.compute.internal 1/1 Running 1 6d fluentd-elasticsearch-ip-10-0-5-124.us-west-2.compute.internal 1/1 Running 0 6d fluentd-elasticsearch-ip-10-0-5-180.us-west-2.compute.internal 1/1 Running 0 6d fluentd-elasticsearch-ip-10-0-5-231.us-west-2.compute.internal 1/1 Running 0 6d We believe the static pods are launched on each node due to the presence on each node of /etc/kubernetes/manifests/fluentd-es.yaml. This file appears to be placed by salt configuration /srv/pillar/cluster-params.sls which contains enable_node_logging: 'true'. We flipped the flag to 'false', killed the existing nodes, allowing new ones be provisioned via the Auto Scaling Group. Unfortunately the newly spawned hosts still have the static fluentd-elasticsearch pods. There are a couple of other possible files we think may be involved, on the master host: /var/cache/kubernetes-install/kubernetes/saltbase/salt/fluentd-es/fluentd-es.yaml /var/cache/salt/minion/files/base/fluentd-es/fluentd-es.yaml We are hitting a wall with our lack of salt experience. Pointers most welcome.
https://github.com/kubernetes/ingress-gce#frontend-https You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https) Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
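As a sketch, the annotation goes on the Ingress metadata (names here are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Tell the GCE controller not to create the port 80 frontend
    kubernetes.io/ingress.allow-http: "false"
spec:
  backend:
    serviceName: my-service
    servicePort: 80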
###Instruction: Is there a way to force an SSL upgrade for incoming connections on the ingress load-balancer? Or if that is not possible, can I disable port :80? I haven't found good documentation that outlines such an option in the YAML file. Thanks a lot in advance! ### Response: https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
Is there a way to force an SSL upgrade for incoming connections on the ingress load-balancer? Or if that is not possible, can I disable port :80? I haven't found good documentation that outlines such an option in the YAML file. Thanks a lot in advance!
I don't think hack/local-up-cluster.sh is designed to be run on a cloud provider. However, cluster/kube-up.sh is designed to work when building from source: $ make release $ export KUBERNETES_PROVIDER=aws $ cluster/kube-up.sh # Uses the release built in step 1 There are lots of options which can be configured, and you can find more details here (just ignore the part about https://get.k8s.io).
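A few of those options can simply be exported before running cluster/kube-up.sh; treat these values as examples rather than a definitive list, since the supported variable names have shifted between releases:
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=us-west-2a
export NUM_NODES=3          # older releases call this NUM_MINIONS
export NODE_SIZE=m3.medium
cluster/kube-up.sh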
###Instruction: Is it possible to run kubernetes from source (./hack/local-up-cluster.sh) and still properly configure the cloud provider from this type of setup? For example, if an instance is running on AWS EC2 and all prerequisites are met including proper exports, aws cli and configs but keep getting an error stating that the cloud provider was not found. KUBERNETES_PROVIDER=aws, Zone is set to us-west-2a, etc... Failed to get AWS Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead ### Response: I don't think hack/local-up-cluster.sh is designed to be run on a cloud provider. However, cluster/kube-up.sh is designed to work when building from source: $ make release $ export KUBERNETES_PROVIDER=aws $ cluster/kube-up.sh # Uses the release built in step 1 There are lots of options which can be configured, and you can find more details here (just ignore the part about https://get.k8s.io).
Is it possible to run kubernetes from source (./hack/local-up-cluster.sh) and still properly configure the cloud provider from this type of setup? For example, if an instance is running on AWS EC2 and all prerequisites are met including proper exports, aws cli and configs but keep getting an error stating that the cloud provider was not found. KUBERNETES_PROVIDER=aws, Zone is set to us-west-2a, etc... Failed to get AWS Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead
I believe you're looking for Cluster Federation (a.k.a. "Ubernetes"). This feature is under active development, but exists as of 1.2 in a "lite" form: see the documentation for more details. However, as the documentation states, the current version cannot span clouds or regions, so will probably not work for your use case.
###Instruction: Environment background I currently have two vmware datacenters Kubernetes 1.2.2 running on coreos In datacenter 'L', I have a kubernetes cluster of 3 masters and 5 minions In datacenter 'O', I was planning on standing 5 minions only etcd is running on the 3 kubernetes masters In both datacenters I would have production and development pods dns addon is running in my cluster I have been using kubernetes over the last month. Ive read about labels, namespaces and selectors, but I am confused on the best design approach for my environment. Right now I have a 'dev' namespace and a 'prod' namespace. That works fine to split the prod and dev pods. But when I involve another datacenter it gets complicated for me. Lets say in both datacenters I would like to run a rabbitmq instance in a dev namespace. Its tricky because I wouldnt want resin in datacenter 'O' connecting to the 'L' rabbitmq instance over the WAN. I could add a label to all nodes in each datacenter and make two rabbitmq pods with logic to select a node in each datacenter. But if I was to stand up a dev resin pod, I would want that pod to connect to the rabbitmq instance in the same datacenter as the pod. I considered maybe making a pod template for each datacenter: resinO, resinL and having them respectively connect into a service of similar name rabbitmqO or rabbitmqL. I just didnt want to go too far down this path if there is a kubernetes feature that may help me and I have over looked it. ### Response: I believe you're looking for Cluster Federation (a.k.a. "Ubernetes"). This feature is under active development, but exists as of 1.2 in a "lite" form: see the documentation for more details. However, as the documentation states, the current version cannot span clouds or regions, so will probably not work for your use case.
Environment background I currently have two vmware datacenters Kubernetes 1.2.2 running on coreos In datacenter 'L', I have a kubernetes cluster of 3 masters and 5 minions In datacenter 'O', I was planning on standing 5 minions only etcd is running on the 3 kubernetes masters In both datacenters I would have production and development pods dns addon is running in my cluster I have been using kubernetes over the last month. Ive read about labels, namespaces and selectors, but I am confused on the best design approach for my environment. Right now I have a 'dev' namespace and a 'prod' namespace. That works fine to split the prod and dev pods. But when I involve another datacenter it gets complicated for me. Lets say in both datacenters I would like to run a rabbitmq instance in a dev namespace. Its tricky because I wouldnt want resin in datacenter 'O' connecting to the 'L' rabbitmq instance over the WAN. I could add a label to all nodes in each datacenter and make two rabbitmq pods with logic to select a node in each datacenter. But if I was to stand up a dev resin pod, I would want that pod to connect to the rabbitmq instance in the same datacenter as the pod. I considered maybe making a pod template for each datacenter: resinO, resinL and having them respectively connect into a service of similar name rabbitmqO or rabbitmqL. I just didnt want to go too far down this path if there is a kubernetes feature that may help me and I have over looked it.
Make sure that the Instance has the necessary OAuth2 scopes (changing the scopes will require re-creating the Instance). To run ./cluster/kube-up.sh, you most likely need compute scope (to create other Instances) and devstorage.full_control or devstorage.read_write (to push Kubernetes binaries to Google Cloud Storage). To simplify, your Instance can just have cloud-platform scope, which includes access to all Google Cloud APIs.
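For example (a sketch; the instance name and zone are placeholders), creating the Instance with the broad scope looks like this, since the scopes have to be set when the Instance is created:
gcloud compute instances create kube-workstation \
  --zone us-central1-b \
  --scopes cloud-platform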
###Instruction: I am getting some issues while creating a Kubernetes cluster on a Google Cloud instance. I am running below command for creating a Kubernetes Cluster. ./cluster/kube-up.sh Please see error below from the console: Creating gs://kubernetes-staging-82aa46c1a5 Creating gs://kubernetes-staging-82aa46c1a5/... AccessDeniedException: 403 Insufficient Permission How can I solve this error? ### Response: Make sure that the Instance has the necessary OAuth2 scopes (changing the scopes will require re-creating the Instance). To run ./cluster/kube-up.sh, you most likely need compute scope (to create other Instances) and devstorage.full_control or devstorage.read_write (to push Kubernetes binaries to Google Cloud Storage). To simplify, your Instance can just have cloud-platform scope, which includes access to all Google Cloud APIs.
I am getting some issues while creating a Kubernetes cluster on a Google Cloud instance. I am running below command for creating a Kubernetes Cluster. ./cluster/kube-up.sh Please see error below from the console: Creating gs://kubernetes-staging-82aa46c1a5 Creating gs://kubernetes-staging-82aa46c1a5/... AccessDeniedException: 403 Insufficient Permission How can I solve this error?
There isn't a fully supported way to reconfigure the kube-env in GKE. As you've found, you can hack the instance template, but this isn't guaranteed to work across upgrades. An alternative is to create your cluster without gcp logging enabled and then create a DaemonSet that places a fluentd-elasticsearch pod on each of your nodes. Using this technique you don't need to write a (brittle) startup script or rely on the fact that the built-in startup script happens to work when setting LOGGING_DESTINATION=elasticsearch (which may break across upgrades even if it wasn't getting overwritten).
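A rough sketch of that DaemonSet is below; the image tag is an assumption you would pin to whatever fluentd-elasticsearch image your cluster version ships, and it assumes your Elasticsearch endpoint is reachable from the pods:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google_containers/fluentd-elasticsearch:1.15
        # fluentd needs the node's container logs
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers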
###Instruction: We are using elasticsearch/kibana instead of gcp for logging (based on what is described here). To have fluentd-elsticsearch pod's launched we've set LOGGING_DESTINATION=elasticsearch and ENABLE_NODE_LOGGING="true" in the "Compute Instance Template" -> "Custom metadata" -> "kube-env". While this works fine when done manually it gets overwritten with every gcloud container clusters upgrade as a new Instance Template with defaults (LOGGING_DESTINATION=gcp ...) is created. My question is: How do I persist this kind of configuration for GKE/GCE? I thought about adding a k8s-user-startup-script but that's also defined in the Instance Template and therefore is overwritten by gcloud container clusters upgrade. I've also tried to add a k8s-user-startup-script to the project metadata but that is not taken into account. //EDIT Current workaround (without recreating Instance Template and Instances) for manually switching back to elasticsearch is: for node in $(kubectl get nodes -o name | cut -f2 -d/); do gcloud compute ssh $node \ --command="sudo cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/; sudo rm /etc/kubernetes/manifests/fluentd-gcp.yaml"; done kubelet will pick that up, kill fluentd-gcp and start fluentd-es. //EDIT #2 Now running a "startup-script" DaemonSet for this: kind: DaemonSet apiVersion: extensions/v1beta1 metadata: name: startup-script namespace: kube-system labels: app: startup-script spec: template: metadata: labels: app: startup-script spec: hostPID: true containers: - name: startup-script image: gcr.io/google-containers/startup-script:v1 securityContext: privileged: true env: - name: STARTUP_SCRIPT value: | #! /bin/bash set -o errexit set -o pipefail set -o nounset # Replace Google-Cloud-Logging with EFK if [[ ! -f /etc/kubernetes/manifests/fluentd-es.yaml ]]; then if [[ -f /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml ]]; then # GCI images cp -a /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml /etc/kubernetes/manifests/ elif [[ -f /srv/salt/fluentd-es/fluentd-es.yaml ]]; then # Debian based GKE images cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/ fi test -f /etc/kubernetes/manifests/fluentd-es.yaml && rm /etc/kubernetes/manifests/fluentd-gcp.yaml fi ### Response: There isn't a fully supported way to reconfigure the kube-env in GKE. As you've found, you can hack the instance template, but this isn't guaranteed to work across upgrades. An alternative is to create your cluster without gcp logging enabled and then create a DaemonSet that places a fluentd-elasticsearch pod on each of your nodes. Using this technique you don't need to write a (brittle) startup script or rely on the fact that the built-in startup script happens to work when setting LOGGING_DESTINATION=elasticsearch (which may break across upgrades even if it wasn't getting overwritten).
We are using elasticsearch/kibana instead of gcp for logging (based on what is described here). To have fluentd-elsticsearch pod's launched we've set LOGGING_DESTINATION=elasticsearch and ENABLE_NODE_LOGGING="true" in the "Compute Instance Template" -> "Custom metadata" -> "kube-env". While this works fine when done manually it gets overwritten with every gcloud container clusters upgrade as a new Instance Template with defaults (LOGGING_DESTINATION=gcp ...) is created. My question is: How do I persist this kind of configuration for GKE/GCE? I thought about adding a k8s-user-startup-script but that's also defined in the Instance Template and therefore is overwritten by gcloud container clusters upgrade. I've also tried to add a k8s-user-startup-script to the project metadata but that is not taken into account. //EDIT Current workaround (without recreating Instance Template and Instances) for manually switching back to elasticsearch is: for node in $(kubectl get nodes -o name | cut -f2 -d/); do gcloud compute ssh $node \ --command="sudo cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/; sudo rm /etc/kubernetes/manifests/fluentd-gcp.yaml"; done kubelet will pick that up, kill fluentd-gcp and start fluentd-es. //EDIT #2 Now running a "startup-script" DaemonSet for this: kind: DaemonSet apiVersion: extensions/v1beta1 metadata: name: startup-script namespace: kube-system labels: app: startup-script spec: template: metadata: labels: app: startup-script spec: hostPID: true containers: - name: startup-script image: gcr.io/google-containers/startup-script:v1 securityContext: privileged: true env: - name: STARTUP_SCRIPT value: | #! /bin/bash set -o errexit set -o pipefail set -o nounset # Replace Google-Cloud-Logging with EFK if [[ ! -f /etc/kubernetes/manifests/fluentd-es.yaml ]]; then if [[ -f /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml ]]; then # GCI images cp -a /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml /etc/kubernetes/manifests/ elif [[ -f /srv/salt/fluentd-es/fluentd-es.yaml ]]; then # Debian based GKE images cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/ fi test -f /etc/kubernetes/manifests/fluentd-es.yaml && rm /etc/kubernetes/manifests/fluentd-gcp.yaml fi
This is caused by the --enable-debugging-handlers flag being set to false, which prevents the kubelet from attaching to containers and fetching the logs. Restarting the kubelet without this flag (it defaults to true) should fix it.
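Concretely (a sketch; the binary path and every flag except the last one are just whatever your node already uses), the kubelet invocation should either drop the flag or set it explicitly:
/usr/local/bin/hyperkube kubelet \
  --api-servers=http://<master-ip>:8080 \
  --enable-debugging-handlers=true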
###Instruction: I'm trying to get logs from my pod, but it doesn't work for some reason though kubectl describe pod works well, docker logs works well. I have Kubernetes 1.2.3 Debian 8 x64 installed manually on a single node $ kubectl logs -f web-backend-alzc1 --namespace=my-namespace --v=6 round_trippers.go:286] GET http://localhost:8080/api 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/apis 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/api/v1/namespaces/my-namespace/pods/web-backend-alzc1 200 OK in 1 milliseconds round_trippers.go:286] GET http://localhost:8080/api 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/apis 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/api/v1/namespaces/my-namespace/pods/web-backend-alzc1/log?follow=true 404 Not Found in 1 milliseconds helpers.go:172] server response object: [{ "metadata": {}, "status": "Failure", "message": "the server could not find the requested resource ( pods/log web-backend-alzc1)", "reason": "NotFound", "details": { "name": "web-backend-alzc1", "kind": "pods/log" }, "code": 404 }] helpers.go:107] Error from server: the server could not find the requested resource ( pods/log web-backend-alzc1) Is there something I should describe in RC scheme to enable logs for this pod? I tried to recreate RC and look at journalctl, I see these messages: hyperkube[443]: I0510 12:14:13.754922 443 hairpin.go:51] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH hyperkube[443]: I0510 12:14:13.756866 443 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider hyperkube[435]: W0510 12:14:38.835863 435 request.go:344] Field selector: v1 - serviceaccounts - metadata.name - default: need to check if this is versioned correctly. ### Response: This is caused by the --enable-debugging-handlers flag being set to false, which prevents the kubelet from attaching to containers and fetching the logs. Restarting the kubelet without this flag (it defaults to true) should fix it.
I'm trying to get logs from my pod, but it doesn't work for some reason though kubectl describe pod works well, docker logs works well. I have Kubernetes 1.2.3 Debian 8 x64 installed manually on a single node $ kubectl logs -f web-backend-alzc1 --namespace=my-namespace --v=6 round_trippers.go:286] GET http://localhost:8080/api 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/apis 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/api/v1/namespaces/my-namespace/pods/web-backend-alzc1 200 OK in 1 milliseconds round_trippers.go:286] GET http://localhost:8080/api 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/apis 200 OK in 0 milliseconds round_trippers.go:286] GET http://localhost:8080/api/v1/namespaces/my-namespace/pods/web-backend-alzc1/log?follow=true 404 Not Found in 1 milliseconds helpers.go:172] server response object: [{ "metadata": {}, "status": "Failure", "message": "the server could not find the requested resource ( pods/log web-backend-alzc1)", "reason": "NotFound", "details": { "name": "web-backend-alzc1", "kind": "pods/log" }, "code": 404 }] helpers.go:107] Error from server: the server could not find the requested resource ( pods/log web-backend-alzc1) Is there something I should describe in RC scheme to enable logs for this pod? I tried to recreate RC and look at journalctl, I see these messages: hyperkube[443]: I0510 12:14:13.754922 443 hairpin.go:51] Unable to find pair interface, setting up all interfaces: exec: "ethtool": executable file not found in $PATH hyperkube[443]: I0510 12:14:13.756866 443 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider hyperkube[435]: W0510 12:14:38.835863 435 request.go:344] Field selector: v1 - serviceaccounts - metadata.name - default: need to check if this is versioned correctly.
Google Container Engine doesn't support custom startup scripts for nodes. As I mentioned in Recommended way to persistently change kube-env variables you can use a DaemonSet to customize your nodes. A DaemonSet running in privileged mode can do pretty much anything that you could do with a startup script, with the caveat that it is done slightly later in the node bring-up lifecycle. Since a DaemonSet will run on all nodes in your cluster, it will be automatically applied to any new nodes that join (via cluster resize) and because it is a Kubernetes API object, it will be persisted across OS upgrades.
###Instruction: This is like a follow-up question of: Recommended way to persistently change kube-env variables I was playing around with the possibility to define a k8s-user-startup-script for GKE instances (I want to install additional software to each node). Adding k8s-user-startup-script to an Instance Group Template "Custom Metadata" works, but that is overwritten by gcloud container clusters upgrade which creates a new Instance Template without "inheriting" the additional k8s-user-startup-script Metadata from the current template. I've also tried to add a k8s-user-startup-script to the project metadata (I thought that would be inherited by all instances of my project like described here) but that is not taken into account. What is the correct way to define a k8s-user-startup-script that persists cluster upgrades? Or, more general, what is the desired way to customize the GKE nodes? ### Response: Google Container Engine doesn't support custom startup scripts for nodes. As I mentioned in Recommended way to persistently change kube-env variables you can use a DaemonSet to customize your nodes. A DaemonSet running in privileged mode can do pretty much anything that you could do with a startup script, with the caveat that it is done slightly later in the node bring-up lifecycle. Since a DaemonSet will run on all nodes in your cluster, it will be automatically applied to any new nodes that join (via cluster resize) and because it is a Kubernetes API object, it will be persisted across OS upgrades.
This is like a follow-up question of: Recommended way to persistently change kube-env variables I was playing around with the possibility to define a k8s-user-startup-script for GKE instances (I want to install additional software to each node). Adding k8s-user-startup-script to an Instance Group Template "Custom Metadata" works, but that is overwritten by gcloud container clusters upgrade which creates a new Instance Template without "inheriting" the additional k8s-user-startup-script Metadata from the current template. I've also tried to add a k8s-user-startup-script to the project metadata (I thought that would be inherited by all instances of my project like described here) but that is not taken into account. What is the correct way to define a k8s-user-startup-script that persists cluster upgrades? Or, more general, what is the desired way to customize the GKE nodes?
The Kubernetes NodeStatus has a field called Capacity, which is a list of resources with their corresponding amounts. You can also see the Capacity in the output of kubectl describe nodes e.g. $ kubectl describe nodes my-node-1 Name: my-node-1 ... Capacity: cpu: 1 memory: 3801020Ki pods: 110
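Since the question asks for programmatic access, the same fields can be pulled straight out of the node object; a small sketch, assuming jq is available and adjusting the API endpoint and auth for your setup:
# Capacity as reported in the NodeStatus
kubectl get node my-node-1 -o json | jq '.status.capacity'
# Or directly against the API server (insecure local port shown; yours may require TLS and a token)
curl -s http://<master-ip>:8080/api/v1/nodes/my-node-1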
###Instruction: Is there a way to obtain hardware information (e.g. number of CPU cores, capacity of RAM) of an OpenShift 3.0 node programmatically? I could not find anything useful in the API references for OpenShift or Kubernetes (except for NodeSystemInfo in the Kubernetes API, which does not contain most of the hardware-level specs). ### Response: The Kubernetes NodeStatus has a field called Capacity, which is a list of resources with their corresponding amounts. You can also see the Capacity in the output of kubectl describe nodes e.g. $ kubectl describe nodes my-node-1 Name: my-node-1 ... Capacity: cpu: 1 memory: 3801020Ki pods: 110
Is there a way to obtain hardware information (e.g. number of CPU cores, capacity of RAM) of an OpenShift 3.0 node programmatically? I could not find anything useful in the API references for OpenShift or Kubernetes (except for NodeSystemInfo in the Kubernetes API, which does not contain most of the hardware-level specs).
Today it is not feasible to run Kubernetes natively within the Bluemix platform. We are investigating the ability to support the native API and CLI so please check back for future announcements in this space.
###Instruction: I am a newbie in Kubernetes. I didn't find an option for IBM Bluemix containers in your_provider for cluster configuration. (I am referring to this link.) Could you please tell me: is it possible to integrate Bluemix with Kubernetes or not? If so, what would be a good starting point? ### Response: Today it is not feasible to run Kubernetes natively within the Bluemix platform. We are investigating the ability to support the native API and CLI, so please check back for future announcements in this space.
I am a newbie in Kubernetes. I didn't find an option for IBM Bluemix containers in your_provider for cluster configuration. (I am referring to this link.) Could you please tell me: is it possible to integrate Bluemix with Kubernetes or not? If so, what would be a good starting point?
As documented in this doc, there are two factors that affect the reaction time of the autoscaler: --horizontal-pod-autoscaler-sync-period, which defines how often the autoscaler checks the status of the controlled resources. The default value is 30s, and it can be changed via that flag on the controller-manager. upscaleForbiddenWindow, which defines how often the autoscaler is allowed to scale the resource up. The default value is 3 minutes, and it is currently not adjustable. According to the log you pasted, if the load is stable, the autoscaler should have reacted within 30s after CPU usage reached 55%; is that the case?
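For reference, a sketch of how the sync period could be shortened on a self-managed master; the flag name is as documented, but how you pass flags to the controller-manager depends on your installation:

kube-controller-manager ... --horizontal-pod-autoscaler-sync-period=10s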
###Instruction: I have used following command for autoscaling. kubectl autoscale deployment catch-node --cpu-percent=50 --min=1 --max=10 The status of autoscaling in my case on load test is as like below . 27th minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 20% 1 10 27m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 1 1 1 1 27m 29th minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 35% 1 10 29m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 1 1 1 1 29m 31st minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 55% 1 10 31m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 1 1 1 1 31m 34th minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 190% 1 10 34m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 4 4 4 4 34m Here i am getting connection refusing error in the time between transition of 1 pod to 4pods on autoscaling. Please let me know how much time it will take to bring up new pods once it exceed the CPU % limit given during autoscale .Also please let me know is there any method to reduce this time .once all new pods comes up, the issue is not there . Thanks in advance ### Response: As documented in this doc, there are two factors affect the reaction time of the autoscaler: --horizontal-pod-autoscaler-sync-period, which defines how often the autoscaler checks the status of the controlled resources. The default value is 30s. It can be changed via the flag of the controller-manager. upscaleForbiddenWindow, which defines how often the autoscaler can scale up the resource. The default value is 3 mins. Currently it's not adjustable. According to the log you pasted, if the load is stable, the autoscaler should reacted in 30s after CPU usage reaches 55%, is that the case?
I have used following command for autoscaling. kubectl autoscale deployment catch-node --cpu-percent=50 --min=1 --max=10 The status of autoscaling in my case on load test is as like below . 27th minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 20% 1 10 27m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 1 1 1 1 27m 29th minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 35% 1 10 29m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 1 1 1 1 29m 31st minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 55% 1 10 31m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 1 1 1 1 31m 34th minute NAME REFERENCE TARGET CURRENT MINPODS MAXPODS AGE catch-node Deployment/catch-node/scale 50% 190% 1 10 34m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE catch-node 4 4 4 4 34m Here i am getting connection refusing error in the time between transition of 1 pod to 4pods on autoscaling. Please let me know how much time it will take to bring up new pods once it exceed the CPU % limit given during autoscale .Also please let me know is there any method to reduce this time .once all new pods comes up, the issue is not there . Thanks in advance
You can always interactively edit the resources in your cluster. For your autoscale controller called web, you can edit it via: kubectl edit hpa web If you're looking for a more programmatic way to update your horizontal pod autoscaler, you would have better luck describing your autoscaler entity in a yaml file, as well. For example, here's a simple Replication Controller, paired with a Horizontal Pod Autoscale entity: apiVersion: v1 kind: ReplicationController metadata: name: nginx spec: replicas: 2 template: metadata: labels: run: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: nginx namespace: default spec: maxReplicas: 3 minReplicas: 2 scaleTargetRef: apiVersion: v1 kind: ReplicationController name: nginx With those contents in a file called nginx.yaml, updating the autoscaler could be done via kubectl apply -f nginx.yaml.
###Instruction: I have created a Kubernetes autoscaler, but I need to change its parameters. How do I update it? I've tried the following, but it fails: βœ— kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6 Error from server: horizontalpodautoscalers.extensions "web" already exists ### Response: You can always interactively edit the resources in your cluster. For your autoscale controller called web, you can edit it via: kubectl edit hpa web If you're looking for a more programmatic way to update your horizontal pod autoscaler, you would have better luck describing your autoscaler entity in a yaml file, as well. For example, here's a simple Replication Controller, paired with a Horizontal Pod Autoscale entity: apiVersion: v1 kind: ReplicationController metadata: name: nginx spec: replicas: 2 template: metadata: labels: run: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 --- apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: nginx namespace: default spec: maxReplicas: 3 minReplicas: 2 scaleTargetRef: apiVersion: v1 kind: ReplicationController name: nginx With those contents in a file called nginx.yaml, updating the autoscaler could be done via kubectl apply -f nginx.yaml.
I have created a Kubernetes autoscaler, but I need to change its parameters. How do I update it? I've tried the following, but it fails: ✗ kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6 Error from server: horizontalpodautoscalers.extensions "web" already exists
Are you just going to have a single prod cluster or multiple prod clusters? One thing to consider is that updating the cluster management software (to a new k8s release) can impact your application. If you only plan to have a single prod cluster, I'd recommend running qa and dev separately so that you can upgrade those clusters first to shake out any issues. If you are going to have multiple prod clusters, then you can upgrade them one at a time to ensure application availability, and sharing the clusters between environments makes a lot more sense.
###Instruction: I am working on a new project with Kubernetes and I need three environments: DEV,QA and PROD. What is most recommended, create Multiple Clusters or create one big cluster separating environments by namespace. ### Response: Are you just going to have a single prod cluster or multiple prod clusters? One thing to consider is that updating the cluster management software (to a new k8s release) can impact your application. If you only plan to have a single prod cluster, I'd recommend running qa and dev separately so that you can upgrade those clusters first to shake out any issues. If you are going to have multiple prod clusters, then you can upgrade them one at a time to ensure application availability and sharing the clusters between environments makes a lot more sense.
I am working on a new project with Kubernetes and I need three environments: DEV, QA and PROD. What is most recommended: creating multiple clusters, or creating one big cluster and separating the environments by namespace?
The issue is that your kubeconfig is not set up correctly. To generate it automatically, run: gcloud container clusters get-credentials "CLUSTER NAME" This worked for me.
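A sketch of the full sequence, assuming a cluster named hello-cluster in zone us-central1-b (substitute your own values):

gcloud container clusters get-credentials hello-cluster --zone us-central1-b
kubectl config current-context   # should now point at the GKE cluster
kubectl cluster-info             # verifies the connection to the apiserver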
###Instruction: I have followed the helloword tutorial on http://kubernetes.io/docs/hellonode/. When I run: kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 I get: The connection to the server localhost:8080 was refused - did you specify the right host or port? Why does the command line try to connect to the localhost? ### Response: The issue is that your kubeconfig is not right. To auto-generate it run: gcloud container clusters get-credentials "CLUSTER NAME" This worked for me.
I have followed the helloword tutorial on http://kubernetes.io/docs/hellonode/. When I run: kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 I get: The connection to the server localhost:8080 was refused - did you specify the right host or port? Why does the command line try to connect to the localhost?
The most direct (and interactive) way should be to execute kubectl edit secret <my secret>. Run kubectl get secrets if you'd like to see the list of secrets managed by Kubernetes.
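If you prefer a non-interactive update, one approach (sketched here with a hypothetical secret called my-secret and a key called password) is to patch the base64-encoded value directly:

kubectl patch secret my-secret -p "{\"data\":{\"password\":\"$(echo -n 'newpassword' | base64)\"}}"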
###Instruction: How can I modify the values in a Kubernetes secret using kubectl? I created the secret with kubectl create secret generic, but there does not seem to be a way to modify a secret: for example, to add a new secret value to it, or to change an existing one. I assume I can go 'low-level', write the YAML file and do a kubectl edit, but I hope there is a simpler way. (I'm using Kubernetes 1.2.x.) ### Response: The most direct (and interactive) way should be to execute kubectl edit secret <my secret>. Run kubectl get secrets if you'd like to see the list of secrets managed by Kubernetes.
How can I modify the values in a Kubernetes secret using kubectl? I created the secret with kubectl create secret generic, but there does not seem to be a way to modify a secret: for example, to add a new secret value to it, or to change an existing one. I assume I can go 'low-level', write the YAML file and do a kubectl edit, but I hope there is a simpler way. (I'm using Kubernetes 1.2.x.)
You need to register the API in the APIServer code. See: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/master.go#L245 for an example for v1beta1.
###Instruction: I'm working to add port range support to kubernetes so I need to modify the API and schemas to support a new parameter: portrange instead of the traditional port. To do it, I've follow the official guideline but yet, I'm blocked because when I try to execute: cluster/kubectl.sh create -f restcomm_rc_range.yml returns: error validating "restcomm_rc_range.yml": error validating data: API version "ranges/v2 alpha1" isn't supported, only supports API versions ["extensions/v1beta1" "authentication.k8s.io/v1beta1" "v1" "apps/v1alpha1" "autoscaling/v1" "metrics/v1alpha1" "policy/v1alpha1" "authorization.k8s.io/v1beta1" "batch/v1" "batch/v2alpha1" "componentconfig/v1alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false I've previously executed: ./hack/update-all.sh -a && ./hack/build-go.sh ./hack/local-up-cluster.sh And I've modified all the files, but clearly I'm missing some step here to register the API in the API Server. Any guidance about how to do it? Is this solution suitable for the issue? ### Response: You need to register the API in the APIServer code. See: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/master.go#L245 for an example for v1beta1.
I'm working to add port range support to kubernetes so I need to modify the API and schemas to support a new parameter: portrange instead of the traditional port. To do it, I've follow the official guideline but yet, I'm blocked because when I try to execute: cluster/kubectl.sh create -f restcomm_rc_range.yml returns: error validating "restcomm_rc_range.yml": error validating data: API version "ranges/v2 alpha1" isn't supported, only supports API versions ["extensions/v1beta1" "authentication.k8s.io/v1beta1" "v1" "apps/v1alpha1" "autoscaling/v1" "metrics/v1alpha1" "policy/v1alpha1" "authorization.k8s.io/v1beta1" "batch/v1" "batch/v2alpha1" "componentconfig/v1alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false I've previously executed: ./hack/update-all.sh -a && ./hack/build-go.sh ./hack/local-up-cluster.sh And I've modified all the files, but clearly I'm missing some step here to register the API in the API Server. Any guidance about how to do it? Is this solution suitable for the issue?
Cleaning up old ReplicaSets is handled by the Deployment object itself, and it is configurable. You can set .spec.revisionHistoryLimit to tell the Deployment how many old ReplicaSets to keep around. Here is a YAML example: apiVersion: apps/v1 kind: Deployment # ... spec: # ... revisionHistoryLimit: 0 # Defaults to 10 if not specified # ...
###Instruction: Every time a deployment gets updated, a new replica set is added to a long list. Should the old rs be cleaned? ### Response: Removing old replicasets is part of the Deployment object, but it is optional. You can set .spec.revisionHistoryLimit to tell the Deployment how many old replicasets to keep around. Here is a YAML example: apiVersion: apps/v1 kind: Deployment # ... spec: # ... revisionHistoryLimit: 0 # Default to 10 if not specified # ...
Every time a Deployment gets updated, a new ReplicaSet is added to a growing list. Should the old ReplicaSets be cleaned up?
The disclaimer at the bottom of that section explains why it won't work by default in GKE: Note that autoscaling will work properly only if node metrics are accessible in Google Cloud Monitoring. To make the metrics accessible, you need to create your cluster with KUBE_ENABLE_CLUSTER_MONITORING equal to google or googleinfluxdb (googleinfluxdb is the default value). Please also make sure that you have Google Cloud Monitoring API enabled in Google Developer Console. You might be able to get it working by standing up a heapster instance in your cluster configured with --sink=gcm (like this), but I think it was more of an older proof of concept than a well-maintained, production-grade configuration. The community is working hard on a better, more-fully-supported version of node autoscaling in the upcoming 1.3 release.
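For reference, a rough sketch of the two options; the environment variable applies when you bring the cluster up yourself with kube-up.sh rather than through GKE, and the heapster flags follow the linked proof-of-concept configuration, so treat both as assumptions to verify against your setup:

# self-managed cluster on GCE:
KUBE_ENABLE_CLUSTER_MONITORING=google ./cluster/kube-up.sh

# or run a heapster instance inside the cluster that pushes to Google Cloud Monitoring:
heapster --source=kubernetes --sink=gcm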
###Instruction: According to Kubernetes documentation, If you are using GCE, you can configure your cluster so that the number of nodes will be automatically scaled based on: CPU and memory utilization. Amount of of CPU and memory requested by the pods (called also reservation). Is this actually true? I am running mainly Jobs on my cluster, and would like to spin up new instances to service them on demand. CPU usage doesn't work well as a scaling metric for this workload. From Google's CKE documentation, however, this only appears to be possible by using Cloud Monitoring metrics -- relying on a third-party service that you then have to customize. This seems like a perplexing gap in basic functionality that Kubernetes itself claims to support. Is there any simpler way to achieve the very simple goal of having the GCE instance group autoscale based on the CPU requirements that I'm quite explictly specifying in my GKE Jobs? ### Response: The disclaimer at the bottom of that section explains why it won't work by default in GKE: Note that autoscaling will work properly only if node metrics are accessible in Google Cloud Monitoring. To make the metrics accessible, you need to create your cluster with KUBE_ENABLE_CLUSTER_MONITORING equal to google or googleinfluxdb (googleinfluxdb is the default value). Please also make sure that you have Google Cloud Monitoring API enabled in Google Developer Console. You might be able to get it working by standing up a heapster instance in your cluster configured with --sink=gcm (like this), but I think it was more of an older proof of concept than a well-maintained, production-grade configuration. The community is working hard on a better, more-fully-supported version of node autoscaling in the upcoming 1.3 release.
According to the Kubernetes documentation, if you are using GCE, you can configure your cluster so that the number of nodes will be automatically scaled based on: CPU and memory utilization. Amount of CPU and memory requested by the pods (also called reservation). Is this actually true? I am running mainly Jobs on my cluster, and would like to spin up new instances to service them on demand. CPU usage doesn't work well as a scaling metric for this workload. From Google's GKE documentation, however, this only appears to be possible by using Cloud Monitoring metrics -- relying on a third-party service that you then have to customize. This seems like a perplexing gap in basic functionality that Kubernetes itself claims to support. Is there any simpler way to achieve the very simple goal of having the GCE instance group autoscale based on the CPU requirements that I'm quite explicitly specifying in my GKE Jobs?
Namespaces will not give you isolation; at the moment a namespace is essentially just a different subdomain in DNS. It's better to have a namespace per application. I highly recommend having two clusters for prod (so that Kubernetes upgrades are safer) and one or two for dev/qa.
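As a sketch, with hypothetical application names, that layout would look something like this (the same namespaces repeated in each cluster, one per application):

# in the prod cluster
kubectl create namespace app-frontend
kubectl create namespace app-backend
# in the dev/qa cluster(s), create the same per-application namespaces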
###Instruction: I am working on a new project with Kubernetes and I need three environments: DEV, QA and PROD. What is most recommended: creating multiple clusters, or creating one big cluster and separating the environments by namespace? ### Response: Namespaces will not give you isolation; at the moment a namespace is essentially just a different subdomain in DNS. It's better to have a namespace per application. I highly recommend having two clusters for prod (so that Kubernetes upgrades are safer) and one or two for dev/qa.
I am working on a new project with Kubernetes and I need three environments: DEV, QA and PROD. What is most recommended: creating multiple clusters, or creating one big cluster and separating the environments by namespace?
With a lot of help from the Kubernetes IRC and Slack channels, I fixed this a while back. If I remember correctly, I had the ingress service listening on a port that was already in use, I think by Vagrant. These commands really help: kubectl get pod <nginx-ingress pod> -o json kubectl exec <nginx-ingress pod> -- cat /etc/nginx/nginx.conf kubectl get pods -o wide kubectl logs <nginx-ingress pod> --previous
###Instruction: Is there anything special about running ingress controllers on Kubernetes CoreOS Vagrant Multi-Machine? I followed the example but when I run kubectl -f I do not get an address. Example: http://kubernetes.io/v1.1/docs/user-guide/ingress.html#single-service-ingress Setup: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html I looked at networking in kubernetes. Everything looks like it should run without further configuration. My goal is to create a local testing environment before I build out a production platform. I'm thinking there's something about how they setup their virtualbox networking. I'm about to dive into the CoreOS cloud config but thought I would ask first. UPDATE Yes I'm running an ingress controller. https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/nginx-alpha/rc.yaml It runs without giving an error. It's just when I run kubectl -f I do not get an address. I'm thinking there's either two things: I have to do something extra in networking for CoreOS-Kubernetes vagrant multi-node. It's running right, but I'm point my localhost to the wrong IP. I'm using a 172.17.4.x ip, I also have 10.0.0.x . I can access services through the 172.17.4.x using a NodePort, but I can get to my Ingress. Here is the code: apiVersion: v1 kind: ReplicationController metadata: name: nginx-ingress labels: app: nginx-ingress spec: replicas: 1 selector: app: nginx-ingress template: metadata: labels: app: nginx-ingress spec: containers: - image: gcr.io/google_containers/nginx-ingress:0.1 imagePullPolicy: Always name: nginx ports: - containerPort: 80 hostPort: 80 Update 2 Output of commands: kubectl get pods NAME READY STATUS RESTARTS AGE echoheaders-kkja7 1/1 Running 0 24m nginx-ingress-2wwnk 1/1 Running 0 25m kubectl logs nginx-ingress-2wwnk --previous Pod "nginx-ingress-2wwnk" in namespace "default": previous terminated container "nginx" not found kubectl exec nginx-ingress-2wwnk -- cat /etc/nginx/nginx.conf events { worker_connections 1024; } http { }% I'm running an echoheaders service on NodePort. When I type the node IP and port on my browser, I get that just fine. I restarted all nodes in virtualbox too. ### Response: With a lot help from kubernetes irc and slack, I fixed this a while back. If I remember correctly, I had the ingress service listening on a port that was already being used, I think for vagrant. These commands really help: kubectl get pod <nginx-ingress pod> -o json kubectl exec <nginx-ingress pod> -- cat /etc/nginx/nginx.conf kubectl get pods -o wide kubectl logs <nginx-ingress pod> --previous
Is there anything special about running ingress controllers on Kubernetes CoreOS Vagrant Multi-Machine? I followed the example but when I run kubectl -f I do not get an address. Example: http://kubernetes.io/v1.1/docs/user-guide/ingress.html#single-service-ingress Setup: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html I looked at networking in kubernetes. Everything looks like it should run without further configuration. My goal is to create a local testing environment before I build out a production platform. I'm thinking there's something about how they setup their virtualbox networking. I'm about to dive into the CoreOS cloud config but thought I would ask first. UPDATE Yes I'm running an ingress controller. https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/nginx-alpha/rc.yaml It runs without giving an error. It's just when I run kubectl -f I do not get an address. I'm thinking there's either two things: I have to do something extra in networking for CoreOS-Kubernetes vagrant multi-node. It's running right, but I'm point my localhost to the wrong IP. I'm using a 172.17.4.x ip, I also have 10.0.0.x . I can access services through the 172.17.4.x using a NodePort, but I can get to my Ingress. Here is the code: apiVersion: v1 kind: ReplicationController metadata: name: nginx-ingress labels: app: nginx-ingress spec: replicas: 1 selector: app: nginx-ingress template: metadata: labels: app: nginx-ingress spec: containers: - image: gcr.io/google_containers/nginx-ingress:0.1 imagePullPolicy: Always name: nginx ports: - containerPort: 80 hostPort: 80 Update 2 Output of commands: kubectl get pods NAME READY STATUS RESTARTS AGE echoheaders-kkja7 1/1 Running 0 24m nginx-ingress-2wwnk 1/1 Running 0 25m kubectl logs nginx-ingress-2wwnk --previous Pod "nginx-ingress-2wwnk" in namespace "default": previous terminated container "nginx" not found kubectl exec nginx-ingress-2wwnk -- cat /etc/nginx/nginx.conf events { worker_connections 1024; } http { }% I'm running an echoheaders service on NodePort. When I type the node IP and port on my browser, I get that just fine. I restarted all nodes in virtualbox too.
The kubelet talks to the Docker daemon using the Docker API over the Docker socket. You can override this with the --docker-endpoint= argument to the kubelet. Pods may fail to be spawned for any number of reasons. Check the logs of your scheduler, controller-manager and kubelet.
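A sketch of where to look, assuming a systemd-based node (the unit names are an assumption about your install):

# override the endpoint the kubelet uses to reach Docker
kubelet --docker-endpoint=unix:///var/run/docker.sock ...

# then check component logs and pod events for the real failure
journalctl -u kubelet
journalctl -u kube-scheduler
journalctl -u kube-controller-manager
kubectl describe pod <pod-name>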
###Instruction: I wondered how the kubelet communicates with Docker containers. Where is this configuration defined? I searched a lot but didn't find anything informative. I am using an HTTPS kube API server. I am able to create pods, but containers are not getting spawned. Does anyone know what the cause may be? Thanks in advance. ### Response: The kubelet talks to the Docker daemon using the Docker API over the Docker socket. You can override this with the --docker-endpoint= argument to the kubelet. Pods may fail to be spawned for any number of reasons. Check the logs of your scheduler, controller-manager and kubelet.
I wondered how the kubelet communicates with Docker containers. Where is this configuration defined? I searched a lot but didn't find anything informative. I am using an HTTPS kube API server. I am able to create pods, but containers are not getting spawned. Does anyone know what the cause may be? Thanks in advance.
You need to set the clusterIP to an IP in the "SERVICE_CLUSTER_IP_RANGE", which is a CIDR range (typically a /16 or smaller) passed to the kube-apiserver process in the --service-cluster-ip-range flag. If you pick any address in that range you should no longer receive the above error.
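For example, if the apiserver was started with --service-cluster-ip-range=10.3.0.0/24 (a value assumed here for illustration), the service would need a clusterIP inside that block:

# on the master
kube-apiserver ... --service-cluster-ip-range=10.3.0.0/24

# in skydns-svc.yml
  clusterIP: 10.3.0.10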
###Instruction: After writing a skydns-svc.yml file with the IP 192.168.3.10 I recieve the following error: Error: The Service "kube-dns" is invalid:spec.clusterIP: Invalid value: "192.168.3.10": provided IP is not in the valid range skydns-svc.yml apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 192.168.3.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP How is this not in a valid range and how could I go about resolving the error? Using V1.2 ### Response: You need to set the clusterIP to an IP in the "SERVICE_CLUSTER_IP_RANGE", which is a CIDR range (typically a /16 or smaller) passed to the kube-apiserver process in the --service-cluster-ip-range flag. If you pick any address in that range you should no longer receive the above error.
After writing a skydns-svc.yml file with the IP 192.168.3.10 I recieve the following error: Error: The Service "kube-dns" is invalid:spec.clusterIP: Invalid value: "192.168.3.10": provided IP is not in the valid range skydns-svc.yml apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 192.168.3.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP How is this not in a valid range and how could I go about resolving the error? Using V1.2
This is a real problem, and Kubernetes doesn't have Pod QoS guarantees yet. To be completely safe, your cluster should be big enough to handle any expected cluster shrinkage, but that's not always practical. At the moment, manually shrinking the competing, lower-priority deployments would probably be the easiest way to get a cluster back working. There is work being done trying to get Pod QoS policies into Kubernetes. You can follow along/chime in on https://github.com/kubernetes/kubernetes/pull/14943
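In the meantime, a sketch of the manual workaround (the deployment and replication controller names are hypothetical):

# free up room so the kube-system pods can be scheduled
kubectl scale deployment my-low-priority-app --replicas=2
kubectl scale rc my-low-priority-rc --replicas=2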
###Instruction: Imagine this hypothetical situation (that just bit me in practice): All worker instances in a Kubernetes cluster die (say due to a spot price fluctuations), and a new one comes back automatically. The scheduler then attempts to schedule pods onto the node in some arbitrary order but they can't all fit because the number of nodes is smaller than before. All default namespace pods make it on but the kube-system namespace DNS pod doesn't Now most everything trying to run on the cluster is hung because there's no DNS on the cluster. Is there any way to use the QoS tiers in Kubernetes to get the scheduler to proritize scheduling the kube-system pods before the other namespaces? Or is there some other way I should be fixing this problem? ### Response: This is a real problem, and Kubernetes doesn't have Pod QoS guarantees yet. To be completely safe, your cluster should be big enough to handle any expected cluster shrinkage, but that's not always practical. At the moment, manually shrinking the competing, lower-priority deployments would probably be the easiest way to get a cluster back working. There is work being done trying to get Pod QoS policies into Kubernetes. You can follow along/chime in on https://github.com/kubernetes/kubernetes/pull/14943
Imagine this hypothetical situation (which just bit me in practice): All worker instances in a Kubernetes cluster die (say, due to spot price fluctuations), and a new one comes back automatically. The scheduler then attempts to schedule pods onto the node in some arbitrary order, but they can't all fit because the number of nodes is smaller than before. All default namespace pods make it on, but the kube-system namespace DNS pod doesn't. Now almost everything trying to run on the cluster is hung because there's no DNS on the cluster. Is there any way to use the QoS tiers in Kubernetes to get the scheduler to prioritize scheduling the kube-system pods before the other namespaces? Or is there some other way I should be fixing this problem?
You can tell Kubernetes to put the pod name in an environment variable of your choice using the downward API. For example: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP restartPolicy: Never
###Instruction: Is there a way to programmatically get the name of the pod that a container belongs to in Kubernetes? If so how? I'm using fabric8's java client but curl or something similar will be fine as well. Note that I don't want to find the pod using a specific label since then (I assume) I may not always find the right pod if it's scaled with a replication controller. ### Response: You can tell Kubernetes to put the pod name in an environment variable of your choice using the downward API. For example: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP restartPolicy: Never
Is there a way to programmatically get the name of the pod that a container belongs to in Kubernetes? If so how? I'm using fabric8's java client but curl or something similar will be fine as well. Note that I don't want to find the pod using a specific label since then (I assume) I may not always find the right pod if it's scaled with a replication controller.
Google's Network (L3) load balancing is specifically per-region (these are the load balancers that are automatically configured if you create a service of type LoadBalancer). As Alex mentioned in his answer, if you use network load balancing you will need to configure one load balancer per region and then use DNS to spread user requests to each of your load balancers. Google's HTTP(S) load balancing is cross-region (i.e. global). This means that you get a single IP that will balance across all of your HTTP(S) backends, which can be spread across multiple clusters in multiple regions. For cross cluster load balancing, you must configure the HTTP(S) load balancer yourself as described in Is it possible to use 1 Kubernetes ingress object to route traffic to k8s services in different clusters?. In either case, you will need to create a different service for each URL path that you want to route to a unique backend. The services don't have to use different pods, although you may want to if they receive different amounts of traffic and you want to scale them independently. If you use the HTTP(S) load balancer, you can define these services and the URL mapping as part of the load balancer configuration and let the HTTP(S) balancer do the request inspection / routing for you. If you use the network load balancer, then you will need to run an HTTP(S) server yourself that terminates the connection, inspects the request, and routes it to the appropriate service. Instead of all this, can I actually get a Kubernetes cluster to span different regions? Not out of the box. You can configure a multi-zone cluster (within a region), but we don't offer explicit support for configuring a cluster that spans regions. While you could do this manually yourself, we don't recommend it as there are many parameters baked into the cluster management software that have been tuned with the assumption of low-latency communication between the master and nodes within the cluster.
###Instruction: How do I achieve cross-region load balancing on Google Container Engine? I will have one Kubernetes cluster per region in several regions and I need to route traffic from a single domain name to the geographically closest cluster. Some options I've investigated: Kubernetes LoadBalancers seem to be restricted to one cluster. I'm not sure how you get Kubernetes Ingress to talk to different clusters. (It sounds like this object is backed by Compute Engine HTTP load balancers though.) Compute Engine HTTP Load Balancers talking to exposed clusters sounds right, but the link I referenced seems to have some old terms like gcloud beta. Instead of all this, can I actually get a Kubernetes cluster to span different regions? Now if I want to route different URL paths to different containers within a pod, where do I do that? If it's at the Ingress or HTTP Load Balancer level, then I don't have enough granularity to address particular containers. Does that mean I would have to use a different pod + service for each different URL path? ### Response: Google's Network load balancing (L3) load balancing is specifically per-region (these are the load balancers that are automatically configured if you create a service of type LoadBalancer). As Alex mentioned in his answer, if you use network load balancing you will need to configure one load balancer per region and then use DNS to spread user requests to each of your load balancers. Google's HTTP(S) load balancing is cross-region (e.g. global). This means that you get a single IP that will balance across all of your HTTP(S) backends, which can be spread across multiple clusters in multiple regions. For cross cluster load balancing, you must configure the HTTP(S) load balancer yourself as described in Is it possible to use 1 Kubernetes ingress object to route traffic to k8s services in different clusters?. In either case, you will need to create a different service for for each URL path that you want to route to a unique backend. The services don't have to use different pods, although you may want to if they receive different amounts of traffic and you want to scale them independently. If you use the HTTP(S) load balancer, you can define these services and the URL mapping as part of the load balancer configuration and let the HTTP(S) balancer do the request inspection / routing for you. If you use the network load balancer, then you will need to run an HTTP(S) server yourself that terminates the connection, inspects the request, and routes it to the appropriate service. Instead of all this, can I actually get a Kubernetes cluster to span different regions? Not out of the box. You can configure a multi-zone cluster (within a region), but we don't offer explicit support for configuring a cluster than spans regions. While you could do this manually yourself, we don't recommend it as there are many parameters baked into the cluster management software that have been tuned with the assumption of low-latency communication between the master and nodes within the cluster.
How do I achieve cross-region load balancing on Google Container Engine? I will have one Kubernetes cluster per region in several regions and I need to route traffic from a single domain name to the geographically closest cluster. Some options I've investigated: Kubernetes LoadBalancers seem to be restricted to one cluster. I'm not sure how you get Kubernetes Ingress to talk to different clusters. (It sounds like this object is backed by Compute Engine HTTP load balancers though.) Compute Engine HTTP Load Balancers talking to exposed clusters sounds right, but the link I referenced seems to have some old terms like gcloud beta. Instead of all this, can I actually get a Kubernetes cluster to span different regions? Now if I want to route different URL paths to different containers within a pod, where do I do that? If it's at the Ingress or HTTP Load Balancer level, then I don't have enough granularity to address particular containers. Does that mean I would have to use a different pod + service for each different URL path?
The port is only forwarded while the kubectl process is running, so you can just kill the kubectl process that's forwarding the port. In most cases that'll just mean pressing CTRL+C in the terminal where the port-forward command is running.
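If the kubectl process is running in the background, a sketch of finding and stopping it, and of checking what is bound to the port (assuming a Linux or macOS shell):

ps aux | grep 'kubectl port-forward'
kill <pid>
lsof -i :9000    # shows whatever is currently listening on 9000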
###Instruction: If I forwarded a port using kubectl port-forward mypod 9000:9000 How can I undo that so that I can bind port 9000 with another program? Additionally, how can I test to see what ports are forwarded? ### Response: The port is only forwarded while the kubectl process is running, so you can just kill the kubectl process that's forwarding the port. In most cases that'll just mean pressing CTRL+C in the terminal where the port-forward command is running.
If I forwarded a port using kubectl port-forward mypod 9000:9000 How can I undo that so that I can bind port 9000 with another program? Additionally, how can I test to see what ports are forwarded?
Looking at the stack trace indicates you're using an older version of the fabric8 Kubernetes client (we migrated to okhttp from async-http-client a while back). First you need to upgrade to the latest version (io.fabric8:kubernetes-client:1.3.90 at the time of writing). Here's a snippet for watching events: Watch watch = client.events().inAnyNamespace().watch(new Watcher<Event>() { @Override public void eventReceived(Action action, Event resource) { logger.info("{}: {}", action, resource); } @Override public void onClose(KubernetesClientException e) { if (e != null) { e.printStackTrace(); logger.error(e.getMessage(), e); } } }); If you only want to watch a specific namespace, use client.events().inNamespace(...).watch(...) instead.
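For completeness, the dependency bump would look roughly like this (coordinates as stated above; since you're on Scala, the sbt form is probably what you want):

<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-client</artifactId>
  <version>1.3.90</version>
</dependency>

libraryDependencies += "io.fabric8" % "kubernetes-client" % "1.3.90"   // sbt equivalent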
###Instruction: I've been experimenting with the fabric8 events() API in my attempts to get a working solution for this question that I asked a while back. The answer to the aforementioned question was "use the events() API !" which seems like an excellent idea... However, I am running into a strange error when I try to use that API in even the most basic way I can imagine. I have included (in the section SAMPLE PROGRAM) a simple program which illustrates the problem. Below are excerpts of the code which are most relevant: val eventWatcher: Watcher[Event] = new Watcher[Event]() { def eventReceived(action: Action, event: Event) { logger.info("notified!") } } ... kube = getConnection // get client connection to k8s server kube.events().watch(eventWatcher) After I set up the event watch I create a pod. The pod creates just fine. However, when the 'eventReceived' call back is being invoked I get the error and stack trace that I list below in ERRORS. I never get into the eventReceived method (and the string 'notified' never appears in the logs). Any tips, suggestions or guidance, greatly appreciated ! - chris SAMPLE PROGRAM import com.fasterxml.jackson.databind.ObjectMapper import scala.collection.JavaConverters._ import com.ning.http.client.ws.WebSocket import com.typesafe.scalalogging.StrictLogging import io.fabric8.kubernetes.api.model._ import io.fabric8.kubernetes.client.DefaultKubernetesClient.ConfigBuilder import io.fabric8.kubernetes.client.Watcher.Action import io.fabric8.kubernetes.client.dsl.Resource import io.fabric8.kubernetes.client.{DefaultKubernetesClient, Watcher} object ErrorTest extends App with StrictLogging { // corresponds to --insecure-skip-tls-verify=true, according to io.fabric8.kubernetes.api.model.Cluster val trustCerts = true val k8sUrl = "http://localhost:8080" val namespaceName = "default" // replace this with name of a namespace that you know exists val imageName: String = "nginx" // make image name to load to pod a variable so we can experiment with err conditions val eventWatcher: Watcher[Event] = new Watcher[Event]() { def eventReceived(action: Action, event: Event) { logger.info("notified!") } } def go(): Unit = { val kube = getConnection dumpNamespaces(kube) kube.events().watch(eventWatcher) deployPodWithWatch(kube, getPod(image = imageName)) } def deployPodWithWatch(kube: DefaultKubernetesClient, pod: Pod): Unit = { kube.pods().inNamespace(namespaceName).create(pod) /* create the pod ! */ } def getPod(image: String): Pod = { val jsonTemplate = """ |{ | "kind": "Pod", | "apiVersion": "v1", | "metadata": { | "name": "podboy", | "labels": { | "app": "nginx" | } | }, | "spec": { | "containers": [ | { | "name": "podboy", | "image": "<image>", | "ports": [ | { | "containerPort": 80, | "protocol": "TCP" | } | ] | } | ] | } |} """. stripMargin val replacement: String = "image\": \"" + image val json = jsonTemplate.replaceAll("image\": \"<image>", replacement) System.out.println("json:" + json); new ObjectMapper().readValue(json, classOf[Pod]) } def dumpNamespaces(kube: DefaultKubernetesClient): Unit = { val namespaceNames = kube.namespaces().list().getItems.asScala.map { (ns: Namespace) => { ns.getMetadata.getName } } System.out.println("namespaces are:" + namespaceNames); } def getConnection = { val configBuilder = new ConfigBuilder() val config = configBuilder. trustCerts(trustCerts). masterUrl(k8sUrl). 
build() new DefaultKubernetesClient(config) } go() } ERRORS 22:03:16.656 [New I/O worker #2] ERROR i.f.k.c.dsl.internal.BaseOperation$3 - Could not deserialize watch event: {"type":"MODIFIED","object":{"kind":"Event","apiVersion":"v1","metadata":{"name":"spark-master-rc-zlxpp.144f898e0549644b","namespace":"dummyowner-workflow-dda91220-b63-1544803905","selfLink":"/api/v1/namespaces/dummyowner-workflow-dda91220-b63-1544803905/events/spark-master-rc-zlxpp.144f898e0549644b","resourceVersion":"204015294","creationTimestamp":null,"deletionTimestamp":"2016-05-18T06:03:06Z"},"involvedObject":{"kind":"Pod","namespace":"dummyowner-workflow-dda91220-b63-1544803905","name":"spark-master-rc-zlxpp","uid":"d413698c-19c7-11e6-9f26-74dbd1a09231","apiVersion":"v1","resourceVersion":"203578210","fieldPath":"spec.containers{spark-master}"},"reason":"Failed","message":"Failed to pull image \"ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}\": image pull failed for ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}, this may be because there are no credentials on this request. details: (Tag ${env} not found in repository ecr.vip.ebayc3.com/krylov/spark-1.5.1)","source":{"component":"kubelet","host":"kubernetes-minion-105-4040.slc01.dev.ebayc3.com"},"firstTimestamp":"2016-05-18T02:47:26Z","lastTimestamp":"2016-05-18T05:03:05Z","count":815}} com.fasterxml.jackson.databind.JsonMappingException: No resource type found for kind:Event at [Source: {"type":"MODIFIED","object":{"kind":"Event","apiVersion":"v1","metadata":{"name":"spark-master-rc-zlxpp.144f898e0549644b","namespace":"dummyowner-workflow-dda91220-b63-1544803905","selfLink":"/api/v1/namespaces/dummyowner-workflow-dda91220-b63-1544803905/events/spark-master-rc-zlxpp.144f898e0549644b","resourceVersion":"204015294","creationTimestamp":null,"deletionTimestamp":"2016-05-18T06:03:06Z"},"involvedObject":{"kind":"Pod","namespace":"dummyowner-workflow-dda91220-b63-1544803905","name":"spark-master-rc-zlxpp","uid":"d413698c-19c7-11e6-9f26-74dbd1a09231","apiVersion":"v1","resourceVersion":"203578210","fieldPath":"spec.containers{spark-master}"},"reason":"Failed","message":"Failed to pull image \"ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}\": image pull failed for ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}, this may be because there are no credentials on this request. 
details: (Tag ${env} not found in repository ecr.vip.ebayc3.com/krylov/spark-1.5.1)","source":{"component":"kubelet","host":"kubernetes-minion-105-4040.slc01.dev.ebayc3.com"},"firstTimestamp":"2016-05-18T02:47:26Z","lastTimestamp":"2016-05-18T05:03:05Z","count":815}}; line: 1, column: 1156] (through reference chain: io.fabric8.kubernetes.api.model.WatchEvent["object"]) at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:164) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:757) ~[jackson-databind-2.4.1.jar:2.4.1] at io.fabric8.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:41) ~[kubernetes-model-1.0.3.jar:1.0.3] at io.fabric8.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:29) ~[kubernetes-model-1.0.3.jar:1.0.3] at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1269) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:896) ~[jackson-databind-2.4.1.jar:2.4.1] at io.fabric8.kubernetes.client.dsl.internal.BaseOperation$3.onMessage(BaseOperation.java:422) ~[kubernetes-client-1.2.2.jar:na] at com.ning.http.client.providers.netty.ws.NettyWebSocket.notifyTextListeners(NettyWebSocket.java:240) [async-http-client-1.9.29.jar:na] at com.ning.http.client.providers.netty.ws.NettyWebSocket.onTextFragment(NettyWebSocket.java:281) [async-http-client-1.9.29.jar:na] at com.ning.http.client.providers.netty.handler.WebSocketProtocol.handle(WebSocketProtocol.java:162) [async-http-client-1.9.29.jar:na] at com.ning.http.client.providers.netty.handler.Processor.messageReceived(Processor.java:88) [async-http-client-1.9.29.jar:na] at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.10.3.Final.jar:na] at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:68) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) [netty-3.10.3.Final.jar:na] at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) [netty-3.10.3.Final.jar:na] at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536) [netty-3.10.3.Final.jar:na] at 
org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.10.3.Final.jar:na] at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.10.3.Final.jar:na] at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.10.3.Final.jar:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] ### Response: Looking at the stack trace indicates your using an older version of the fabric8 kubernetes client (we migrated to okhttp from async-http-client a while back). First you need to upgrade to the latest version (io.fabric8:kubernetes-client:1.3.90 at time of writing). Here's a snippet for watching events: Watch watch = client.events().inAnyNamespace().watch(new Watcher<Event>() { @Override public void eventReceived(Action action, Event resource) { logger.info("{}: {}", action, resource); } @Override public void onClose(KubernetesClientException e) { if (e != null) { e.printStackTrace(); logger.error(e.getMessage(), e); } } }); Unless you want to watch the namespace you want to watch with client.events().inNamespace(...).watch(...).
I've been experimenting with the fabric8 events() API in my attempts to get a working solution for this question that I asked a while back. The answer to the aforementioned question was "use the events() API !" which seems like an excellent idea... However, I am running into a strange error when I try to use that API in even the most basic way I can imagine. I have included (in the section SAMPLE PROGRAM) a simple program which illustrates the problem. Below are excerpts of the code which are most relevant: val eventWatcher: Watcher[Event] = new Watcher[Event]() { def eventReceived(action: Action, event: Event) { logger.info("notified!") } } ... kube = getConnection // get client connection to k8s server kube.events().watch(eventWatcher) After I set up the event watch I create a pod. The pod creates just fine. However, when the 'eventReceived' call back is being invoked I get the error and stack trace that I list below in ERRORS. I never get into the eventReceived method (and the string 'notified' never appears in the logs). Any tips, suggestions or guidance, greatly appreciated ! - chris SAMPLE PROGRAM import com.fasterxml.jackson.databind.ObjectMapper import scala.collection.JavaConverters._ import com.ning.http.client.ws.WebSocket import com.typesafe.scalalogging.StrictLogging import io.fabric8.kubernetes.api.model._ import io.fabric8.kubernetes.client.DefaultKubernetesClient.ConfigBuilder import io.fabric8.kubernetes.client.Watcher.Action import io.fabric8.kubernetes.client.dsl.Resource import io.fabric8.kubernetes.client.{DefaultKubernetesClient, Watcher} object ErrorTest extends App with StrictLogging { // corresponds to --insecure-skip-tls-verify=true, according to io.fabric8.kubernetes.api.model.Cluster val trustCerts = true val k8sUrl = "http://localhost:8080" val namespaceName = "default" // replace this with name of a namespace that you know exists val imageName: String = "nginx" // make image name to load to pod a variable so we can experiment with err conditions val eventWatcher: Watcher[Event] = new Watcher[Event]() { def eventReceived(action: Action, event: Event) { logger.info("notified!") } } def go(): Unit = { val kube = getConnection dumpNamespaces(kube) kube.events().watch(eventWatcher) deployPodWithWatch(kube, getPod(image = imageName)) } def deployPodWithWatch(kube: DefaultKubernetesClient, pod: Pod): Unit = { kube.pods().inNamespace(namespaceName).create(pod) /* create the pod ! */ } def getPod(image: String): Pod = { val jsonTemplate = """ |{ | "kind": "Pod", | "apiVersion": "v1", | "metadata": { | "name": "podboy", | "labels": { | "app": "nginx" | } | }, | "spec": { | "containers": [ | { | "name": "podboy", | "image": "<image>", | "ports": [ | { | "containerPort": 80, | "protocol": "TCP" | } | ] | } | ] | } |} """. stripMargin val replacement: String = "image\": \"" + image val json = jsonTemplate.replaceAll("image\": \"<image>", replacement) System.out.println("json:" + json); new ObjectMapper().readValue(json, classOf[Pod]) } def dumpNamespaces(kube: DefaultKubernetesClient): Unit = { val namespaceNames = kube.namespaces().list().getItems.asScala.map { (ns: Namespace) => { ns.getMetadata.getName } } System.out.println("namespaces are:" + namespaceNames); } def getConnection = { val configBuilder = new ConfigBuilder() val config = configBuilder. trustCerts(trustCerts). masterUrl(k8sUrl). 
build() new DefaultKubernetesClient(config) } go() } ERRORS 22:03:16.656 [New I/O worker #2] ERROR i.f.k.c.dsl.internal.BaseOperation$3 - Could not deserialize watch event: {"type":"MODIFIED","object":{"kind":"Event","apiVersion":"v1","metadata":{"name":"spark-master-rc-zlxpp.144f898e0549644b","namespace":"dummyowner-workflow-dda91220-b63-1544803905","selfLink":"/api/v1/namespaces/dummyowner-workflow-dda91220-b63-1544803905/events/spark-master-rc-zlxpp.144f898e0549644b","resourceVersion":"204015294","creationTimestamp":null,"deletionTimestamp":"2016-05-18T06:03:06Z"},"involvedObject":{"kind":"Pod","namespace":"dummyowner-workflow-dda91220-b63-1544803905","name":"spark-master-rc-zlxpp","uid":"d413698c-19c7-11e6-9f26-74dbd1a09231","apiVersion":"v1","resourceVersion":"203578210","fieldPath":"spec.containers{spark-master}"},"reason":"Failed","message":"Failed to pull image \"ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}\": image pull failed for ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}, this may be because there are no credentials on this request. details: (Tag ${env} not found in repository ecr.vip.ebayc3.com/krylov/spark-1.5.1)","source":{"component":"kubelet","host":"kubernetes-minion-105-4040.slc01.dev.ebayc3.com"},"firstTimestamp":"2016-05-18T02:47:26Z","lastTimestamp":"2016-05-18T05:03:05Z","count":815}} com.fasterxml.jackson.databind.JsonMappingException: No resource type found for kind:Event at [Source: {"type":"MODIFIED","object":{"kind":"Event","apiVersion":"v1","metadata":{"name":"spark-master-rc-zlxpp.144f898e0549644b","namespace":"dummyowner-workflow-dda91220-b63-1544803905","selfLink":"/api/v1/namespaces/dummyowner-workflow-dda91220-b63-1544803905/events/spark-master-rc-zlxpp.144f898e0549644b","resourceVersion":"204015294","creationTimestamp":null,"deletionTimestamp":"2016-05-18T06:03:06Z"},"involvedObject":{"kind":"Pod","namespace":"dummyowner-workflow-dda91220-b63-1544803905","name":"spark-master-rc-zlxpp","uid":"d413698c-19c7-11e6-9f26-74dbd1a09231","apiVersion":"v1","resourceVersion":"203578210","fieldPath":"spec.containers{spark-master}"},"reason":"Failed","message":"Failed to pull image \"ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}\": image pull failed for ecr.vip.ebayc3.com/krylov/spark-1.5.1:${env}, this may be because there are no credentials on this request. 
details: (Tag ${env} not found in repository ecr.vip.ebayc3.com/krylov/spark-1.5.1)","source":{"component":"kubelet","host":"kubernetes-minion-105-4040.slc01.dev.ebayc3.com"},"firstTimestamp":"2016-05-18T02:47:26Z","lastTimestamp":"2016-05-18T05:03:05Z","count":815}}; line: 1, column: 1156] (through reference chain: io.fabric8.kubernetes.api.model.WatchEvent["object"]) at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:164) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:757) ~[jackson-databind-2.4.1.jar:2.4.1] at io.fabric8.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:41) ~[kubernetes-model-1.0.3.jar:1.0.3] at io.fabric8.kubernetes.internal.KubernetesDeserializer.deserialize(KubernetesDeserializer.java:29) ~[kubernetes-model-1.0.3.jar:1.0.3] at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:99) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:242) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:118) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1269) ~[jackson-databind-2.4.1.jar:2.4.1] at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:896) ~[jackson-databind-2.4.1.jar:2.4.1] at io.fabric8.kubernetes.client.dsl.internal.BaseOperation$3.onMessage(BaseOperation.java:422) ~[kubernetes-client-1.2.2.jar:na] at com.ning.http.client.providers.netty.ws.NettyWebSocket.notifyTextListeners(NettyWebSocket.java:240) [async-http-client-1.9.29.jar:na] at com.ning.http.client.providers.netty.ws.NettyWebSocket.onTextFragment(NettyWebSocket.java:281) [async-http-client-1.9.29.jar:na] at com.ning.http.client.providers.netty.handler.WebSocketProtocol.handle(WebSocketProtocol.java:162) [async-http-client-1.9.29.jar:na] at com.ning.http.client.providers.netty.handler.Processor.messageReceived(Processor.java:88) [async-http-client-1.9.29.jar:na] at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.10.3.Final.jar:na] at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:68) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) [netty-3.10.3.Final.jar:na] at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462) [netty-3.10.3.Final.jar:na] at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536) [netty-3.10.3.Final.jar:na] at 
org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [netty-3.10.3.Final.jar:na] at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.10.3.Final.jar:na] at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.10.3.Final.jar:na] at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.10.3.Final.jar:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Apparently, the L7 ingress doesn't currently monitor the TLS secret for changes. But a PR to solve this has been merged, so it should only be a matter of time.
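Until that change ships in a release, a crude workaround (a sketch — the rendered ingress name l7-ingress-prod and the manifest file name ingress.yaml are assumptions) is to update the web-secret secret with the new cert/key and then recreate the ingress so the GCE L7 controller re-reads it:

# after updating the web-secret secret with the new certificate and key
kubectl delete ingress l7-ingress-prod
kubectl create -f ingress.yaml

Note that this tears down and re-provisions the GCE load balancer, which can take several minutes.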
###Instruction: I am using the standard L7 load balancing ingress on Google Container Engine. I have installed it through the following ingress definition: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: l7-ingress-{{environment}} spec: tls: - secretName: web-secret backend: serviceName: web servicePort: 80 Now, my question is, how do I ensure that the TLS certificate is updated once the secret web-secret changes? AFAICT, it currently stays the same even though the underlying secret changes. ### Response: Apparently, the L7 ingress doesn't currently monitor the TLS secret for changes. But a PR to solve this has been merged, so it should only be a matter of time.
I am using the standard L7 load balancing ingress on Google Container Engine. I have installed it through the following ingress definition: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: l7-ingress-{{environment}} spec: tls: - secretName: web-secret backend: serviceName: web servicePort: 80 Now, my question is, how do I ensure that the TLS certificate is updated once the secret web-secret changes? AFAICT, it currently stays the same even though the underlying secret changes.
By default Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there it won't be able to pull it. You can run a local Kubernetes registry with the registry cluster addon. Then tag your images with localhost:5000: docker tag aii localhost:5000/dev/aii Push the image to the Kubernetes registry: docker push localhost:5000/dev/aii And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image. Alternatively, you can run a private Docker registry through one of the providers that offers this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get set up with a local Kubernetes Docker registry.
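If kubelet runs against the same local Docker daemon you built the image with (as in the single-node setup from the question), another option — just a sketch, and only sensible for local development — is to skip the registry and tell Kubernetes not to pull at all:

containers:
- name: aii
  image: aii
  imagePullPolicy: Never   # or IfNotPresent; the image must already exist on the node

With the bare :latest tag the default pull policy is Always, which is why a pull is attempted (and fails) in the first place.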
###Instruction: I'm trying to run my first kubernetes pod locally. I've run the following command (from here): export ARCH=amd64 docker run -d \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/var/lib/docker/:/var/lib/docker:rw \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --pid=host \ --privileged \ gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \ /hyperkube kubelet \ --containerized \ --hostname-override=127.0.0.1 \ --api-servers=http://localhost:8080 \ --config=/etc/kubernetes/manifests \ --cluster-dns=10.0.0.10 \ --cluster-domain=cluster.local \ --allow-privileged --v=2 Then, I've trying to run the following: kubectl create -f ./run-aii.yaml run-aii.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: aii spec: replicas: 2 template: metadata: labels: run: aii spec: containers: - name: aii image: aii ports: - containerPort: 5144 env: - name: KAFKA_IP value: kafka volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /home/aii/core name: core-aii readOnly: true - mountPath: /home/aii/genome name: genome-aii readOnly: true - mountPath: /home/aii/main name: main-aii readOnly: true - name: kafka image: kafkazoo volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /root/config name: config-data readOnly: true - name: ws image: ws ports: - containerPort: 3000 volumes: - name: scripts-data hostPath: path: /home/aii/general/infra/script - name: config-data hostPath: path: /home/aii/general/infra/config - name: core-aii hostPath: path: /home/aii/general/core - name: genome-aii hostPath: path: /home/aii/general/genome - name: main-aii hostPath: path: /home/aii/general/main Now, when I run: kubectl get pods I'm getting: NAME READY STATUS RESTARTS AGE aii-806125049-18ocr 0/3 ImagePullBackOff 0 52m aii-806125049-6oi8o 0/3 ImagePullBackOff 0 52m aii-pod 0/3 ImagePullBackOff 0 23h k8s-etcd-127.0.0.1 1/1 Running 0 2d k8s-master-127.0.0.1 4/4 Running 0 2d k8s-proxy-127.0.0.1 1/1 Running 0 2d nginx-198147104-9kajo 1/1 Running 0 2d BTW: docker images return: REPOSITORY TAG IMAGE ID CREATED SIZE ws latest fa7c5f6ef83a 7 days ago 706.8 MB kafkazoo latest 84c687b0bd74 9 days ago 697.7 MB aii latest bd12c4acbbaf 9 days ago 1.421 GB node 4.4 1a93433cee73 11 days ago 647 MB gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 11 days ago 316.7 MB nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB docker_kafka latest e1d954a6a827 5 weeks ago 697.7 MB spotify/kafka latest 30d3cef1fe8e 12 weeks ago 421.6 MB wurstmeister/zookeeper latest dc00f1198a44 3 months ago 468.7 MB centos latest 61b442687d68 4 months ago 196.6 MB centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB sequenceiq/hadoop-docker latest 5c3cc170c6bc 10 months ago 1.766 GB why do I get the ImagePullBackOff ?? ### Response: By default Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there it won't be able to pull it. You can run a local Kubernetes registry with the registry cluster addon. Then tag your images with localhost:5000: docker tag aii localhost:5000/dev/aii Push the image to the Kubernetes registry: docker push localhost:5000/dev/aii And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image. 
Alternatively, you can run a private Docker registry through one of the providers that offers this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get setup with a local Kubernetes Docker registry.
I'm trying to run my first kubernetes pod locally. I've run the following command (from here): export ARCH=amd64 docker run -d \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/var/lib/docker/:/var/lib/docker:rw \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --pid=host \ --privileged \ gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \ /hyperkube kubelet \ --containerized \ --hostname-override=127.0.0.1 \ --api-servers=http://localhost:8080 \ --config=/etc/kubernetes/manifests \ --cluster-dns=10.0.0.10 \ --cluster-domain=cluster.local \ --allow-privileged --v=2 Then, I've trying to run the following: kubectl create -f ./run-aii.yaml run-aii.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: aii spec: replicas: 2 template: metadata: labels: run: aii spec: containers: - name: aii image: aii ports: - containerPort: 5144 env: - name: KAFKA_IP value: kafka volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /home/aii/core name: core-aii readOnly: true - mountPath: /home/aii/genome name: genome-aii readOnly: true - mountPath: /home/aii/main name: main-aii readOnly: true - name: kafka image: kafkazoo volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /root/config name: config-data readOnly: true - name: ws image: ws ports: - containerPort: 3000 volumes: - name: scripts-data hostPath: path: /home/aii/general/infra/script - name: config-data hostPath: path: /home/aii/general/infra/config - name: core-aii hostPath: path: /home/aii/general/core - name: genome-aii hostPath: path: /home/aii/general/genome - name: main-aii hostPath: path: /home/aii/general/main Now, when I run: kubectl get pods I'm getting: NAME READY STATUS RESTARTS AGE aii-806125049-18ocr 0/3 ImagePullBackOff 0 52m aii-806125049-6oi8o 0/3 ImagePullBackOff 0 52m aii-pod 0/3 ImagePullBackOff 0 23h k8s-etcd-127.0.0.1 1/1 Running 0 2d k8s-master-127.0.0.1 4/4 Running 0 2d k8s-proxy-127.0.0.1 1/1 Running 0 2d nginx-198147104-9kajo 1/1 Running 0 2d BTW: docker images return: REPOSITORY TAG IMAGE ID CREATED SIZE ws latest fa7c5f6ef83a 7 days ago 706.8 MB kafkazoo latest 84c687b0bd74 9 days ago 697.7 MB aii latest bd12c4acbbaf 9 days ago 1.421 GB node 4.4 1a93433cee73 11 days ago 647 MB gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 11 days ago 316.7 MB nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB docker_kafka latest e1d954a6a827 5 weeks ago 697.7 MB spotify/kafka latest 30d3cef1fe8e 12 weeks ago 421.6 MB wurstmeister/zookeeper latest dc00f1198a44 3 months ago 468.7 MB centos latest 61b442687d68 4 months ago 196.6 MB centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB sequenceiq/hadoop-docker latest 5c3cc170c6bc 10 months ago 1.766 GB why do I get the ImagePullBackOff ??
Kubernetes Dashboard is pretty limited at the moment, and only supports ReplicationControllers. If you create a ReplicationController then you will be able to see the Pods connected to it, check their memory and CPU usage, and view their logs. Work is being done to improve Dashboard and in the future it should support other Kubernetes resources besides ReplicationControllers. You can see some mockups in the GitHub repo.
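As a stopgap you can ask kubectl run to create a ReplicationController instead of a Deployment, so the workload shows up in the current Dashboard — a sketch using the image from the question:

kubectl run my-app --image=ecr.us-east-1.amazonaws.com/my-app:v1 \
    --generator=run/v1 -l name=my-app --replicas=1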
###Instruction: I've created a deployment like this: kubectl run my-app --image=ecr.us-east-1.amazonaws.com/my-app:v1 -l name=my-app --replicas=1 Now I goto the Kubernetes Dashboard: https://172.0.0.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard But I dont see my-app listed here. Is it possible to use the Kubernetes Dashboard to view deployments? I'd like to use the dashboard to do things like view the deployments mem/cpu usage, check logs, etc ### Response: Kubernetes Dashboard is pretty limited at the moment, and only supports ReplicationControllers. If you create a ReplicationController then you will be able to see the Pods connected to it, check their memory and CPU usage, and view their logs. Work is being done to improve Dashboard and in the future it should support other Kubernetes resources besides ReplicationControllers. You can see some mockups in the GitHub repo.
I've created a deployment like this: kubectl run my-app --image=ecr.us-east-1.amazonaws.com/my-app:v1 -l name=my-app --replicas=1 Now I go to the Kubernetes Dashboard: https://172.0.0.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard But I don't see my-app listed here. Is it possible to use the Kubernetes Dashboard to view deployments? I'd like to use the dashboard to do things like view the deployment's mem/cpu usage, check logs, etc.
As you already noticed, by design all containers in a pod are destined to live and die together. It's a bit hard to tell what your best alternative would be without knowing what kind of maintenance task your sidekick needs to perform exactly. Generally speaking, I can think of three approaches: Keep your maintenance container running. This is probably a fairly ugly solution as it wastes resources. It really only makes sense if the maintenance task can benefit from running periodically. Move the maintenance task over to your primary container, effectively converting your multi-container pod into a single-container one. I assume that you can run the task asynchronously (as you would already be able to run it in a separate container); if, for some reasons, you cannot, consider modifying readiness and liveness probes accordingly so that your container is given enough time to finish any boot-up procedures before becoming eligible for termination. Consider adjusting your design so that the maintenance task may run as a separate pod (or maybe even as a job). You'd then need to manage any dependencies and wiring yourself by putting together Kubernetes primitives properly.
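For the third approach, a minimal sketch of what the maintenance task could look like as a Job (the name and image are hypothetical, and on older clusters the apiVersion may be extensions/v1beta1 rather than batch/v1):

apiVersion: batch/v1
kind: Job
metadata:
  name: maintenance-task
spec:
  template:
    metadata:
      name: maintenance-task
    spec:
      containers:
      - name: maintenance
        image: my-registry/maintenance:latest   # hypothetical image
      restartPolicy: Never

A Job runs the container to completion once instead of restarting it, which is exactly the semantics a short-lived sidekick wants.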
###Instruction: I have a ReplicationController containing two containers in a pod, the first is a long-living pod, the second does a few maintenance tasks when the RC starts up a POD. However as the second container is short lived, it stops itself when it finishes its start tasks. When Kuberbetes notices this, it kills off the POD and starts a new one... What is the correct way to handle this in Kuberbetes? ### Response: As you already noticed, by design all containers in a pod are destined to live and die together. It's a bit hard to tell what your best alternative would be without knowing what kind of maintenance task your sidekick needs to perform exactly. Generally speaking, I can think of three approaches: Keep your maintenance container running. This is probably a fairly ugly solution as it wastes resources. It really only makes sense if the maintenance task can benefit from running periodically. Move the maintenance task over to your primary container, effectively converting your multi-container pod into a single-container one. I assume that you can run the task asynchronously (as you would already be able to run it in a separate container); if, for some reasons, you cannot, consider modifying readiness and liveness probes accordingly so that your container is given enough time to finish any boot-up procedures before becoming eligible for termination. Consider adjusting your design so that the maintenance task may run as a separate pod (or maybe even as a job). You'd then need to manage any dependencies and wiring yourself by putting together Kubernetes primitives properly.
I have a ReplicationController containing two containers in a pod; the first is a long-living container, the second does a few maintenance tasks when the RC starts up a POD. However, as the second container is short-lived, it stops itself when it finishes its start tasks. When Kubernetes notices this, it kills off the POD and starts a new one... What is the correct way to handle this in Kubernetes?
You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not. In your Pod definition, specify that the container should pull values from a Secret: apiVersion: v1 kind: Pod metadata: labels: context: docker-k8s-lab name: mysql-pod name: mysql-pod spec: containers: - image: "mysql:latest" name: mysql ports: - containerPort: 3306 envFrom: - secretRef: name: mysql-secret Note that this syntax is only available in Kubernetes 1.6 or later. On an earlier version of Kubernetes you will have to specify each value manually, e.g.: env: - name: MYSQL_USER valueFrom: secretKeyRef: name: mysql-secret key: MYSQL_USER (Note that env takes an array as its value) Repeat this for every value. Whichever approach you use, you can now define two different Secrets, one for production and one for dev. dev-secret.yaml: apiVersion: v1 kind: Secret metadata: name: mysql-secret type: Opaque data: MYSQL_USER: bXlzcWwK MYSQL_PASSWORD: bXlzcWwK MYSQL_DATABASE: c2FtcGxlCg== MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK prod-secret.yaml: apiVersion: v1 kind: Secret metadata: name: mysql-secret type: Opaque data: MYSQL_USER: am9obgo= MYSQL_PASSWORD: c2VjdXJlCg== MYSQL_DATABASE: cHJvZC1kYgo= MYSQL_ROOT_PASSWORD: cm9vdHkK And deploy the correct secret to the correct Kubernetes cluster: kubectl config use-context dev kubectl create -f dev-secret.yaml kubectl config use-context prod kubectl create -f prod-secret.yaml Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
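One small aside on the data values: they are plain base64-encoded strings, and it's worth using echo -n so you don't bake a trailing newline into the credential (the values above, e.g. bXlzcWwK, were encoded with the newline included):

$ echo -n "mysql" | base64
bXlzcWw=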
###Instruction: Background: Currently we're using Docker and Docker Compose for our services. We have externalized the configuration for different environments into files that define environment variables read by the application. For example a prod.env file: ENV_VAR_ONE=Something Prod ENV_VAR_TWO=Something else Prod and a test.env file: ENV_VAR_ONE=Something Test ENV_VAR_TWO=Something else Test Thus we can simply use the prod.env or test.env file when starting the container: docker run --env-file prod.env <image> Our application then picks up its configuration based on the environment variables defined in prod.env. Questions: Is there a way to provide environment variables from a file in Kubernetes (for example when defining a pod) instead of hardcoding them like this: apiVersion: v1 kind: Pod metadata: labels: context: docker-k8s-lab name: mysql-pod name: mysql-pod spec: containers: - env: - name: MYSQL_USER value: mysql - name: MYSQL_PASSWORD value: mysql - name: MYSQL_DATABASE value: sample - name: MYSQL_ROOT_PASSWORD value: supersecret image: "mysql:latest" name: mysql ports: - containerPort: 3306 If this is not possible, what is the suggested approach? ### Response: You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not. In your Pod definition specify that the container should pull values from a Secret: apiVersion: v1 kind: Pod metadata: labels: context: docker-k8s-lab name: mysql-pod name: mysql-pod spec: containers: - image: "mysql:latest" name: mysql ports: - containerPort: 3306 envFrom: - secretRef: name: mysql-secret Note that this syntax is only available in Kubernetes 1.6 or later. On an earlier version of Kubernetes you will have to specify each value manually, e.g.: env: - name: MYSQL_USER valueFrom: secretKeyRef: name: mysql-secret key: MYSQL_USER (Note that env take an array as value) And repeating for every value. Whichever approach you use, you can now define two different Secrets, one for production and one for dev. dev-secret.yaml: apiVersion: v1 kind: Secret metadata: name: mysql-secret type: Opaque data: MYSQL_USER: bXlzcWwK MYSQL_PASSWORD: bXlzcWwK MYSQL_DATABASE: c2FtcGxlCg== MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK prod-secret.yaml: apiVersion: v1 kind: Secret metadata: name: mysql-secret type: Opaque data: MYSQL_USER: am9obgo= MYSQL_PASSWORD: c2VjdXJlCg== MYSQL_DATABASE: cHJvZC1kYgo= MYSQL_ROOT_PASSWORD: cm9vdHkK And deploy the correct secret to the correct Kubernetes cluster: kubectl config use-context dev kubectl create -f dev-secret.yaml kubectl config use-context prod kubectl create -f prod-secret.yaml Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
Background: Currently we're using Docker and Docker Compose for our services. We have externalized the configuration for different environments into files that define environment variables read by the application. For example a prod.env file: ENV_VAR_ONE=Something Prod ENV_VAR_TWO=Something else Prod and a test.env file: ENV_VAR_ONE=Something Test ENV_VAR_TWO=Something else Test Thus we can simply use the prod.env or test.env file when starting the container: docker run --env-file prod.env <image> Our application then picks up its configuration based on the environment variables defined in prod.env. Questions: Is there a way to provide environment variables from a file in Kubernetes (for example when defining a pod) instead of hardcoding them like this: apiVersion: v1 kind: Pod metadata: labels: context: docker-k8s-lab name: mysql-pod name: mysql-pod spec: containers: - env: - name: MYSQL_USER value: mysql - name: MYSQL_PASSWORD value: mysql - name: MYSQL_DATABASE value: sample - name: MYSQL_ROOT_PASSWORD value: supersecret image: "mysql:latest" name: mysql ports: - containerPort: 3306 If this is not possible, what is the suggested approach?
ConfigMaps containing text files should be no problem at all. Take a look at the --from-file option: http://kubernetes.io/docs/user-guide/configmap/. I'm unsure about binary files inside a ConfigMap. I'm able to add a JPEG, but trying to read the object back results in an error, so this is probably not supported (it would need additional base64 encoding or similar). $ kubectl create configmap test --from-file=foo1=/tmp/scudcloud_U0GQ27N0M.jpg configmap "test" created $ kubectl get configmap test -o yaml error: error converting JSON to YAML: %!(EXTRA *errors.errorString=yaml: control characters are not allowed)
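For the nginx use case in the question, a sketch of how the wiring could look (the ConfigMap name, key and mount path are assumptions) — create the ConfigMap from your config file and mount it as a volume:

kubectl create configmap nginx-proxy-conf --from-file=nginx.conf

# in the nginx pod template:
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: proxy-conf
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: proxy-conf
    configMap:
      name: nginx-proxy-conf

The file then shows up as /etc/nginx/conf.d/nginx.conf inside the pods, and updating the ConfigMap updates the mounted file without rebuilding any image (nginx still needs a reload to pick up the change).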
###Instruction: Due to limitations of ingress resources (in my case I need more than 50 routes which is not supported in Google Container Engine) I'm considering using Nginx as a reverse proxy to other backend services. What I want to do is essentially the same as an ingress resource provides such as routing path "/x" to service x and "/y" to service y. I'd like to run more than one instance of Nginx for HA, probably behind a service. My question mainly concerns configuration where I have a couple of options: Create a custom Docker image with nginx as base image and then copy our nginx configuration into this image. This would make it very easy to run this nginx-based image on Kubernetes. But while this works it would require rebuilding, publishing and storing a new custom nginx image every time the configuration changes. We already have pipelines setup for this so it won't be a big problem operationally. Use the vanilla nginx docker image, create a GCE persistent disk (we're running on Google Container Engine) that is shared between all nginx pods in read only mode. The problem I see with this is approach is how to copy configuration updates to the disk in an easy manner? Is there a better option? I've looked at config maps and/or secrets (which would solve the configuration update problem) but I don't think they can contain arbitrary data such as an nginx config file. ### Response: ConfigMaps containing text-files should be no problem at all. Take a look at the --from-file option: http://kubernetes.io/docs/user-guide/configmap/. Im unsure about binary files inside a ConfigMap. I'm able to add a JPEG but trying to read object results in an error so this might not be intended (needs additional base64 encoding or such). $ kubectl create configmap test --from-file=foo1=/tmp/scudcloud_U0GQ27N0M.jpg configmap "test" created $ kubectl get configmap test -o yaml error: error converting JSON to YAML: %!(EXTRA *errors.errorString=yaml: control characters are not allowed)
Due to limitations of ingress resources (in my case I need more than 50 routes, which is not supported in Google Container Engine) I'm considering using Nginx as a reverse proxy to other backend services. What I want to do is essentially the same as an ingress resource provides, such as routing path "/x" to service x and "/y" to service y. I'd like to run more than one instance of Nginx for HA, probably behind a service. My question mainly concerns configuration, where I have a couple of options: Create a custom Docker image with nginx as base image and then copy our nginx configuration into this image. This would make it very easy to run this nginx-based image on Kubernetes. But while this works it would require rebuilding, publishing and storing a new custom nginx image every time the configuration changes. We already have pipelines set up for this so it won't be a big problem operationally. Use the vanilla nginx docker image, create a GCE persistent disk (we're running on Google Container Engine) that is shared between all nginx pods in read-only mode. The problem I see with this approach is how to copy configuration updates to the disk in an easy manner. Is there a better option? I've looked at config maps and/or secrets (which would solve the configuration update problem) but I don't think they can contain arbitrary data such as an nginx config file.
Off the top of my head I can think of two approaches, both of which I've used: If you want the script to start immediately, use CMD or ENTRYPOINT in your Dockerfile to kick off the script at the start of the container. So, abstracted, something like this: FROM python COPY myscript.py /tmp CMD ["python", "/tmp/myscript.py"] If you want the script to only run when triggered, say, by an event such as a web request, "wrap" the Python code inside something like flask, bottle or uwsgi. This could look like this: Your wrapper script is wrapper.py which has: #!/usr/bin/env python import bottle app = bottle.Bottle() @app.route('/') def main(): call_my_python_code() In that case your Dockerfile may set this up by exposing a port and then starting the web server: FROM python COPY wrapper.py runserver.sh myscript.py /tmp EXPOSE 80 CMD ["/bin/bash", "-x", "/tmp/runserver.sh" ] With runserver.sh just setting up the server bit: #!/bin/bash set -e exec uwsgi --http 0.0.0.0:80 --wsgi-file /tmp/wrapper.py --callable app
###Instruction: I am brand new to Docker, so I apologize for any ignorance. I have a web API which needs to trigger a Python script to run. The script is very resource intensive (CPU and RAM), so it needs to run on a different server than the API server. I am planning to run this script within a Docker container. I am also planning to use Kubernates. My API will build a container from an image, and then the Python script needs to run once the image is up and running. How can I trigger the script to run? Would it make sense to use Rundeck for this? Or would it make more sense to simply use SSH? Or something else? ### Response: Off the top of my head I can think of two approaches, both of which I've used: If you want the script to start immediately, use CMD or ENTRYPOINT in your Dockerfile to kick off the script at start of the container So, abstracted, something like this FROM python COPY myscript.py /tmp CMD ["python", "/tmp/myscript.py"] If you want the script to only run when triggered, say, by an event of web request, "wrap" the Python code inside something like flask, bottle or uwsgi. This could look like this: Your wrapper script is mywrapper.py which has: #!usr/bin/env python import bottle app = bottle.Bottle() @app.route('/') def main(): call_my_python_code() In that case your Dockerfile may set this up through exposing a port and then starting the web server: FROM python COPY wrapper.py runserver.sh myscript.py /tmp EXPOSE 80 CMD ["/bin/bash", "-x", "/tmp/runserver.sh" ] With runserver.sh just setting up the server bit: #!/bin/bash set -e exec uwsgi --http 0.0.0.0:80 --wsgi-file wrapper.py --callable app
I am brand new to Docker, so I apologize for any ignorance. I have a web API which needs to trigger a Python script to run. The script is very resource-intensive (CPU and RAM), so it needs to run on a different server than the API server. I am planning to run this script within a Docker container. I am also planning to use Kubernetes. My API will build a container from an image, and then the Python script needs to run once the container is up and running. How can I trigger the script to run? Would it make sense to use Rundeck for this? Or would it make more sense to simply use SSH? Or something else?
I found my mistake. Kubernetes uses iptables to route traffic between pods, so every port a service is reached on has to be declared in its {spec.ports}; in my case, port 18010 had to be opened as well: [root@localhost ~]# kb get svc NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE bj001 10.254.91.218 <none> 3306/TCP,18010/TCP app=bj001 41m dns 10.254.0.2 <none> 53/UDP,53/TCP app=dns 1d kubernetes 10.254.0.1 <none> 443/TCP <none> 8d nginx 10.254.72.30 172.16.2.51 80/TCP app=nginx 1d
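For reference, a sketch of the corrected service spec (reconstructed from the kb get svc output above, so treat the port names as assumptions) — note that a multi-port service requires each port to be named:

spec:
  selector:
    app: bj001
  ports:
  - name: mysql
    port: 3306
  - name: app
    port: 18010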
###Instruction: I have done all kubernetes DNS service config,and test it running ok. but how could I access the pod from serviceName(DNS domain name)? pod list: [root@localhost ~]# kubectl get pod NAME READY STATUS RESTARTS AGE bj001-y1o2i 3/3 Running 12 20h dns-itc8d 3/3 Running 18 1d nginx-rc5bh 1/1 Running 1 15h service list: [root@localhost ~]# kb get svc NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE bj001 10.254.54.162 172.16.2.51 30101/TCP,30102/TCP app=bj001 1d dns 10.254.0.2 <none> 53/UDP,53/TCP app=dns 1d kubernetes 10.254.0.1 <none> 443/TCP <none> 8d nginx 10.254.72.30 172.16.2.51 80/TCP app=nginx 20h endpoints: [root@localhost ~]# kb get endpoints NAME ENDPOINTS AGE bj001 172.17.12.3:18010,172.17.12.3:3306 1d dns 172.17.87.3:53,172.17.87.3:53 1d kubernetes 172.16.2.50:6443 8d nginx 172.17.12.2:80 20h in nginx pod, I can ping pod bj001,and find the DNS name,but can not ping dns domain name. like this: [root@localhost ~]# kb exec -it nginx-rc5bh sh sh-4.2# nslookup bj001 Server: 10.254.0.2 Address: 10.254.0.2#53 Name: bj001.default.svc.cluster.local Address: 10.254.54.162 sh-4.2# ping 172.17.12.3 PING 172.17.12.3 (172.17.12.3) 56(84) bytes of data. 64 bytes from 172.17.12.3: icmp_seq=1 ttl=64 time=0.073 ms 64 bytes from 172.17.12.3: icmp_seq=2 ttl=64 time=0.082 ms 64 bytes from 172.17.12.3: icmp_seq=3 ttl=64 time=0.088 ms 64 bytes from 172.17.12.3: icmp_seq=4 ttl=64 time=0.105 ms ^C --- 172.17.12.3 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 2999ms rtt min/avg/max/mdev = 0.073/0.087/0.105/0.011 ms sh-4.2# ping bj001 PING bj001.default.svc.cluster.local (10.254.54.162) 56(84) bytes of data. ^C --- bj001.default.svc.cluster.local ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 999ms ### Response: I have found my fault. kubernetes use iptables to transmit with different pod. So we should do that all we used port should be seted in the {spec.ports}, like my issue, the 18010 port must be opened. [root@localhost ~]# kb get svc NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE bj001 10.254.91.218 <none> 3306/TCP,18010/TCP app=bj001 41m dns 10.254.0.2 <none> 53/UDP,53/TCP app=dns 1d kubernetes 10.254.0.1 <none> 443/TCP <none> 8d nginx 10.254.72.30 172.16.2.51 80/TCP app=nginx 1d
I have done all kubernetes DNS service config,and test it running ok. but how could I access the pod from serviceName(DNS domain name)? pod list: [root@localhost ~]# kubectl get pod NAME READY STATUS RESTARTS AGE bj001-y1o2i 3/3 Running 12 20h dns-itc8d 3/3 Running 18 1d nginx-rc5bh 1/1 Running 1 15h service list: [root@localhost ~]# kb get svc NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE bj001 10.254.54.162 172.16.2.51 30101/TCP,30102/TCP app=bj001 1d dns 10.254.0.2 <none> 53/UDP,53/TCP app=dns 1d kubernetes 10.254.0.1 <none> 443/TCP <none> 8d nginx 10.254.72.30 172.16.2.51 80/TCP app=nginx 20h endpoints: [root@localhost ~]# kb get endpoints NAME ENDPOINTS AGE bj001 172.17.12.3:18010,172.17.12.3:3306 1d dns 172.17.87.3:53,172.17.87.3:53 1d kubernetes 172.16.2.50:6443 8d nginx 172.17.12.2:80 20h in nginx pod, I can ping pod bj001,and find the DNS name,but can not ping dns domain name. like this: [root@localhost ~]# kb exec -it nginx-rc5bh sh sh-4.2# nslookup bj001 Server: 10.254.0.2 Address: 10.254.0.2#53 Name: bj001.default.svc.cluster.local Address: 10.254.54.162 sh-4.2# ping 172.17.12.3 PING 172.17.12.3 (172.17.12.3) 56(84) bytes of data. 64 bytes from 172.17.12.3: icmp_seq=1 ttl=64 time=0.073 ms 64 bytes from 172.17.12.3: icmp_seq=2 ttl=64 time=0.082 ms 64 bytes from 172.17.12.3: icmp_seq=3 ttl=64 time=0.088 ms 64 bytes from 172.17.12.3: icmp_seq=4 ttl=64 time=0.105 ms ^C --- 172.17.12.3 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 2999ms rtt min/avg/max/mdev = 0.073/0.087/0.105/0.011 ms sh-4.2# ping bj001 PING bj001.default.svc.cluster.local (10.254.54.162) 56(84) bytes of data. ^C --- bj001.default.svc.cluster.local ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 999ms
You could probably create one PersistentVolumeClaim in each namespace. Take a look at Can a PVC be bound to a specific PV? on how to "pre-bind" PersistentVolumes to PersistentVolumeClaims. It might not be an ideal solution, but it should work until PVCs support label selectors.
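A sketch of what the pre-binding could look like — one PersistentVolume per namespace, each pointing at its own GCE disk and pre-bound to a claim via claimRef (disk, claim and namespace names are hypothetical):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-staging
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-app-staging-disk
    fsType: ext4
  claimRef:
    namespace: staging
    name: data

The PersistentVolumeClaim named data in the staging namespace then binds to this volume, and the production namespace gets its own PV/PVC pair pointing at the production disk — so the RC YAML can stay identical and just reference the claim by name.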
###Instruction: I'm sharing the same cluster for 2 namespaces: staging and production. The only differences among the two namespaces are: Volumes mounted to certain pods (separate persistence between staging and production, obviously!) A couple of web-URLs for relative addressing A couple of IPs to databases used for sophisticated persistence I have managed to address (2) and (3) as follows, so as to maintain a single YAML file for all ReplicationControllers: Use ConfigMaps local to a namespace to define any configuration that is passed via environment variables into the pods Use Services with Endpoints to handle a DNS entry pointing to different internal IPs However, I'm unable to find a satisfactory way to have a reference for a gcePersistentDisk's pdName - I can't seem to use a ConfigMap, hence a little stumped. What would be the appropriate way to go about this? The best alternative seems to be to maintain 2 separate YAML files with different strings, but this has a code-smell as it is violating DRY. Also, any constructive commentary on the rest of my setup as mentioned above is highly appreciated :-) ### Response: You could probably create one PersistentVolumeClaim in each namespace. Take a look at Can a PVC be bound to a specific PV? on how to "pre-bind" PersistentVolumes to PersistentVolumeClaims. Might not be an ideal solution but it probably works 'till PVCs support label selectors.
I'm sharing the same cluster for 2 namespaces: staging and production. The only differences among the two namespaces are: Volumes mounted to certain pods (separate persistence between staging and production, obviously!) A couple of web-URLs for relative addressing A couple of IPs to databases used for sophisticated persistence I have managed to address (2) and (3) as follows, so as to maintain a single YAML file for all ReplicationControllers: Use ConfigMaps local to a namespace to define any configuration that is passed via environment variables into the pods Use Services with Endpoints to handle a DNS entry pointing to different internal IPs However, I'm unable to find a satisfactory way to have a reference for a gcePersistentDisk's pdName - I can't seem to use a ConfigMap, hence a little stumped. What would be the appropriate way to go about this? The best alternative seems to be to maintain 2 separate YAML files with different strings, but this has a code-smell as it is violating DRY. Also, any constructive commentary on the rest of my setup as mentioned above is highly appreciated :-)
That error isn't caused by conflicting libraries; it is a conflicting flag (log_dir). You're registering a "--log_dir" flag, and the glog library used by Kubernetes also registers a log_dir flag. This is a general problem with libraries that register flags during package init, so unfortunately vendoring won't change anything. You might be able to work around this by manipulating the flag.CommandLine global variable to point to a different flag.FlagSet when you import your log library or Kubernetes, but that will be tricky since it depends on import ordering.
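If you don't strictly need your flag to be called log_dir, the simplest way out — an alternative not covered above, sketched here with a hypothetical flag variable — is to rename your own flag so it no longer collides with glog's:

package main

import "flag"

// Renamed from "log_dir" to avoid clashing with the --log_dir flag that the
// vendored glog inside k8s.io/kubernetes registers during package init.
var logDir = flag.String("app_log_dir", "/var/log/myapp", "directory for application log files")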
###Instruction: I am using glog flag log_dir in my project. Recently I imported kubernetes library and started getting this runtime panic panic: ./aaa.test flag redefined: log_dir May 16 23:51:35 ecmdev03-core01 docker[26867]: goroutine 1 [running]: May 16 23:51:35 ecmdev03-core01 docker[26867]: panic(0x15ebc60, 0xc8201aae90) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/runtime/panic.go:464 +0x3e6 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.(*FlagSet).Var(0xc8200160c0, 0x7f561118c1c0, 0xc8201aae40, 0x1bddd70, 0x7, 0x1d75860, 0x2f) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:776 +0x454 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.(*FlagSet).StringVar(0xc8200160c0, 0xc8201aae40, 0x1bddd70, 0x7, 0x0, 0x0, 0x1d75860, 0x2f) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:679 +0xc7 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.(*FlagSet).String(0xc8200160c0, 0x1bddd70, 0x7, 0x0, 0x0, 0x1d75860, 0x2f, 0xc8201aae30) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:692 +0x83 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.String(0x1bddd70, 0x7, 0x0, 0x0, 0x1d75860, 0x2f, 0xba3950) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:699 +0x5f May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/vendor/github.com/golang/glog.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog_file.go:41 +0x13e May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/pkg/labels.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/pkg/labels/selector.go:810 +0x6b May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/pkg/api/unversioned.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/pkg/api/unversioned/well_known_labels.go:30 +0x6f May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/pkg/api.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/pkg/api/types.go:2731 +0x64 It looks like glog library which is vendored by k8s is conflicting with the one that I am using. Is introducing vendor directory a solution for this problem? Should I use vendoring libraries like glide, govendor, gb etc? If yes, which one is preferred? ### Response: That error doesn't have to do with conflicting libraries, it is a conflicting flag (log_dir). It means you're adding a "--log_dir" flag, and the glog library used by kubernetes also has a log_dir flag. This is a problem with adding flags in libraries during package init. Unfortunately vendoring won't change anything. You might be able to work around this by manipulating the flag.CommandLine global variable to point to a different flag.FlagSet when you import your log library or kubernetes, but that will be tricky since it depends on import ordering.
I am using glog flag log_dir in my project. Recently I imported kubernetes library and started getting this runtime panic panic: ./aaa.test flag redefined: log_dir May 16 23:51:35 ecmdev03-core01 docker[26867]: goroutine 1 [running]: May 16 23:51:35 ecmdev03-core01 docker[26867]: panic(0x15ebc60, 0xc8201aae90) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/runtime/panic.go:464 +0x3e6 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.(*FlagSet).Var(0xc8200160c0, 0x7f561118c1c0, 0xc8201aae40, 0x1bddd70, 0x7, 0x1d75860, 0x2f) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:776 +0x454 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.(*FlagSet).StringVar(0xc8200160c0, 0xc8201aae40, 0x1bddd70, 0x7, 0x0, 0x0, 0x1d75860, 0x2f) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:679 +0xc7 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.(*FlagSet).String(0xc8200160c0, 0x1bddd70, 0x7, 0x0, 0x0, 0x1d75860, 0x2f, 0xc8201aae30) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:692 +0x83 May 16 23:51:35 ecmdev03-core01 docker[26867]: flag.String(0x1bddd70, 0x7, 0x0, 0x0, 0x1d75860, 0x2f, 0xba3950) May 16 23:51:35 ecmdev03-core01 docker[26867]: /usr/local/go/src/flag/flag.go:699 +0x5f May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/vendor/github.com/golang/glog.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog_file.go:41 +0x13e May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/pkg/labels.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/pkg/labels/selector.go:810 +0x6b May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/pkg/api/unversioned.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/pkg/api/unversioned/well_known_labels.go:30 +0x6f May 16 23:51:35 ecmdev03-core01 docker[26867]: k8s.io/kubernetes/pkg/api.init() May 16 23:51:35 ecmdev03-core01 docker[26867]: /src/ecm_infra/go/src/k8s.io/kubernetes/pkg/api/types.go:2731 +0x64 It looks like glog library which is vendored by k8s is conflicting with the one that I am using. Is introducing vendor directory a solution for this problem? Should I use vendoring libraries like glide, govendor, gb etc? If yes, which one is preferred?
Selector directives in Deployments require you to use a sub-field of either matchLabels or matchExpressions, so in my case I need to make use of matchLabels: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis spec: replicas: 3 selector: matchLabels: name: redis template: metadata: labels: name: redis spec: containers: - name: redis image: kubernetes/redis:v1 ports: - containerPort: 6379 resources: limits: cpu: "0.1" volumeMounts: - mountPath: /redis-master-data name: data volumes: - name: data emptyDir: {}
###Instruction: Using Kubernetes 1.2.4, why does my below Deployment definition (redis.yaml) cause the following error? $ kubectl apply -f redis.yaml error validating "redis.yaml": error validating data: found invalid field name for v1beta1.LabelSelector; if you choose to ignore these errors, turn validation off with --validate=false redis.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis spec: replicas: 3 selector: name: redis template: metadata: labels: name: redis spec: containers: - name: redis image: kubernetes/redis:v1 ports: - containerPort: 6379 resources: limits: cpu: "0.1" volumeMounts: - mountPath: /redis-master-data name: data volumes: - name: data emptyDir: {} ### Response: Selector directives in Deployments require you to use a sub-field of either matchLabels or matchExpressions, so in my case I need to make use of matchLabels: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis spec: replicas: 3 selector: matchLabels: name: redis template: metadata: labels: name: redis spec: containers: - name: redis image: kubernetes/redis:v1 ports: - containerPort: 6379 resources: limits: cpu: "0.1" volumeMounts: - mountPath: /redis-master-data name: data volumes: - name: data emptyDir: {}
Using Kubernetes 1.2.4, why does my below Deployment definition (redis.yaml) cause the following error? $ kubectl apply -f redis.yaml error validating "redis.yaml": error validating data: found invalid field name for v1beta1.LabelSelector; if you choose to ignore these errors, turn validation off with --validate=false redis.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis spec: replicas: 3 selector: name: redis template: metadata: labels: name: redis spec: containers: - name: redis image: kubernetes/redis:v1 ports: - containerPort: 6379 resources: limits: cpu: "0.1" volumeMounts: - mountPath: /redis-master-data name: data volumes: - name: data emptyDir: {}
You can specify the desired port as the nodePort in the yaml spec that you use to create the service. In this case, where the yaml file you linked to defines the service as: - kind: Service apiVersion: v1 metadata: labels: app: kubernetes-dashboard kubernetes.io/cluster-service: "true" name: kubernetes-dashboard namespace: kube-system spec: type: NodePort ports: - port: 80 targetPort: 9090 selector: app: kubernetes-dashboard You would want to define it as below, assuming your desired port number is 33333: - kind: Service apiVersion: v1 metadata: labels: app: kubernetes-dashboard kubernetes.io/cluster-service: "true" name: kubernetes-dashboard namespace: kube-system spec: type: NodePort ports: - port: 80 targetPort: 9090 nodePort: 33333 selector: app: kubernetes-dashboard Note that the nodePort must fall within the cluster's service node port range (30000-32767 by default), so a value like 33333 is only accepted if the apiserver's --service-node-port-range has been widened; otherwise pick a port inside the default range.
###Instruction: I was just wondering how to manually set the external endpoint used by the Kubernetes web dashboard. After creating the namespace kube-system, I ran the following: kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml Is there a flag I can use to specify which tcp port to use for external access? As far as I can tell it's just randomly assigning one. I've looked through the documentation but I'm having a hard time finding a solution. Any help would be appreciated. ### Response: You can specify the desired port as the nodePort in the yaml spec that you use to create the service. In this case, where the yaml file you linked to defines the service as: - kind: Service apiVersion: v1 metadata: labels: app: kubernetes-dashboard kubernetes.io/cluster-service: "true" name: kubernetes-dashboard namespace: kube-system spec: type: NodePort ports: - port: 80 targetPort: 9090 selector: app: kubernetes-dashboard You would want to define it as below, assuming your desired port number is 33333: - kind: Service apiVersion: v1 metadata: labels: app: kubernetes-dashboard kubernetes.io/cluster-service: "true" name: kubernetes-dashboard namespace: kube-system spec: type: NodePort ports: - port: 80 targetPort: 9090 nodePort: 33333 selector: app: kubernetes-dashboard
I was just wondering how to manually set the external endpoint used by the Kubernetes web dashboard. After creating the namespace kube-system, I ran the following: kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml Is there a flag I can use to specify which tcp port to use for external access? As far as I can tell it's just randomly assigning one. I've looked through the documentation but I'm having a hard time finding a solution. Any help would be appreciated.
As far as I know there isn't a way to directly revoke certificates via a CRL. However, what does work, and what we are currently using, is ABAC policies to identify users (set via the Common Name of a certificate), and whether they have access to a given resource on Kubernetes. As an example, say you have a user called "random". You would generate a Client Certificate for them from your given Certificate Authority, with a Common Name of "random". From there, you can have an ABAC policy file (a csv file with each line being a bit of JSON), with permissions set for user "random" that would provide them with a certain level of access to the Kubernetes API. You can have them have access to everything or certain namespaces or other API parameters. If you need to revoke permissions, you simply delete that user from the ABAC policy file. We've tested this, and it works well. The unfortunate thing, I will say, is you have to restart the Kubernetes API service for those changes to take effect, so there may be a few seconds of downtime for this change to occur. Obviously in a development environment this isn't a big deal, but on production you may need to schedule time for users to be added. Hopefully in the future a simple "kube-apiserver reload" will allow for a re-read of that ABAC policy file. One final thing to note: when using Client Certificates for ABAC authentication, you will need to set permissions for users INDIVIDUALLY. Unlike with auth tokens with ABAC, you cannot set Client Certificate users in "groups." Something that caused us headaches, so figured it was worth passing on. :) Hope this helps!
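For illustration, one line of such a policy file might look like the following — a sketch only, since the exact field names vary between Kubernetes releases, so check the ABAC docs for your version:

{"user": "random", "namespace": "dev", "resource": "pods", "readonly": true}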
###Instruction: kube-apiserver does not seem to provide an option to use a certification revocation list (CRL). Is there a way to revoke a client certificate if it's lost or not used anymore? ### Response: As far as I know there isn't a way to directly revoke certificates via a CRL. However, what does work, and what we are currently using, is ABAC policies to identify users (set via the Common Name of a certificate), and whether they have access to a given resource on Kubernetes. As an example, say you have a user called "random". You would generate a Client Certificate for them from your given Certificate Authority, with a Common Name of "random". From there, you can have an ABAC policy file (a csv file with each line being a bit of JSON), with permissions set for user "random" that would provide them with a certain level of access to the Kubernetes API. You can have them have access to everything or certain namespaces or other API parameters. If you need to revoke permissions, you simply delete that user from the ABAC policy file. We've tested this, and it works well. The unfortunate thing, I will say, is you have to restart the Kubernetes API service for those changes to take effect, so there may be a few seconds of downtime for this change to occur. Obviously in a development environment this isn't a big deal, but on production you may need to schedule time for users to be added. Hopefully in the future a simple "kube-apiserver reload" will allow for a re-read of that ABAC policy file. One final thing to note: when using Client Certificates for ABAC authentication, you will need to set permissions for users INDIVIDUALLY. Unlike with auth tokens with ABAC, you cannot set Client Certificate users in "groups." Something that caused us headaches, so figured it was worth passing on. :) Hope this helps!
kube-apiserver does not seem to provide an option to use a certification revocation list (CRL). Is there a way to revoke a client certificate if it's lost or not used anymore?
I fixed the error. The problem was in the service itself: it needs to listen on 0.0.0.0 instead of 127.0.0.1 or localhost, so that it accepts connections on every available network interface. More details on the difference between 0.0.0.0 and 127.0.0.1: https://serverfault.com/questions/78048/whats-the-difference-between-ip-address-0-0-0-0-and-127-0-0-1
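In the Suave config from the question, that's a one-line change — bind to IPAddress.Any (0.0.0.0) instead of IPAddress.Loopback (a sketch, reusing the question's own names):

let config =
    { defaultConfig with
        bindings = [ HttpBinding.mk HTTP IPAddress.Any port ]  // was IPAddress.Loopback
        bufferSize = 8192
        maxOps = 10000 }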
###Instruction: I have created the replication controller in Kubernetes with the following configuration: { "kind":"ReplicationController", "apiVersion":"v1", "metadata":{ "name":"guestbook", "labels":{ "app":"guestbook" } }, "spec":{ "replicas":1, "selector":{ "app":"guestbook" }, "template":{ "metadata":{ "labels":{ "app":"guestbook" } }, "spec":{ "containers":[ { "name":"guestbook", "image":"username/fsharp-microservice:v1", "ports":[ { "name":"http-server", "containerPort":3000 } ], "command": ["fsharpi", "/home/SuaveServer.fsx"] } ] } } } } The code of the service that is running on the port 3000 is basically this: #r "Suave.dll" #r "Mono.Posix.dll" open Suave open Suave.Http open Suave.Successful open System open System.Net open System.Threading open System.Diagnostics open Mono.Unix open Mono.Unix.Native let app = OK "PONG" let port = 3000us let config = { defaultConfig with bindings = [ HttpBinding.mk HTTP IPAddress.Loopback port ] bufferSize = 8192 maxOps = 10000 } open System.Text.RegularExpressions let cts = new CancellationTokenSource() let listening, server = startWebServerAsync config app Async.Start(server, cts.Token) Console.WriteLine("Server should be started at this point") Console.ReadLine() After I created the service I can see the pod: $kubectl create -f guestbook.json replicationcontroller "guestbook" created $ kubectl get pods NAME READY STATUS RESTARTS AGE guestbook-0b9py 1/1 Running 0 32m I want to access my web service and create the service with type=LoadBalancer that will expose the 3000 port with the following configuration: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"guestbook", "labels":{ "app":"guestbook" } }, "spec":{ "ports": [ { "port":3000, "targetPort":"http-server" } ], "selector":{ "app":"guestbook" }, "type": "LoadBalancer" } } Here is the result: $ kubectl create -f guestbook-service.json service "guestbook" created $ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE guestbook 10.0.82.40 3000/TCP 7s kubernetes 10.0.0.1 <none> 443/TCP 3h $ kubectl describe services Name: guestbook Namespace: default Labels: app=guestbook Selector: app=guestbook Type: LoadBalancer IP: 10.0.82.40 LoadBalancer Ingress: a43eee4a008cf11e68f210a4fa30c03e-1918213320.us-west-2.elb.amazonaws.com Port: <unset> 3000/TCP NodePort: <unset> 30877/TCP Endpoints: 10.244.1.6:3000 Session Affinity: None Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 18s 18s 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer 17s 17s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer Name: kubernetes Namespace: default Labels: component=apiserver,provider=kubernetes Selector: <none> Type: ClusterIP IP: 10.0.0.1 Port: https 443/TCP Endpoints: 172.20.0.9:443 Session Affinity: None No events. The "External IP" column is empty I have tried to access the service using "LoadBalancer Ingress" but DNS name can't be resolved. If I check in the AWS console - load balancer is created (but in the details panel there is a message "0 of 2 instances in service" because of health-checks). I have also tried to expose my RC using kubectl expose --type=Load-Balancer, but result is the same. What is the problem? ### Response: I fixed the error. The thing was in the actual service. It needs to be listening on 0.0.0.0 instead of 127.0.0.1 or localhost. That way it will listen on every available network interface. 
More details on the difference between 0.0.0.0 and 127.0.0.1: https://serverfault.com/questions/78048/whats-the-difference-between-ip-address-0-0-0-0-and-127-0-0-1
I have created the replication controller in Kubernetes with the following configuration: { "kind":"ReplicationController", "apiVersion":"v1", "metadata":{ "name":"guestbook", "labels":{ "app":"guestbook" } }, "spec":{ "replicas":1, "selector":{ "app":"guestbook" }, "template":{ "metadata":{ "labels":{ "app":"guestbook" } }, "spec":{ "containers":[ { "name":"guestbook", "image":"username/fsharp-microservice:v1", "ports":[ { "name":"http-server", "containerPort":3000 } ], "command": ["fsharpi", "/home/SuaveServer.fsx"] } ] } } } } The code of the service that is running on the port 3000 is basically this: #r "Suave.dll" #r "Mono.Posix.dll" open Suave open Suave.Http open Suave.Successful open System open System.Net open System.Threading open System.Diagnostics open Mono.Unix open Mono.Unix.Native let app = OK "PONG" let port = 3000us let config = { defaultConfig with bindings = [ HttpBinding.mk HTTP IPAddress.Loopback port ] bufferSize = 8192 maxOps = 10000 } open System.Text.RegularExpressions let cts = new CancellationTokenSource() let listening, server = startWebServerAsync config app Async.Start(server, cts.Token) Console.WriteLine("Server should be started at this point") Console.ReadLine() After I created the service I can see the pod: $kubectl create -f guestbook.json replicationcontroller "guestbook" created $ kubectl get pods NAME READY STATUS RESTARTS AGE guestbook-0b9py 1/1 Running 0 32m I want to access my web service and create the service with type=LoadBalancer that will expose the 3000 port with the following configuration: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"guestbook", "labels":{ "app":"guestbook" } }, "spec":{ "ports": [ { "port":3000, "targetPort":"http-server" } ], "selector":{ "app":"guestbook" }, "type": "LoadBalancer" } } Here is the result: $ kubectl create -f guestbook-service.json service "guestbook" created $ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE guestbook 10.0.82.40 3000/TCP 7s kubernetes 10.0.0.1 <none> 443/TCP 3h $ kubectl describe services Name: guestbook Namespace: default Labels: app=guestbook Selector: app=guestbook Type: LoadBalancer IP: 10.0.82.40 LoadBalancer Ingress: a43eee4a008cf11e68f210a4fa30c03e-1918213320.us-west-2.elb.amazonaws.com Port: <unset> 3000/TCP NodePort: <unset> 30877/TCP Endpoints: 10.244.1.6:3000 Session Affinity: None Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 18s 18s 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer 17s 17s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer Name: kubernetes Namespace: default Labels: component=apiserver,provider=kubernetes Selector: <none> Type: ClusterIP IP: 10.0.0.1 Port: https 443/TCP Endpoints: 172.20.0.9:443 Session Affinity: None No events. The "External IP" column is empty I have tried to access the service using "LoadBalancer Ingress" but DNS name can't be resolved. If I check in the AWS console - load balancer is created (but in the details panel there is a message "0 of 2 instances in service" because of health-checks). I have also tried to expose my RC using kubectl expose --type=Load-Balancer, but result is the same. What is the problem?
You should replace PROJECT_ID with the alphanumeric project ID (e.g. "my-project-123"), not the numeric project number.
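As a quick sanity check (assuming the gcloud SDK is already configured for the right account), you can look up the project ID and retry the push; "my-project-123" below is just a placeholder:
$ gcloud config list project          # prints the currently configured project ID
$ gcloud docker push gcr.io/my-project-123/hello-node:v1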
###Instruction: I'm following the hello node Kubernetes tutorial, but when I get to the step of pushing my image to Google Cloud with the command: gcloud docker push gcr.io/PROJECT_ID/hello-node:v1 I get the following error message: name invalid: Invalid repository name: gcr.io/PROJECT_ID/hello-node (Just to avoid double-checking: I replaced PROJECT_ID with the actual number of the project.) I was able to push my image to a bucket and run it, but then my image doesn't show up in Container Engine > Container Registry. How can this be fixed? ### Response: You should replace PROJECT_ID with the alphanumeric project ID (e.g. "my-project-123"), not the numeric project number.
I'm following the hello node Kubernetes tutorial, but when I get to the step of pushing my image to Google Cloud with the command: gcloud docker push gcr.io/PROJECT_ID/hello-node:v1 I get the following error message: name invalid: Invalid repository name: gcr.io/PROJECT_ID/hello-node (Just to avoid double-checking: I replaced PROJECT_ID with the actual number of the project.) I was able to push my image to a bucket and run it, but then my image doesn't show up in Container Engine > Container Registry. How can this be fixed?
You can look at all services in all namespaces using: kubectl get services --all-namespaces It's likely that a service in the kube-system namespace has the 10.7.240.2 address.
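If it helps, here is a rough one-liner (it assumes a kubectl new enough to support jsonpath output) that dumps every allocated cluster IP across all namespaces, so you can pick an address that is genuinely free:
$ kubectl get services --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.clusterIP}{"\n"}{end}' | sort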
###Instruction: I am trying to assign clusterIP to a Service in the kube cluster. I know the IP range, so that I can form new IPs from that IP range. I am implementing proxy within containers. So I need to know the clusterIP of the depending services, so that I can do a proxy from the source container. For this purpose I am specifying the clusterIP at the time of creating a service. Now the challenge is that I should allocate an IP that is not used by any other service within that cluster, else it will fail. What I did is the following. I got the list of Services and their IPs in that kube cluster and formed a new IP and made sure that the new IP is not in the pool of IPs already in use and then allocated it to the new service. It works fine in most of the cases. Now I am facing a strange issue here. In some cases it fails saying that "Service 'mongodb-4-xmxrv-svc' is invalid: spec.clusterIP: Invalid value: '10.7.240.2': provided IP is already allocated". But that IP is not used by any of the Services within that kube cluster. I am not sure where that IP is used. Can someone help me out to list the IPs that are used within that kube cluster, so that I can form a new IP and allocate it to the Service? ### Response: You can look at all services in all namespaces using: kubectl get services --all-namespaces It's likely that a service in the kube-system namespace has the 10.7.240.2 address.
I am trying to assign clusterIP to a Service in the kube cluster. I know the IP range, so that I can form new IPs from that IP range. I am implementing proxy within containers. So I need to know the clusterIP of the depending services, so that I can do a proxy from the source container. For this purpose I am specifying the clusterIP at the time of creating a service. Now the challenge is that I should allocate an IP that is not used by any other service within that cluster, else it will fail. What I did is the following. I got the list of Services and their IPs in that kube cluster and formed a new IP and made sure that the new IP is not in the pool of IPs already in use and then allocated it to the new service. It works fine in most of the cases. Now I am facing a strange issue here. In some cases it fails saying that "Service 'mongodb-4-xmxrv-svc' is invalid: spec.clusterIP: Invalid value: '10.7.240.2': provided IP is already allocated". But that IP is not used by any of the Services within that kube cluster. I am not sure where that IP is used. Can someone help me out to list the IPs that are used within that kube cluster, so that I can form a new IP and allocate it to the Service?
I found out that this has to do with the protocol upgrade Kubernetes exec performs (it uses SPDY). The problem went away when I upgraded to a curl version newer than 7.36 and installed nghttp2 on the server. After upgrading curl I was able to get a response by adding the upgrade headers: curl -H "Connection: upgrade" -H "Upgrade: SPDY/3.1" {master url:port/pod/exec}
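A quick way to check the client-side prerequisites mentioned above (output details vary by build, so treat this as a rough guide):
$ curl --version                                   # first line shows the version; it should be newer than 7.36
$ curl --version | grep -i -e nghttp2 -e http2     # non-empty output suggests curl was built against nghttp2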
###Instruction: I'm using the fabric8 Kubernetes Java Client and I'm accessing Kubernetes over HTTP. I followed the example from fabric8, but I get the following error: Expected HTTP 100 but received 400 instead, Bad Request. What do I need to do to upgrade my connection to http/2? ### Response: I found out that this has to do with the protocol upgrade Kubernetes exec performs (it uses SPDY). The problem went away when I upgraded to a curl version newer than 7.36 and installed nghttp2 on the server. After upgrading curl I was able to get a response by adding the upgrade headers: curl -H "Connection: upgrade" -H "Upgrade: SPDY/3.1" {master url:port/pod/exec}
I'm using the fabric8 Kubernetes Java Client and I'm accessing Kubernetes over HTTP. I followed the example from fabric8, but I get the following error: Expected HTTP 100 but received 400 instead, Bad Request. What do I need to do to upgrade my connection to http/2?
Adding the "--rm" flag to the original command resulted in the Job (and Pod) being deleted at the completion of the interactive session, which is what I was after. The command then becomes: kubectl run -i --tty --rm proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
###Instruction: I'm trying to run an interactive Pod (container) in Kubernetes that does not create a Job or Deployment and deletes itself after completing. The purpose of the container is to give our developers an easy way to access our database, which doesn't have a public IP address. Currently, we are using this command: kubectl run -i --tty proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres which works the first time you run it, however, after exiting the session if you try to run the above again to connect to the database again, we get: Error from server: jobs.extensions "proxy-pgclient" already exists Forcing the developer to delete the job with: kubectl delete job proxy-pgclient before they can run the command and connect again. Is there any way of starting up an interactive container (Pod) in Kubernetes without creating a Job or Deployment object and having that container be deleted when the interactive session is closed? ### Response: Adding the "--rm" flag to the original command resulted in the Job (and Pod) being deleted at the completion of the interactive session, which is what I was after. The command then becomes: kubectl run -i --tty --rm proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres
I'm trying to run an interactive Pod (container) in Kubernetes that does not create a Job or Deployment and deletes itself after completing. The purpose of the container is to give our developers an easy way to access our database, which doesn't have a public IP address. Currently, we are using this command: kubectl run -i --tty proxy-pgclient --image=private-registry.com/pgclient --restart=Never --env="PGPASSWORD=foobar" -- psql -h dbhost.local -p 5432 -U pg_admin -W postgres which works the first time you run it, however, after exiting the session if you try to run the above again to connect to the database again, we get: Error from server: jobs.extensions "proxy-pgclient" already exists Forcing the developer to delete the job with: kubectl delete job proxy-pgclient before they can run the command and connect again. Is there any way of starting up an interactive container (Pod) in Kubernetes without creating a Job or Deployment object and having that container be deleted when the interactive session is closed?
Are you able to use other kubectl commands such as kubectl get pods? This sounds like the cluster isn't set up correctly or there's some network issue. Would you also try kubectl config view to see how your cluster is configured? More specifically, look for current-context and clusters fields to see if your cluster is configured as expected.
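A few commands that usually help narrow this down (the cluster name and zone below are placeholders):
$ kubectl config current-context                                        # which context kubectl is actually using
$ kubectl config view --minify                                          # only the active context's cluster, user and server URL
$ gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE    # re-fetches credentials for a GKE cluster if the kubeconfig entry is missing or stale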
###Instruction: My container engine cluster has a red exclamation mark next to its name in the Google cloud console overview of the container engine. A tooltip says "The cluster has a problem. Click the cluster name for details." Once I click the name I don't get any more information, it's just the usual summary. Stackdriver doesn't report anything unusual. No incidents are logged, all pods are marked as healthy but I can't reach my services. Trying to get information or logs via kubectl doesn't work: kubectl cluster-info Unable to connect to the server: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout How can I debug this problem? And what does this cryptic message mean anyway? ### Response: Are you able to use other kubectl commands such as kubectl get pods? This sounds like the cluster isn't set up correctly or there's some network issue. Would you also try kubectl config view to see how your cluster is configured? More specifically, look for current-context and clusters fields to see if your cluster is configured as expected.
My container engine cluster has a red exclamation mark next to its name in the Google cloud console overview of the container engine. A tooltip says "The cluster has a problem. Click the cluster name for details." Once I click the name I don't get any more information, it's just the usual summary. Stackdriver doesn't report anything unusual. No incidents are logged, all pods are marked as healthy but I can't reach my services. Trying to get information or logs via kubectl doesn't work: kubectl cluster-info Unable to connect to the server: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout How can I debug this problem? And what does this cryptic message mean anyway?
It means there are no nodes available in the system for the pods to be scheduled on. Can you provide the output of kubectl get nodes and kubectl describe nodes? Following the steps described in the local cluster doc should give you a single node. If your node is there (it should be) but just not ready, you can look at the log in /tmp/kubelet.log (in the future, if you're not using a local cluster, look for /var/log/kubelet.log instead) to figure out possible causes.
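Given the NodeOutOfDisk condition in the describe output below, a plausible next step is to confirm the condition and free some space; treat the exact kubelet flag name as an assumption to verify against your kubelet version:
$ kubectl describe nodes | grep -A 2 OutOfDisk            # confirm the node condition
$ df -h /                                                  # the root filesystem in the question is 93% full
$ docker rmi $(docker images -q -f dangling=true)          # reclaim space from dangling images
# or start the kubelet with a lower free-space threshold, e.g. --low-diskspace-threshold-mb=128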
###Instruction: I'm new to Kubernetes - I've worked with docker-compose until now (on one machine). Now I want to expend my work to cluster of nodes and to get Kubernetes capabilities (service discovery, load balancing, health check etc). I'm working in local servers (RHEL7) and trying to run my first Kubernetes environment (following this doc) with no lack. I run: hack/local-up-cluster.sh then (In another terminal): cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true cluster/kubectl.sh config set-context local --cluster=local cluster/kubectl.sh config use-context local And: cluster/kubectl.sh create -f run-aii.yaml my run-aii.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: aii spec: replicas: 1 template: metadata: labels: run: aii spec: containers: - name: aii image: localhost:5000/dev/aii ports: - containerPort: 5144 env: - name: KAFKA_IP value: kafka volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /home/aii/core name: core-aii readOnly: true - mountPath: /home/aii/genome name: genome-aii readOnly: true - mountPath: /home/aii/main name: main-aii readOnly: true - name: kafka image: localhost:5000/dev/kafkazoo volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /root/config name: config-data readOnly: true - name: ws image: localhost:5000/dev/ws ports: - containerPort: 3000 volumes: - name: scripts-data hostPath: path: /home/aii/general/infra/script - name: config-data hostPath: path: /home/aii/general/infra/config - name: core-aii hostPath: path: /home/aii/general/core - name: genome-aii hostPath: path: /home/aii/general/genome - name: main-aii hostPath: path: /home/aii/general/main Additional info: [aii@localhost kubernetes]$ cluster/kubectl.sh describe pod aii-4073165096-nkdq6 Name: aii-4073165096-nkdq6 Namespace: default Node: / Labels: pod-template-hash=4073165096,run=aii Status: Pending IP: Controllers: ReplicaSet/aii-4073165096 Containers: aii: Image: localhost:5000/dev/aii Port: 5144/TCP QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: KAFKA_IP: kafka kafka: Image: localhost:5000/dev/kafkazoo Port: QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: ws: Image: localhost:5000/dev/ws Port: 3000/TCP QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: Volumes: scripts-data: Type: HostPath (bare host directory volume) Path: /home/aii/general/infra/script config-data: Type: HostPath (bare host directory volume) Path: /home/aii/general/infra/config core-aii: Type: HostPath (bare host directory volume) Path: /home/aii/general/core genome-aii: Type: HostPath (bare host directory volume) Path: /home/aii/general/genome main-aii: Type: HostPath (bare host directory volume) Path: /home/aii/general/main default-token-hiwwo: Type: Secret (a volume populated by a Secret) SecretName: default-token-hiwwo Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 37s 6s 6 {default-scheduler } Warning FailedScheduling no nodes available to schedule pods docker images: [aii@localhost kubernetes]$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE kube-build build-47381c8eab f221edba30ed 25 hours ago 1.628 GB aii latest 1026cd920723 4 days ago 1.427 GB localhost:5000/dev/aii latest 1026cd920723 4 days ago 1.427 GB registry 2 34bccec54793 4 days ago 171.2 MB localhost:5000/dev/ws latest fa7c5f6ef83a 12 days ago 706.8 MB ws 
latest fa7c5f6ef83a 12 days ago 706.8 MB kafkazoo latest 84c687b0bd74 2 weeks ago 697.7 MB localhost:5000/dev/kafkazoo latest 84c687b0bd74 2 weeks ago 697.7 MB node 4.4 1a93433cee73 2 weeks ago 647 MB gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 2 weeks ago 316.7 MB nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB gcr.io/google_containers/debian-iptables-arm v3 aca727a3023c 5 weeks ago 120.5 MB gcr.io/google_containers/debian-iptables-amd64 v3 49b5e076215b 6 weeks ago 129.4 MB spotify/kafka latest 30d3cef1fe8e 3 months ago 421.6 MB gcr.io/google_containers/kube-cross v1.4.2-1 8d2874b4f7e9 3 months ago 1.551 GB wurstmeister/zookeeper latest dc00f1198a44 4 months ago 468.7 MB centos latest 61b442687d68 5 months ago 196.6 MB centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB hypriot/armhf-busybox latest d7ae69033898 6 months ago 1.267 MB gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB gcr.io/google_containers/kube-registry-proxy 0.3 b86ac3f11a0c 9 months ago 151.2 MB What is the 'no nodes available to schedule pods' means? Where should I configure/define the nodes? where and how should I specify the IPs of physical machines? EDIT: [aii@localhost kubernetes]$ kubectl get nodes NAME STATUS AGE 127.0.0.1 Ready 1m and: [aii@localhost kubernetes]$ kubectl describe nodes Name: 127.0.0.1 Labels: kubernetes.io/hostname=127.0.0.1 CreationTimestamp: Tue, 24 May 2016 09:58:00 +0300 Phase: Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk True Tue, 24 May 2016 09:59:50 +0300 Tue, 24 May 2016 09:58:10 +0300 KubeletOutOfDisk out of disk space Ready True Tue, 24 May 2016 09:59:50 +0300 Tue, 24 May 2016 09:58:10 +0300 KubeletReady kubelet is posting ready status Addresses: 127.0.0.1,127.0.0.1 Capacity: pods: 110 cpu: 4 memory: 8010896Ki System Info: Machine ID: b939b024448040469dfdbd3dd3c3e314 System UUID: 59FF2897-234D-4069-A5D4-B68648FC7D38 Boot ID: 0153b84d-90e1-4fd1-9afa-f4312e89613e Kernel Version: 3.10.0-327.4.5.el7.x86_64 OS Image: Red Hat Enterprise Linux Container Runtime Version: docker://1.10.3 Kubelet Version: v1.2.4 Kube-Proxy Version: v1.2.4 ExternalID: 127.0.0.1 Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- Allocated resources: (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 0 (0%) 0 (0%) 0 (0%) 0 (0%) Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {kube-proxy 127.0.0.1} Normal Starting Starting kube-proxy. 1m 1m 1 {kubelet 127.0.0.1} Normal Starting Starting kubelet. 
1m 1m 1 {kubelet 127.0.0.1} Normal NodeHasSufficientDisk Node 127.0.0.1 status is now: NodeHasSufficientDisk 1m 1m 1 {controllermanager } Normal RegisteredNode Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController 1m 1m 1 {kubelet 127.0.0.1} Normal NodeOutOfDisk Node 127.0.0.1 status is now: NodeOutOfDisk 1m 1m 1 {kubelet 127.0.0.1} Normal NodeReady Node 127.0.0.1 status is now: NodeReady But I got some free space: [aii@localhost kubernetes]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/rhel-root 47G 42G 3.2G 93% / devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 3.7M 3.9G 1% /dev/shm tmpfs 3.9G 17M 3.9G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/mapper/rhel-var 485M 288M 198M 60% /var /dev/sda1 509M 265M 245M 52% /boot tmpfs 783M 44K 783M 1% /run/user/1000 /dev/sr0 56M 56M 0 100% /run/media/aii/VBOXADDITIONS_5.0.18_106667 How much disk space does it need? (I'm working in VM so I don't have much) ### Response: It means there's no available nodes in the system for the pods to be scheduled on. Can you provide the output of kubectl get nodes and kubectl describe nodes? Following steps descirbed in the local cluster doc should give you a single node. If your node is there (it should be) but just not ready, you can look at the log in /tmp/kubelet.log (in the future, if you're not using local cluster, look for /var/log/kubelet.log instead) to figure out possible causes.
I'm new to Kubernetes - I've worked with docker-compose until now (on one machine). Now I want to expend my work to cluster of nodes and to get Kubernetes capabilities (service discovery, load balancing, health check etc). I'm working in local servers (RHEL7) and trying to run my first Kubernetes environment (following this doc) with no lack. I run: hack/local-up-cluster.sh then (In another terminal): cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true cluster/kubectl.sh config set-context local --cluster=local cluster/kubectl.sh config use-context local And: cluster/kubectl.sh create -f run-aii.yaml my run-aii.yaml: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: aii spec: replicas: 1 template: metadata: labels: run: aii spec: containers: - name: aii image: localhost:5000/dev/aii ports: - containerPort: 5144 env: - name: KAFKA_IP value: kafka volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /home/aii/core name: core-aii readOnly: true - mountPath: /home/aii/genome name: genome-aii readOnly: true - mountPath: /home/aii/main name: main-aii readOnly: true - name: kafka image: localhost:5000/dev/kafkazoo volumeMounts: - mountPath: /root/script name: scripts-data readOnly: true - mountPath: /root/config name: config-data readOnly: true - name: ws image: localhost:5000/dev/ws ports: - containerPort: 3000 volumes: - name: scripts-data hostPath: path: /home/aii/general/infra/script - name: config-data hostPath: path: /home/aii/general/infra/config - name: core-aii hostPath: path: /home/aii/general/core - name: genome-aii hostPath: path: /home/aii/general/genome - name: main-aii hostPath: path: /home/aii/general/main Additional info: [aii@localhost kubernetes]$ cluster/kubectl.sh describe pod aii-4073165096-nkdq6 Name: aii-4073165096-nkdq6 Namespace: default Node: / Labels: pod-template-hash=4073165096,run=aii Status: Pending IP: Controllers: ReplicaSet/aii-4073165096 Containers: aii: Image: localhost:5000/dev/aii Port: 5144/TCP QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: KAFKA_IP: kafka kafka: Image: localhost:5000/dev/kafkazoo Port: QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: ws: Image: localhost:5000/dev/ws Port: 3000/TCP QoS Tier: cpu: BestEffort memory: BestEffort Environment Variables: Volumes: scripts-data: Type: HostPath (bare host directory volume) Path: /home/aii/general/infra/script config-data: Type: HostPath (bare host directory volume) Path: /home/aii/general/infra/config core-aii: Type: HostPath (bare host directory volume) Path: /home/aii/general/core genome-aii: Type: HostPath (bare host directory volume) Path: /home/aii/general/genome main-aii: Type: HostPath (bare host directory volume) Path: /home/aii/general/main default-token-hiwwo: Type: Secret (a volume populated by a Secret) SecretName: default-token-hiwwo Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 37s 6s 6 {default-scheduler } Warning FailedScheduling no nodes available to schedule pods docker images: [aii@localhost kubernetes]$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE kube-build build-47381c8eab f221edba30ed 25 hours ago 1.628 GB aii latest 1026cd920723 4 days ago 1.427 GB localhost:5000/dev/aii latest 1026cd920723 4 days ago 1.427 GB registry 2 34bccec54793 4 days ago 171.2 MB localhost:5000/dev/ws latest fa7c5f6ef83a 12 days ago 706.8 MB ws latest fa7c5f6ef83a 
12 days ago 706.8 MB kafkazoo latest 84c687b0bd74 2 weeks ago 697.7 MB localhost:5000/dev/kafkazoo latest 84c687b0bd74 2 weeks ago 697.7 MB node 4.4 1a93433cee73 2 weeks ago 647 MB gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 2 weeks ago 316.7 MB nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB gcr.io/google_containers/debian-iptables-arm v3 aca727a3023c 5 weeks ago 120.5 MB gcr.io/google_containers/debian-iptables-amd64 v3 49b5e076215b 6 weeks ago 129.4 MB spotify/kafka latest 30d3cef1fe8e 3 months ago 421.6 MB gcr.io/google_containers/kube-cross v1.4.2-1 8d2874b4f7e9 3 months ago 1.551 GB wurstmeister/zookeeper latest dc00f1198a44 4 months ago 468.7 MB centos latest 61b442687d68 5 months ago 196.6 MB centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB hypriot/armhf-busybox latest d7ae69033898 6 months ago 1.267 MB gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB gcr.io/google_containers/kube-registry-proxy 0.3 b86ac3f11a0c 9 months ago 151.2 MB What is the 'no nodes available to schedule pods' means? Where should I configure/define the nodes? where and how should I specify the IPs of physical machines? EDIT: [aii@localhost kubernetes]$ kubectl get nodes NAME STATUS AGE 127.0.0.1 Ready 1m and: [aii@localhost kubernetes]$ kubectl describe nodes Name: 127.0.0.1 Labels: kubernetes.io/hostname=127.0.0.1 CreationTimestamp: Tue, 24 May 2016 09:58:00 +0300 Phase: Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk True Tue, 24 May 2016 09:59:50 +0300 Tue, 24 May 2016 09:58:10 +0300 KubeletOutOfDisk out of disk space Ready True Tue, 24 May 2016 09:59:50 +0300 Tue, 24 May 2016 09:58:10 +0300 KubeletReady kubelet is posting ready status Addresses: 127.0.0.1,127.0.0.1 Capacity: pods: 110 cpu: 4 memory: 8010896Ki System Info: Machine ID: b939b024448040469dfdbd3dd3c3e314 System UUID: 59FF2897-234D-4069-A5D4-B68648FC7D38 Boot ID: 0153b84d-90e1-4fd1-9afa-f4312e89613e Kernel Version: 3.10.0-327.4.5.el7.x86_64 OS Image: Red Hat Enterprise Linux Container Runtime Version: docker://1.10.3 Kubelet Version: v1.2.4 Kube-Proxy Version: v1.2.4 ExternalID: 127.0.0.1 Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- Allocated resources: (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 0 (0%) 0 (0%) 0 (0%) 0 (0%) Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {kube-proxy 127.0.0.1} Normal Starting Starting kube-proxy. 1m 1m 1 {kubelet 127.0.0.1} Normal Starting Starting kubelet. 
1m 1m 1 {kubelet 127.0.0.1} Normal NodeHasSufficientDisk Node 127.0.0.1 status is now: NodeHasSufficientDisk 1m 1m 1 {controllermanager } Normal RegisteredNode Node 127.0.0.1 event: Registered Node 127.0.0.1 in NodeController 1m 1m 1 {kubelet 127.0.0.1} Normal NodeOutOfDisk Node 127.0.0.1 status is now: NodeOutOfDisk 1m 1m 1 {kubelet 127.0.0.1} Normal NodeReady Node 127.0.0.1 status is now: NodeReady But I got some free space: [aii@localhost kubernetes]$ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/rhel-root 47G 42G 3.2G 93% / devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 3.7M 3.9G 1% /dev/shm tmpfs 3.9G 17M 3.9G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/mapper/rhel-var 485M 288M 198M 60% /var /dev/sda1 509M 265M 245M 52% /boot tmpfs 783M 44K 783M 1% /run/user/1000 /dev/sr0 56M 56M 0 100% /run/media/aii/VBOXADDITIONS_5.0.18_106667 How much disk space does it need? (I'm working in VM so I don't have much)
EDIT: I have figured out the magic combination of flags to get this working. 1) Stop docker and the bootstrap docker processes: systemctl stop docker && pkill -f "docker-bootstrap" && sleep 10 2) Delete the iptables rules: iptables -F && iptables -t nat -F 3) Run your docker daemon with the --ip-masq=false option 4) Run flanneld with the --ip-masq=true option For me this makes all three of pod->pod, pod->ext, and ext->service->pod work perfectly. Hope this works for the rest of you! (Preserving the initial response, which described the symptoms) I am having the same problem with the guide. The slave replicates just fine, but the master can't properly check the slave status because the connection is coming from the dot-zero IP. Proto Recv-Q Send-Q Local Address Foreign Address State Example slave->master connection from the master's perspective (shows the wrong IP for the slave): tcp 0 0 redis-master:6379 10.1.37.0:51674 ESTABLISHED Example slave->master connection from the slave's perspective (shows the right IP of the master): tcp 0 0 redis-03fdy:51674 10.1.90.2:6379 ESTABLISHED Here are the redis server commands from ps. master: root 12 0.1 0.2 26876 2352 ? Sl 10:42 0:02 redis-server 10.1.90.2:6379 slave: root 18 0.1 0.2 26876 2300 ? Sl 10:48 0:00 redis-server 10.1.37.3:6379
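To make steps 3 and 4 concrete, here is a rough sketch for a docker 1.10-era setup; the subnet file location, the etcd endpoint and the exact flags are assumptions that vary between installs:
$ source /run/flannel/subnet.env                                                   # flannel typically writes FLANNEL_SUBNET and FLANNEL_MTU here
$ docker daemon --ip-masq=false --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &     # step 3: docker no longer masquerades pod traffic
$ flanneld --ip-masq=true --etcd-endpoints=http://127.0.0.1:4001 &                 # step 4: flannel handles masquerading for traffic leaving the overlay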
###Instruction: I use the K8s example the create a Reliable, Scalable Redis on Kubernetes. https://github.com/kubernetes/kubernetes/tree/master/examples/redis The Redis master needs to get slave the origin IP and register to the sentinel. Sentinel will use to switch master when the master failed. But My docker starts without the flag iptables=false(It will create an iptables like SNAT), so the redis slave source IP will be SNAT, and the master will get the flannel0's IP like 172.16.103.0. I also add the flannel network to container connection between nodes. So the question is "Is any way to get the source IP without removing the iptables flag, or another way to create a Reliable redis cluster." ### Response: EDIT: I have figured out the magic combination of flags to get this working. 1) run stop docker & the bootstrap docker processes systemctl stop docker && pkill -f "docker-bootstrap" && sleep 10 2) Delete iptables rules: iptables -F && iptables -t nat -F 3) run your docker-daemon with the ip-masq=false option 4) run your flanneld with the ip-masq=true option For me this makes all three pod->pod, pod->ext, & ext->service->pod work perfectly. Hope this works for the rest of you! (Preserving initial response which described the symptoms) I am having the same problem with the guide. The slave replicates just fine but the master can't properly check the slave status because the connection is coming from the dot zero ip. Proto Recv-Q Send-Q Local Address Foreign Address State Example slave->master connection from the master perspective (shows wrong IP of slave): tcp 0 0 redis-master:6379 10.1.37.0:51674 ESTABLISHED Example slave->master connection from slave perspective (shows right ip of master): tcp 0 0 redis-03fdy:51674 10.1.90.2:6379 ESTABLISHED Here are the redis server commands from ps. master: root 12 0.1 0.2 26876 2352 ? Sl 10:42 0:02 redis-server 10.1.90.2:6379 slave: root 18 0.1 0.2 26876 2300 ? Sl 10:48 0:00 redis-server 10.1.37.3:6379
I use the K8s example the create a Reliable, Scalable Redis on Kubernetes. https://github.com/kubernetes/kubernetes/tree/master/examples/redis The Redis master needs to get slave the origin IP and register to the sentinel. Sentinel will use to switch master when the master failed. But My docker starts without the flag iptables=false(It will create an iptables like SNAT), so the redis slave source IP will be SNAT, and the master will get the flannel0's IP like 172.16.103.0. I also add the flannel network to container connection between nodes. So the question is "Is any way to get the source IP without removing the iptables flag, or another way to create a Reliable redis cluster."
OpenShift is a PaaS layer on top of Kubernetes, so there really isn't a difference in the Kubernetes part of the stack. However, OpenShift embeds many Kubernetes (and etcd) binaries in its distribution, so it isn't always 100% obvious that Kubernetes is somewhere underneath, because you don't interact with the stock server-side Kubernetes components in the same way. OpenShift also layers on additional/different functionality like more types of authorization/authentication, watching container registries for changes (imageStreams), container builds, a different UI, etc. (Check out the docs for a more complete overview). One downside of this embedded approach is that installing OpenShift on top of an existing Kubernetes cluster is more difficult and less documented than the embedded approach that Red Hat has to support for their customers. It looks like there is some work lined up to address that. Right now, there is an example of doing this OpenShift-on-top-of-stock-Kubernetes in the Kubernetes repo, but my experience is that this approach is more exploratory (and somewhat out of date when I last tried it) than anything you'd use for production.
###Instruction: I understand that OpenShift comes with its own Kubernetes and etcd, but I have already installed Kubernetes and etcd. What should I do? What are the differences between OpenShift's Kubernetes and Google's? Thanks. ### Response: OpenShift is a PaaS layer on top of Kubernetes, so there really isn't a difference in the Kubernetes part of the stack. However, OpenShift embeds many Kubernetes (and etcd) binaries in its distribution, so it isn't always 100% obvious that Kubernetes is somewhere underneath, because you don't interact with the stock server-side Kubernetes components in the same way. OpenShift also layers on additional/different functionality like more types of authorization/authentication, watching container registries for changes (imageStreams), container builds, a different UI, etc. (Check out the docs for a more complete overview). One downside of this embedded approach is that installing OpenShift on top of an existing Kubernetes cluster is more difficult and less documented than the embedded approach that Red Hat has to support for their customers. It looks like there is some work lined up to address that. Right now, there is an example of doing this OpenShift-on-top-of-stock-Kubernetes in the Kubernetes repo, but my experience is that this approach is more exploratory (and somewhat out of date when I last tried it) than anything you'd use for production.
I understand that OpenShift comes with its own Kubernetes and etcd, but I have already installed Kubernetes and etcd. What should I do? What are the differences between OpenShift's Kubernetes and Google's? Thanks.