Dataset schema: input (string, 1 distinct value), output (string, 30 to 28.3k characters), text (string, 151 to 31.8k characters), instruction (string, 49 to 30.3k characters).
Fabric8's Kubernetes Client uses a generated model and DSL that has the exact same structure as the JSON and YAML configuration. So in order to create a Service instance that looks like: { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "myservice" }, "spec": { "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 8080 } ], "selector": { "key1": "value1" }, "portalIP": "172.30.234.134", "type": "ClusterIP" } } You can use the following code: Service service = new ServiceBuilder() .withNewMetadata() .withName("myservice") .endMetadata() .withNewSpec() .addNewPort() .withProtocol("TCP") .withPort(80) .withNewTargetPort(8080) .endPort() .addToSelector("key1", "value1") .withPortalIP("172.30.234.134") .withType("ClusterIP") .endSpec() .build(); If you don't need to hold a reference to the service object and you just want to create it, you can inline it like: client.services().createNew() .withNewMetadata() .withName("myservice") .endMetadata() .withNewSpec() .addNewPort() .withProtocol("TCP") .withPort(80) .withNewTargetPort(8080) .endPort() .addToSelector("key1", "value1") .withPortalIP("172.30.234.134") .withType("ClusterIP") .endSpec() .done(); It's even more compact than the JSON equivalent, because default values can be omitted and some things, like the selector, can optionally be inlined in a single line. This applies not only to Service but to every single Kubernetes/OpenShift resource. If you have the JSON or YAML in place, you can load it easily by providing an input stream to the client: Service service = client.services().load(inputStream).get(); There are more options here, like directly creating the service: Service newService = client.services().load(inputStream).create(); It always helps to remember that the structure is the same regardless of language or format. Pretty much anything can be inlined, so tab completion in your IDE can be really helpful.
I am using fabric8.io to orchestrate application containers in Kubernetes. I am looking to create a Service that manages a pod with a certain label on a certain port. Is there a specific example of the API that does this? I couldn't find it in the examples: https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/FullExample.java#L75 There don't seem to be Javadocs available.
In Kubernetes, each pod has an IP, and within a pod there exists a so-called infrastructure container, which is the first container the kubelet instantiates; it acquires the pod's IP and sets up the network namespace. All the other containers in the pod then join the infra container's network and IPC namespaces. The infra container has network bridge mode enabled, and all the other containers in the pod share its namespace via container mode. The initial process that runs in the infra container does effectively nothing, since its sole purpose is to act as the home for the namespaces.
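To make "does effectively nothing" concrete, here is a minimal sketch in Go of a pause-like process. This is an illustration only; the real gcr.io/google_containers/pause image is a tiny separate binary whose source is not shown here, so treat the details below as an assumption about the general idea, not the actual implementation:

package main

import (
	"os"
	"os/signal"
	"syscall"
)

// A pause-like process does no work at all. Its only job is to keep running
// so that the pod-level network and IPC namespaces it was started in stay
// alive for the other containers that join them.
func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
	<-sigs // block here until the kubelet tears the pod down
}

The point of the sketch is simply that nothing in this process receives, sends, or routes traffic; the kernel namespaces do all of that work.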
I understand the usage of the docker option --net=container:NAME_or_ID, and I have also read the Kubernetes source code that configures a container to use the network of the InfraContainer, so I think the only work the process in the container gcr.io/google_containers/pause:0.8.0 does is "pause"; it never does any complex work like "receiving", "sending", or "routing". But I am not sure about this, because I cannot find the Dockerfile for gcr.io/google_containers/pause:0.8.0, so I need someone who knows it well to confirm. Thanks!
ClusterIP can mean 2 things: a type of service which is only accessible within a Kubernetes cluster, or the internal ("virtual") IP of components within a Kubernetes cluster. Assuming you're asking about finding the internal IP of a cluster, it can be accessed in 3 ways (using the simple-nginx example): Via command line kubectl utility: $ kubectl describe service my-nginx Name: my-nginx Namespace: default Labels: run=my-nginx Selector: run=my-nginx Type: LoadBalancer IP: 10.123.253.27 LoadBalancer Ingress: 104.197.129.240 Port: <unnamed> 80/TCP NodePort: <unnamed> 30723/TCP Endpoints: 10.120.0.6:80 Session Affinity: None No events. Via the kubernetes API (here I've used kubectl proxy to route through localhost to my cluster): $ kubectl proxy & $ curl -G http://localhost:8001/api/v1/namespaces/default/services/my-nginx { "kind": "Service", "apiVersion": "v1", "metadata": <omitted>, "spec": { "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 80, "nodePort": 30723 } ], "selector": { "run": "my-nginx" }, "clusterIP": "10.123.253.27", "type": "LoadBalancer", "sessionAffinity": "None" }, "status": { "loadBalancer": { "ingress": [ { "ip": "104.197.129.240" } ] } } } Via the $<NAME>_SERVICE_HOST environment variable within a Kubernetes container (in this example my-nginx-yczg9 is the name of a pod in the cluster): $ kubectl exec my-nginx-yczg9 -- sh -c 'echo $MY_NGINX_SERVICE_HOST' 10.123.253.27 More details on service IPs can be found in the Services in Kubernetes documentation, and the previously mentioned simple-nginx example is a good example of exposing a service outside your cluster with the LoadBalancer service type.
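If you want the same information programmatically rather than via kubectl or environment variables, here is a small sketch using the Go client (client-go, current API with context arguments). The kubeconfig path and the my-nginx service name are assumptions carried over from the example above:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Fetch the service and print its internal (cluster) IP.
	svc, err := clientset.CoreV1().Services("default").Get(context.TODO(), "my-nginx", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ClusterIP:", svc.Spec.ClusterIP)
}

This reads the same spec.clusterIP field that appears in the API output shown above.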
I have created a cluster of three nodes: one master, two minions. How to check the cluster IP in Kubernetes? Is it the IP of the master node?
Can you downgrade the docker version to 1.7.2 first? I did exactly what you did above with docker 1.7.2, and everything works. $ curl 127.0.0.1:8080/ { "paths": [ "/api", "/api/v1", "/api/v1beta3", "/healthz", "/healthz/ping", "/logs/", "/metrics", "/resetMetrics", "/swagger-ui/", "/swaggerapi/", "/ui/", "/version" ] } $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 0141e596414c gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube proxy -- 15 minutes ago Up 15 minutes nostalgic_nobel 10634ce798e9 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube schedule 16 minutes ago Up 16 minutes k8s_scheduler.b725e775_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_43562383 5618a39eb11d gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube apiserve 16 minutes ago Up 16 minutes k8s_apiserver.70750283_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_e5d145be 25f336102b26 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube controll 16 minutes ago Up 16 minutes k8s_controller-manager.aad1ee8f_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_fe538b9b 7f1391840920 gcr.io/google_containers/pause:0.8.0 "/pause" 17 minutes ago Up 17 minutes k8s_POD.e4cc795_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_26fd84fd a11715435f45 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube kubelet 17 minutes ago Up 17 minutes jovial_hodgkin a882a1a4b917 gcr.io/google_containers/etcd:2.0.9 "/usr/local/bin/etcd 18 minutes ago Up 18 minutes adoring_hodgkin There are a couple of known issues with docker 1.8.3, especially docker#17190. We had to work around that issue through kubernetes#16052, but those changes are not cherry-picked into the Kubernetes 1.0 release. From the output you posted above, I noticed that there is no pause container. Could you also run docker ps -a to check whether some containers are dead, and copy & paste the output of docker logs <dead-container> here? I will file an issue to make sure the Kubernetes 1.1 release works fine with docker 1.8.3. Thanks!
I am following the Running Kubernetes locally via Docker guide and I am unable to get the master to start normally. Step One: Run etcd docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data The etcd container appears to start normally. Don't see any errors with docker logs and I end up with an etcd process listening on 4001. Step Two: Run the master docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests I believe the this is where my issues begin. Below is the output from docker logs: W1021 13:23:04.093281 1 server.go:259] failed to set oom_score_adj to -900: write /proc/self/oom_score_adj: permission denied W1021 13:23:04.093426 1 server.go:462] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead. W1021 13:23:04.093445 1 server.go:424] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults. I1021 13:23:04.093503 1 server.go:271] Using root directory: /var/lib/kubelet I1021 13:23:04.093519 1 plugins.go:69] No cloud provider specified. I1021 13:23:04.093526 1 server.go:290] Successfully initialized cloud provider: "" from the config file: "" I1021 13:23:05.126191 1 docker.go:289] Connecting to docker on unix:///var/run/docker.sock I1021 13:23:05.126396 1 server.go:651] Adding manifest file: /etc/kubernetes/manifests I1021 13:23:05.126409 1 file.go:47] Watching path "/etc/kubernetes/manifests" I1021 13:23:05.126416 1 server.go:661] Watching apiserver E1021 13:23:05.127148 1 reflector.go:136] Failed to list *api.Pod: Get http://localhost:8080/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1: dial tcp 127.0.0.1:8080: connection refused E1021 13:23:05.127295 1 reflector.go:136] Failed to list *api.Service: Get http://localhost:8080/api/v1/services: dial tcp 127.0.0.1:8080: connection refused E1021 13:23:05.127336 1 reflector.go:136] Failed to list *api.Node: Get http://localhost:8080/api/v1/nodes?fieldSelector=metadata.name%3D127.0.0.1: dial tcp 127.0.0.1:8080: connection refused I1021 13:23:05.343848 1 plugins.go:56] Registering credential provider: .dockercfg W1021 13:23:05.394268 1 container_manager_linux.go:96] Memory limit 0 for container /docker-daemon is too small, reset it to 157286400 I1021 13:23:05.394284 1 container_manager_linux.go:100] Configure resource-only container /docker-daemon with memory limit: 157286400 I1021 13:23:05.395019 1 plugins.go:180] Loaded volume plugin "kubernetes.io/aws-ebs" I1021 13:23:05.395040 1 plugins.go:180] Loaded volume plugin "kubernetes.io/empty-dir" I1021 13:23:05.395052 1 plugins.go:180] Loaded volume plugin "empty" I1021 13:23:05.395068 1 plugins.go:180] Loaded volume plugin "kubernetes.io/gce-pd" I1021 13:23:05.395080 1 plugins.go:180] Loaded volume plugin "gce-pd" I1021 13:23:05.395098 1 plugins.go:180] Loaded volume plugin "kubernetes.io/git-repo" I1021 13:23:05.395112 1 plugins.go:180] Loaded volume plugin "git" I1021 13:23:05.395124 1 plugins.go:180] Loaded volume plugin "kubernetes.io/host-path" I1021 13:23:05.395136 1 plugins.go:180] Loaded volume plugin "kubernetes.io/nfs" I1021 13:23:05.395147 1 plugins.go:180] Loaded volume 
plugin "kubernetes.io/secret" I1021 13:23:05.395156 1 plugins.go:180] Loaded volume plugin "kubernetes.io/iscsi" I1021 13:23:05.395166 1 plugins.go:180] Loaded volume plugin "kubernetes.io/glusterfs" I1021 13:23:05.395178 1 plugins.go:180] Loaded volume plugin "kubernetes.io/persistent-claim" I1021 13:23:05.395194 1 plugins.go:180] Loaded volume plugin "kubernetes.io/rbd" I1021 13:23:05.395274 1 server.go:623] Started kubelet I1021 13:23:05.395296 1 server.go:63] Starting to listen on 0.0.0.0:10250 I1021 13:23:05.395507 1 server.go:82] Starting to listen read-only on 0.0.0.0:10255 Step Three: Run the service proxy docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 The docker logs from this step contained similar errors to what I saw in Step Two. I1021 13:32:03.177004 1 server.go:88] Running in resource-only container "/kube-proxy" I1021 13:32:03.177432 1 proxier.go:121] Setting proxy IP to 192.168.19.200 and initializing iptables E1021 13:32:03.195731 1 api.go:108] Unable to load services: Get http://127.0.0.1:8080/api/v1/services: dial tcp 127.0.0.1:8080: connection refused E1021 13:32:03.195924 1 api.go:180] Unable to load endpoints: Get http://127.0.0.1:8080/api/v1/endpoints: dial tcp 127.0.0.1:8080: connection refused docker ps output: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 576d15c22537 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube proxy --m" About an hour ago Up About an hour high_pasteur a98637c9d523 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube kubelet -" About an hour ago Up 34 minutes drunk_jones 618afb1de613 gcr.io/google_containers/etcd:2.0.9 "/usr/local/bin/etcd " 2 hours ago Up 2 hours high_yonath The first error from Step Two's logs led me to believe the error may have something to do with iptables. iptables -L output: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination DOCKER all -- anywhere anywhere ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain DOCKER (1 references) target prot opt source destination docker exec -ti a98637c9d523 cat /etc/kubernetes/manifests/master.json output: { "apiVersion": "v1", "kind": "Pod", "metadata": {"name":"k8s-master"}, "spec":{ "hostNetwork": true, "containers":[ { "name": "controller-manager", "image": "gcr.io/google_containers/hyperkube:v1.0.6", "command": [ "/hyperkube", "controller-manager", "--master=127.0.0.1:8080", "--v=2" ] }, { "name": "apiserver", "image": "gcr.io/google_containers/hyperkube:v1.0.6", "command": [ "/hyperkube", "apiserver", "--portal-net=10.0.0.1/24", "--address=127.0.0.1", "--etcd-servers=http://127.0.0.1:4001", "--cluster-name=kubernetes", "--v=2" ] }, { "name": "scheduler", "image": "gcr.io/google_containers/hyperkube:v1.0.6", "command": [ "/hyperkube", "scheduler", "--master=127.0.0.1:8080", "--v=2" ] } ] } } Docker version 1.8.3 Kernel version 4.2.3 Any insight would be greatly appreciated.
Where are you starting your cluster? I tried to reproduce this on GCE using both the 1.0.7 release and from HEAD and wasn't able to. Using the 1.0.7 release: $ kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE elasticsearch-logging-v1-6x82b 1/1 Running 0 3m elasticsearch-logging-v1-s4bj5 1/1 Running 0 3m fluentd-elasticsearch-kubernetes-minion-ijpr 1/1 Running 0 1m fluentd-elasticsearch-kubernetes-minion-nrya 1/1 Running 0 2m fluentd-elasticsearch-kubernetes-minion-ppls 1/1 Running 0 1m fluentd-elasticsearch-kubernetes-minion-sy4x 1/1 Running 0 2m kibana-logging-v1-6qka9 1/1 Running 0 3m kube-dns-v8-9hyzm 4/4 Running 0 3m kube-ui-v1-11r3b 1/1 Running 0 3m monitoring-heapster-v6-4uzam 1/1 Running 1 3m monitoring-influx-grafana-v1-euc3a 2/2 Running 0 3m From head: $ kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE elasticsearch-logging-v1-9gqs8 1/1 Running 0 3m elasticsearch-logging-v1-edb97 1/1 Running 0 3m etcd-server-events-kubernetes-master 1/1 Running 0 3m etcd-server-kubernetes-master 1/1 Running 0 3m fluentd-elasticsearch-kubernetes-master 1/1 Running 0 2m fluentd-elasticsearch-kubernetes-minion-6id6 1/1 Running 0 1m fluentd-elasticsearch-kubernetes-minion-n25a 1/1 Running 0 1m fluentd-elasticsearch-kubernetes-minion-x4wa 1/1 Running 0 1m heapster-v10-ek03n 1/1 Running 0 3m kibana-logging-v1-ybsad 1/1 Running 0 3m kube-apiserver-kubernetes-master 1/1 Running 0 3m kube-controller-manager-kubernetes-master 1/1 Running 0 3m kube-dns-v9-dkmad 4/4 Running 0 3m kube-scheduler-kubernetes-master 1/1 Running 0 3m kube-ui-v3-mt7nw 1/1 Running 0 3m l7-lb-controller-b56yf 2/2 Running 0 3m monitoring-influxdb-grafana-v2-lxufh 2/2 Running 0 3m The only thing I changed in config-default.sh is the KUBE_LOGGING_DESTINATION variable from gcp to elasticsearch: $ git diff cluster/gce/config-default.sh diff --git a/cluster/gce/config-default.sh b/cluster/gce/config-default.sh index fd31820..2e37ebc 100755 --- a/cluster/gce/config-default.sh +++ b/cluster/gce/config-default.sh @@ -58,7 +58,7 @@ ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-googleinfluxdb}" # Optional: Enable node logging. ENABLE_NODE_LOGGING="${KUBE_ENABLE_NODE_LOGGING:-true}" -LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-gcp}" # options: elasticsearch, gcp +LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-elasticsearch}" # options: elasticsearch, gcp # Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up. ENABLE_CLUSTER_LOGGING="${KUBE_ENABLE_CLUSTER_LOGGING:-true}"
I am trying to set up cluster logging following the link below: http://kubernetes.io/v1.0/docs/getting-started-guides/logging-elasticsearch.html My config-default.sh: # Optional: Enable node logging. ENABLE_NODE_LOGGING=**true** LOGGING_DESTINATION=${LOGGING_DESTINATION:-**elasticsearch**} # Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up. ENABLE_CLUSTER_LOGGING=true ELASTICSEARCH_LOGGING_REPLICAS=${ELASTICSEARCH_LOGGING_REPLICAS:-1} Command: $ sudo kubectl get pods --namespace=kube-system NAME READY STATUS RESTARTS AGE kube-dns-v9-epplg 4/4 Running 0 20h kube-ui-v3-i4von 1/1 Running 0 18h As you can see, I enabled logging and set the logging destination to elasticsearch. I don't see elasticsearch-logging, fluentd-elasticsearch, or kibana-logging when I do get pods. It seems like the replication controllers, services, or pods are not created. Do I need to do anything else to bring up Elasticsearch and Kibana?
As @tim-hockin says, we do plan to have a way to partition the network. But, IMO, for systems with more moving parts, (which is where Kubernetes should really shine), I think it will be better to focus on application security. Taking your three-layer example, the PHP pod should be authorized to talk to the database, but the Nginx pod should not. So, if someone figures out a way to execute an arbitrary command in the Nginx pod, they might be able to send a request to the database Pod, but it should be rejected as not authorized. I prefer the application-security approach because: I don't think the --links approach will scale well to 10s of different microservices or more. It will be too hard to manage all the links. I think as the number of devs in your org grows, you will need fine grained app-level security anyhow. In terms of being like docker compose, it looks like docker compose currently only works on single machines, according to this page: https://github.com/docker/compose/blob/master/SWARM.md
TL;DR Kubernetes allows all containers to access all other containers on the entire cluster, which seems to greatly increase the security risks. How can this be mitigated? Unlike Docker, where one would usually only allow network connections between containers that need to communicate (via --link), each Pod on Kubernetes can access all other Pods on that cluster. That means that for a standard Nginx + PHP/Python + MySQL/PostgreSQL stack running on Kubernetes, a compromised Nginx would be able to access the database. People used to run all of those on a single machine, but that machine would get serious periodic updates (more than containers do), and SELinux/AppArmor for serious people. One can mitigate the risks a bit by having each project (if you have various independent websites, for example) run on its own cluster, but that seems wasteful. The current Kubernetes security seems to be very incomplete. Is there already a way to have decent security for production?
The default service account for each namespace is created by the service account controller, which is a loop that is part of the kube-controller-manager binary. So, verify that the binary is running, check its logs for anything that suggests it can't create a service account, and make sure "--service-account-private-key-file=somefile" points to a file that contains a valid PEM key. Alternatively, if you want to make some progress without service accounts and come back to that later, you can disable the admission controller that is blocking your pods by removing the "ServiceAccount" option from your API server's --admission-control flag. But you will probably want to come back and fix that later.
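As a quick sanity check to go with the advice above, you can ask the API server directly whether the default service account exists in a namespace. This is a sketch using the Go client (client-go), not part of the original answer, and the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// The service account controller should create a "default" account in each namespace.
	sa, err := clientset.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("default service account not found: %v (is kube-controller-manager running?)", err)
	}
	fmt.Printf("found %s/%s with %d secret(s)\n", sa.Namespace, sa.Name, len(sa.Secrets))
}

If this fails while the namespace itself exists, the controller-manager side is the first place to look, as described above.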
I'm trying to get Kubernetes running on some local machines running CoreOS. I'm loosely following this guide. Everything seems to be up and running, and I'm able to connect to the api via kubectl. However, when I try to create a pod, I get this error: Pod "redis-master" is forbidden: Missing service account default/default: <nil> Doing kubectl get serviceAccounts confirms that I don't have any ServiceAccounts: NAME SECRETS According to the documentation, each namespace should have a default ServiceAccount. Running kubectl get namespace confirms that I have the default namespace: NAME LABELS STATUS default <none> Active I'm brand new to Kubernetes and CoreOS, so I'm sure there's something I'm overlooking, but I can't for the life of me figure out what's going on. I'd appreciate any pointers. UPDATE It appears the kube-controller-manager isn't running. When I try to run it, I get this message: I1104 21:09:49.262780 26292 plugins.go:69] No cloud provider specified. I1104 21:09:49.262935 26292 nodecontroller.go:114] Sending events to api server. E1104 21:09:49.263089 26292 controllermanager.go:217] Failed to start service controller: ServiceController should not be run without a cloudprovider. W1104 21:09:49.629084 26292 request.go:302] field selector: v1 - secrets - type - kubernetes.io/service-account-token: need to check if this is versioned correctly. W1104 21:09:49.629322 26292 request.go:302] field selector: v1 - serviceAccounts - metadata.name - default: need to check if this is versioned correctly. W1104 21:09:49.636082 26292 request.go:302] field selector: v1 - serviceAccounts - metadata.name - default: need to check if this is versioned correctly. W1104 21:09:49.638712 26292 request.go:302] field selector: v1 - secrets - type - kubernetes.io/service-account-token: need to check if this is versioned correctly. Since I'm running this locally, I don't have a cloud provider. I tried to define --cloud-provider="" but it still complains with the same error.
Can you plot memory/working_set_bytes_gauge instead of memory/usage_bytes_gauge? The kube UI might be using the working set, which correlates with free. memory/usage includes pages that the kernel can reclaim on demand.
The Kubernetes UI dashboard shows (this matches free -m on this minion): Memory: 7.29 GB / 7.84 GB This overall memory usage is gradually increasing over time. I am trying to get a view into this memory growth using the default Kubernetes/Grafana dashboard for the metric memory/usage_bytes_gauge. However, I see the following for the minion: the units do not match (approx. 7 GB used vs. 200 MiB on the plot), and the memory values jump all over in the graph as opposed to increasing gradually.
Once you're certain your client properly handles disconnections, you can use the following kube-apiserver flag to control how long apiserver lets the watches stay open: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/admin/kube-apiserver.md --min-request-timeout=1800: An optional field indicating the minimum number of seconds a handler must keep a request open before timing it out. Currently only honored by the watch request handler, which picks a randomized value above this number as the connection timeout, to spread out load. Test with a small value, run in production with a large value.
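To show what "properly handles disconnections" can look like, here is a rough sketch of a watch loop that re-establishes the watch whenever the server closes it, using the Go client (client-go) instead of raw chunked HTTP. The namespace and kubeconfig path are assumptions:

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	resourceVersion := "" // resume point; empty means "start from the current state"
	for {
		w, err := clientset.CoreV1().Pods("default").Watch(context.TODO(),
			metav1.ListOptions{ResourceVersion: resourceVersion})
		if err != nil {
			log.Printf("watch failed: %v; retrying", err)
			time.Sleep(5 * time.Second)
			continue
		}
		for event := range w.ResultChan() {
			if obj, err := meta.Accessor(event.Object); err == nil {
				resourceVersion = obj.GetResourceVersion() // remember how far we got
			}
			log.Printf("event: %s %T", event.Type, event.Object)
		}
		// The channel closed: the apiserver ended the watch, for example after the
		// randomized --min-request-timeout window. Loop and watch again from the
		// last resourceVersion we saw.
		log.Print("watch closed by server; reconnecting")
	}
}

In real code you would more likely reach for an informer or the client-go RetryWatcher helper, which bundle this reconnect-and-resume logic (including expired resourceVersion handling) for you.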
After 30-45 minutes, chunked HTTP connection to API server is dropped: Transmission Control Protocol, Src Port: http-alt (8080), Dst Port: 55782 (55782), Seq: 751, Ack: 88, Len: 0 .... 0000 0001 0001 = Flags: 0x011 (FIN, ACK) This happens regardless of the activity level, i.e. it happens for connections that were idle for a long time but also for the ones that had notifications coming for the whole duration of the connection. HTTP 1.0 (with Connection: Keep-Alive header) just ends the original request, while HTTP 1.1, which is keepalive by default, sends 400 Bad Request before dropping the connection. Is it possible to get a watch connection which remains alive for a long period of time?
Part 1: What you said about using kubedns to find heapster and querying that REST API is fine. You could also write a client interface that abstracts the interface to heapster -- that would help with unit testing. Take a look at this metrics client: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/metrics/metrics_client.go It doesn't do exactly what you want: it gets per-Pod stats instead of per-cluster or per-node stats. But you could modify it. In the function getForPods, you can see the code that resolves the heapster service and connects to it here: resultRaw, err := h.client.Services(h.heapsterNamespace). ProxyGet(h.heapsterService, metricPath, map[string]string{"start": startTime.Format(time.RFC3339)}). DoRaw() where heapsterNamespace is "kube-system" and heapsterService is "heapster". That metrics client is part of the "horizontal pod autoscaler" implementation. It is solving a slightly different problem, but you should take a look at it if you haven't already. It is described here: https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md FYI: The Heapster REST API is defined here: https://github.com/kubernetes/heapster/blob/master/docs/model.md You should poke around and see if there are node-level or cluster-level CPU metrics that work for you. Part 2: There is no standard interface for shrinking nodes. It is different for each cloud provider. And if you are on-premises, then you can't shrink nodes. Related discussion: https://github.com/kubernetes/kubernetes/issues/11935 Side note: Among kubernetes developers, we typically use the term "rescheduler" when talking about something that rebalances pods across machines, by removing a pod from one machine and creating the same kind of pod on another machine. That is a different thing than what you are talking about building. We haven't built a rescheduler yet, but there is an outline of how to build one here: https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/rescheduler.md
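For Part 1, the same ProxyGet idea can be used from your own program as well. Below is a rough sketch with the Go client (client-go, current API); the kube-system namespace and heapster service name come from the answer above, but the metric path is an assumption based on the linked Heapster model docs, so verify it against your Heapster version. It also shows the plain ticker loop you could use for Part 2's 30-minute cadence:

package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// queryHeapster resolves heapster through the apiserver's service proxy,
// the same way the horizontal pod autoscaler's metrics client does.
func queryHeapster(clientset *kubernetes.Clientset, path string) ([]byte, error) {
	return clientset.CoreV1().Services("kube-system").
		ProxyGet("http", "heapster", "", path, nil).
		DoRaw(context.TODO())
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ticker := time.NewTicker(30 * time.Minute)
	defer ticker.Stop()
	for ; ; <-ticker.C {
		// Assumed cluster-level CPU metric path from the Heapster model API.
		raw, err := queryHeapster(clientset, "/api/v1/model/metrics/cpu/usage_rate")
		if err != nil {
			log.Printf("heapster query failed: %v", err)
			continue
		}
		// Decide here whether to grow or shrink the node pool; that part is
		// cloud-provider specific, as Part 2 explains.
		log.Printf("heapster returned %d bytes of metrics", len(raw))
	}
}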
I would like to expand/shrink the number of kubelets being used by kubernetes cluster based on resource usage. I have been looking at the code and have some idea of how to implement it at a high level. I am stuck on 2 things: What will be a good way for accessing the cluster metrics (via Heapster)? Should I try to use the kubedns for finding the heapster endpoint and directly query the API or is there some other way possible? Also, I am not sure on how to use kubedns to get the heapster URL in the former. The rescheduler which expands/shrinks the number of nodes will need to kick in every 30 minutes. What will be the best way for it. Is there some interface or something in the code which I can use for it or should I write a code segment which gets called every 30 mins and put it in the main loop? Any help would be greatly appreciated :)
There is no command to do exactly what you asked. Here are some close matches. Be careful before running any of these commands. Make sure you are connected to the right cluster if you use multiple clusters; consider running kubectl config view first. You can delete all the pods in a single namespace with this command: kubectl delete --all pods --namespace=foo You can also delete all deployments in a namespace, which will delete all pods attached to the deployments in that namespace: kubectl delete --all deployments --namespace=foo You can delete all namespaces and every object in every namespace (but not un-namespaced objects, like nodes and some events) with this command: kubectl delete --all namespaces However, the latter command is probably not something you want to do, since it will delete things in the kube-system namespace, which will make your cluster unusable. This command will delete all the namespaces except kube-system, which might be useful: for each in $(kubectl get ns -o jsonpath="{.items[*].metadata.name}" | grep -v kube-system); do kubectl delete ns $each; done
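If you would rather do this from code than a shell loop, here is a rough equivalent of the last command using the Go client (client-go, current API). It walks every namespace except kube-system and deletes the pod collection in each; the kubeconfig path is an assumption, and the same "make sure you are on the right cluster" warning applies:

package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.TODO()
	namespaces, err := clientset.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, ns := range namespaces.Items {
		if ns.Name == "kube-system" {
			continue // leave the cluster's own components alone
		}
		// Delete every pod in this namespace in one call.
		if err := clientset.CoreV1().Pods(ns.Name).DeleteCollection(ctx,
			metav1.DeleteOptions{}, metav1.ListOptions{}); err != nil {
			log.Printf("failed to delete pods in %s: %v", ns.Name, err)
			continue
		}
		log.Printf("deleted all pods in namespace %s", ns.Name)
	}
}

Keep in mind that, just like with kubectl, pods owned by replication controllers or deployments will be recreated immediately; delete the owning objects if you want them gone for good.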
Upon looking at the docs, there is an API call to delete a single pod, but is there a way to delete all pods in all namespaces?
So it's the kubelet that actually creates the pods and talks to the docker daemon. If you do a docker ps -a on your nodes (as in not the master) in your cluster, you'll see the containers in your pods running. So the workflow is: you run a kubectl command, which goes to the API server; the relevant controller watches the API server and reacts; say the command was to spawn a pod, the controller works out which pods are needed and records that back through the API server; the scheduler then picks a node for the pod; and finally the kubelet on that node is told to spawn said pod. I suggest reading the Borg paper that Kubernetes is based on to better understand things in further detail. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf
###Instruction: I have a problem to understand the kubernetes workflow: So as I understand the flow: You have a master which contains etcd, api-server, controller manager and scheduler. You have nodes which contain pods (wich contain containers), kubelet and a proxy. The proxy is working as a basic proxy to make it possible for a service to communicate with other nodes. When a pod dies, the controller manager will see this (it 'reads' the replication controller which describes how many pods there normally are). unclear: The controller manager will inform the API-server (I'm not right about this). The API-server will tell the scheduler to search a new place for the pod. After the scheduler has found a good place, the API will inform kubelet to create a new pod. I'm not sure about the last scenario? Can you tell me the right proces is a clear way? Which component is creating the pod and container? Is it kubelet? ### Response: So it's the kubelet that actually creates the pods and talks to the docker daemon. If you do a docker ps -a on your nodes (as in not master) in your cluster, you'll see the containers in your pod running. So the workflow is run a kubectl command, that goes to the API server, which passes it to the controller, say that command was to spawn a pod, the controller relays that to the API server which then goes to the scheduler and tells it to spawn the pod. Then the kubelet is told to spawn said pod. I suggest reading the Borg paper that Kubernetes is based on to better understand things in further detail. http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf
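To see the kubelet's handiwork on a node, as described in that answer, you can filter the Docker container list; the kubelet prefixes the containers it creates with k8s_, though the exact naming scheme can vary by version, so treat this as a sketch:

# On a worker node, not the master:
docker ps -a | grep k8s_                     # every container the kubelet has created
docker ps -a | grep k8s_ | grep <pod-name>   # narrow it to one pod; <pod-name> is yours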
I have a problem understanding the kubernetes workflow: So as I understand the flow: You have a master which contains etcd, api-server, controller manager and scheduler. You have nodes which contain pods (which contain containers), kubelet and a proxy. The proxy is working as a basic proxy to make it possible for a service to communicate with other nodes. When a pod dies, the controller manager will see this (it 'reads' the replication controller which describes how many pods there normally are). unclear: The controller manager will inform the API-server (I may not be right about this). The API-server will tell the scheduler to search a new place for the pod. After the scheduler has found a good place, the API will inform the kubelet to create a new pod. I'm not sure about the last scenario. Can you tell me the right process in a clear way? Which component is creating the pod and container? Is it the kubelet?
I really don't understand your questions, but I can explain how flannel integrates with docker. Flannel is managing this file: # cat /usr/lib/systemd/system/docker.service.d/flannel.conf [Service] EnvironmentFile=-/run/flannel/docker Which is setting the docker service to use the values from /run/flannel/docker as environment variables. Inside /run/flannel/docker flannel is writing the network configuration that docker should use: # cat /run/flannel/docker DOCKER_OPT_BIP="--bip=172.16.66.1/24" DOCKER_OPT_IPMASQ="--ip-masq=true" DOCKER_OPT_MTU="--mtu=1472" DOCKER_NETWORK_OPTIONS=" --iptables=false --ip-masq=false --bip=172.16.66.1/24 --ip-masq=true --mtu=1472 " On centos/redhat, the docker systemd scripts is starting the daemon with the following command (taken from /usr/lib/systemd/system/docker.service) ExecStart=/usr/bin/docker -d $OPTIONS \ $DOCKER_STORAGE_OPTIONS \ $DOCKER_NETWORK_OPTIONS \ $ADD_REGISTRY \ $BLOCK_REGISTRY \ $INSECURE_REGISTRY So it will use only DOCKER_NETWORK_OPTIONS from what flannel offers. On coreos, the docker daemon is started with: /usr/lib/coreos/dockerd daemon --host=fd:// $DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
###Instruction: When Installing a Kubernetes Master Node via Docker, docker config bip and mtu for Run flannel. --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} What's are FLANNEL_SUBNET and FLANNEL_MTU variables? How to set ${FLANNEL_SUBNET} and ${FLANNEL_MTU}? ### Response: I really don't understand your questions, but I can explain how flannel integrates with docker. Flannel is managing this file: # cat /usr/lib/systemd/system/docker.service.d/flannel.conf [Service] EnvironmentFile=-/run/flannel/docker Which is setting the docker service to use the values from /run/flannel/docker as environment variables. Inside /run/flannel/docker flannel is writing the network configuration that docker should use: # cat /run/flannel/docker DOCKER_OPT_BIP="--bip=172.16.66.1/24" DOCKER_OPT_IPMASQ="--ip-masq=true" DOCKER_OPT_MTU="--mtu=1472" DOCKER_NETWORK_OPTIONS=" --iptables=false --ip-masq=false --bip=172.16.66.1/24 --ip-masq=true --mtu=1472 " On centos/redhat, the docker systemd scripts is starting the daemon with the following command (taken from /usr/lib/systemd/system/docker.service) ExecStart=/usr/bin/docker -d $OPTIONS \ $DOCKER_STORAGE_OPTIONS \ $DOCKER_NETWORK_OPTIONS \ $ADD_REGISTRY \ $BLOCK_REGISTRY \ $INSECURE_REGISTRY So it will use only DOCKER_NETWORK_OPTIONS from what flannel offers. On coreos, the docker daemon is started with: /usr/lib/coreos/dockerd daemon --host=fd:// $DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
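To tie this back to the ${FLANNEL_SUBNET}/${FLANNEL_MTU} placeholders in the question: flanneld also writes a plain env file, typically /run/flannel/subnet.env, that defines exactly those variables, so a manual start of the docker daemon could look roughly like this (paths and example values mirror the setup above and may differ on your distro):

cat /run/flannel/subnet.env
# FLANNEL_SUBNET=172.16.66.1/24    <- example values
# FLANNEL_MTU=1472
source /run/flannel/subnet.env
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}   # "docker -d ..." on older engines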
When installing a Kubernetes master node via Docker, docker is configured with bip and mtu for running flannel: --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} What are the FLANNEL_SUBNET and FLANNEL_MTU variables? How do I set ${FLANNEL_SUBNET} and ${FLANNEL_MTU}?
To have the master node access the cluster network, you can run flanneld and kube-proxy on the master node. This should give you the access you need. However, adding these components in the context of using the kube-up.sh method may be a little involved. It seems like you have a few options while remaining mostly within the framework of that tutorial: You could walk through the kube-up.sh scripts and alter them so that they install and configure kube-proxy and flanneld on the master node, but not the kubelet. That may be hard to maintain over time. You could bring up the cluster as you already have with all 4 nodes running as 'nodes' (the new name for workers that used to be called 'minions'). Then mark the master node as unschedulable (kubectl patch nodes $NODENAME -p '{"spec": {"unschedulable": true}}') as outlined here. The master will still show up in node listings and such, but it should not get any pods scheduled to it and should have full network access to the cluster. You could also bring the cluster up as you already have with 4 nodes and then just log in and remove the kubelet on the master. This is effectively like the above, except the master won't show up in node listings (although you may have to remove it with kubectl delete node $NODENAME when you remove the kubelet). There are probably other options (there always are!), but hopefully these can get you started.
###Instruction: I am trying to setup a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I am following the Kubernetes install tutorial for Ubuntu. Everything works fine if I use the 4 nodes (VM + servers) as minions. But when I try to use the VM as just a master, it cannot access the Flannel network. I can create pods, services, etc, but if I try to access a service from the master node (VM), it cannot find the Flannel's IP. Can I include a master only node to the Kubernetes' internal network (Flannel's net in this case)? If not, any advice in how to access the Kubernetes UI and other services from the master node? ### Response: To have the master node access the cluster network, you can run flanneld and kube-proxy on the master node. This should give you the access you need. However, adding these components in the context of using the kube-up.sh method may be a little involved. Seems like you may have a few options while remaining mostly within the framework of that tutorial: You could walk through the kube-up.sh scripts and alter it so that it installs and configures kube-proxy and flanneld on the master node, but not the kubelet. That may be hard to maintain over time. You could bring up the cluster as you already have with all 4 nodes running as 'nodes' (the new name for workers that used to be called 'minions'). Then mark the master node as unschedulable (kubectl patch nodes $NODENAME -p '{"spec": {"unschedulable": true}}') as outlined here. The master will still show up in node listings and such, but it should not get any pods scheduled to it and should have full network access to the cluster. You could also bring the cluster up as you already have with 4 nodes and then just log in and remove the kubelet on the master. This is effectively like the above, except the master won't show up in node listings (although you may have to remove it (kubectl delete node $NODENAME) when you remove the kubelet. There are probably other options (there always are!), but hopefully these can get you started.
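For the first option above (networking on the master without the kubelet), the rough shape is just two extra daemons; the flag names are worth double-checking against your flannel and kubernetes versions, and the addresses below are placeholders for your etcd and apiserver:

flanneld -etcd-endpoints=http://127.0.0.1:4001 &
kube-proxy --master=http://127.0.0.1:8080 &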
I am trying to setup a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I am following the Kubernetes install tutorial for Ubuntu. Everything works fine if I use the 4 nodes (VM + servers) as minions. But when I try to use the VM as just a master, it cannot access the Flannel network. I can create pods, services, etc, but if I try to access a service from the master node (VM), it cannot find the Flannel's IP. Can I include a master only node to the Kubernetes' internal network (Flannel's net in this case)? If not, any advice in how to access the Kubernetes UI and other services from the master node?
In the top command, you are looking at the physical memory amount. In the stats command, this also includes the disk-cached RAM, so it's always bigger than the physical amount of RAM. When you really need more RAM, the disk cache will be released for the application to use. Indeed, the memory usage is pulled via the cgroup memory.usage_in_bytes; you can access it in /sys/fs/cgroup/memory/docker/long_container_id/memory.usage_in_bytes. And according to the Linux doc https://www.kernel.org/doc/Documentation/cgroups/memory.txt section 5.5: 5.5 usage_in_bytes For efficiency, as other kernel components, memory cgroup uses some optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is affected by the method and doesn't show 'exact' value of memory (and swap) usage, it's a fuzz value for efficient access. (Of course, when necessary, it's synchronized.) If you want to know more exact memory usage, you should use the RSS+CACHE(+SWAP) value in memory.stat (see 5.2).
###Instruction: I'm running my rethinkdb container in Kubernetes cluster. Below is what I notice: Running top in the host which is CoreOS, rethinkdb process takes about 3Gb: $ top PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 981 root 20 0 53.9m 34.5m 20.9m S 15.6 0.4 1153:34 hyperkube 51139 root 20 0 4109.3m 3.179g 22.5m S 15.0 41.8 217:43.56 rethinkdb 579 root 20 0 707.5m 76.1m 19.3m S 2.3 1.0 268:33.55 kubelet But running docker stats to check the rethinkdb container, it takes about 7Gb! $ docker ps | grep rethinkdb eb9e6b83d6b8 rethinkdb:2.1.5 "rethinkdb --bind al 3 days ago Up 3 days k8s_rethinkdb-3.746aa_rethinkdb-rc-3-eiyt7_default_560121bb-82af-11e5-9c05-00155d070266_661dfae4 $ docker stats eb9e6b83d6b8 CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O eb9e6b83d6b8 4.96% 6.992 GB/8.169 GB 85.59% 0 B/0 B $ free -m total used free shared buffers cached Mem: 7790 7709 81 0 71 3505 -/+ buffers/cache: 4132 3657 Swap: 0 0 0 Can someone explain why the container is taking a lot more memory than the rethinkdb process itself? I'm running docker v1.7.1, CoreOS v773.1.0, kernel 4.1.5 ### Response: In top command, your are looking at physical memory amount. in stats command, this also include the disk cached ram, so it's always bigger than the physical amount of ram. When you really need more RAM, the disk cached will be released for the application to use. In deed, the memmory usage is pulled via cgroup memory.usage_in_bytes, you can access it in /sys/fs/cgroup/memory/docker/long_container_id/memory.usage_in_bytes. And acording to linux doc https://www.kernel.org/doc/Documentation/cgroups/memory.txt section 5.5: 5.5 usage_in_bytes For efficiency, as other kernel components, memory cgroup uses some optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is affected by the method and doesn't show 'exact' value of memory (and swap) usage, it's a fuzz value for efficient access. (Of course, when necessary, it's synchronized.) If you want to know more exact memory usage, you should use RSS+CACHE(+SWAP) value in memory.stat(see 5.2).
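Following the last paragraph of that answer, here is a quick way to compare the fuzzy usage_in_bytes number with the RSS+cache figure from memory.stat (a sketch; the cgroup path can differ across distros and Docker versions):

CID=$(docker ps --no-trunc | grep rethinkdb | awk '{print $1}' | head -1)   # full container id
CG=/sys/fs/cgroup/memory/docker/$CID
cat $CG/memory.usage_in_bytes                                               # what docker stats reports
awk '$1=="rss" || $1=="cache" {sum+=$2} END {print sum}' $CG/memory.stat    # rss + cache in bytes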
I'm running my rethinkdb container in Kubernetes cluster. Below is what I notice: Running top in the host which is CoreOS, rethinkdb process takes about 3Gb: $ top PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 981 root 20 0 53.9m 34.5m 20.9m S 15.6 0.4 1153:34 hyperkube 51139 root 20 0 4109.3m 3.179g 22.5m S 15.0 41.8 217:43.56 rethinkdb 579 root 20 0 707.5m 76.1m 19.3m S 2.3 1.0 268:33.55 kubelet But running docker stats to check the rethinkdb container, it takes about 7Gb! $ docker ps | grep rethinkdb eb9e6b83d6b8 rethinkdb:2.1.5 "rethinkdb --bind al 3 days ago Up 3 days k8s_rethinkdb-3.746aa_rethinkdb-rc-3-eiyt7_default_560121bb-82af-11e5-9c05-00155d070266_661dfae4 $ docker stats eb9e6b83d6b8 CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O eb9e6b83d6b8 4.96% 6.992 GB/8.169 GB 85.59% 0 B/0 B $ free -m total used free shared buffers cached Mem: 7790 7709 81 0 71 3505 -/+ buffers/cache: 4132 3657 Swap: 0 0 0 Can someone explain why the container is taking a lot more memory than the rethinkdb process itself? I'm running docker v1.7.1, CoreOS v773.1.0, kernel 4.1.5
It looks like your problem is with making sure the corresponding security group is open to ssh from whichever nodes you'd like to connect from. Make sure it's open to the public IP or the private IP, depending on which you're connecting from. As for the right ssh key to use: it'll be whichever one you set up when spinning up the nodes. You can check that in the EC2 pane of AWS, under the "Key Pairs" sidebar option.
###Instruction: The quickstart mentions a few times that, "You should be able to ssh into any node in your cluster ..." (e.g., http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables). I have tried as described below but I am getting timed out. I used export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash to start the cluster I have only specified AWS_REGION in my environment The nodes are residing in VPC and I am able to ping them from a bastion This is the result: ubuntu@ip-10-128-1-26:~$ ssh core@170.20.0.248 -v OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 170.20.0.248 [170.20.0.248] port 22. debug1: connect to address 170.20.0.248 port 22: Connection timed out ssh: connect to host 170.20.0.248 port 22: Connection timed out ubuntu@ip-10-128-1-26:~$ Any idea or pointers would be appreciated. Thank you. ### Response: It looks like your problem is with making sure the corresponding security group is open to ssh from whichever nodes you'd like to connect from. Make sure it's open to the public IP or the private IP, depending on which you're connecting from. For the right ssh key to use: it'll be whichever one you setup when spinning up the nodes. You can check that in the EC2 pane of AWS in the "key pairs" side bar option:
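If you prefer the CLI to the console for the security-group change suggested above, something along these lines should work; the group-name pattern, the sg id and the source CIDR (the bastion's IP from the question) are all placeholders to adjust:

aws ec2 describe-security-groups --filters Name=group-name,Values='*minion*' \
    --query 'SecurityGroups[*].[GroupId,GroupName]' --output text
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 10.128.1.26/32
# then ssh with whichever key pair was registered when the nodes were spun up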
The quickstart mentions a few times that, "You should be able to ssh into any node in your cluster ..." (e.g., http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html#environment-variables). I have tried as described below but I am getting timed out. I used export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash to start the cluster I have only specified AWS_REGION in my environment The nodes are residing in VPC and I am able to ping them from a bastion This is the result: ubuntu@ip-10-128-1-26:~$ ssh core@170.20.0.248 -v OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to 170.20.0.248 [170.20.0.248] port 22. debug1: connect to address 170.20.0.248 port 22: Connection timed out ssh: connect to host 170.20.0.248 port 22: Connection timed out ubuntu@ip-10-128-1-26:~$ Any idea or pointers would be appreciated. Thank you.
The problem was with the networking and kube2sky not accessing the API, so it couldn't get the services. Change the docker run for the master from --config=/etc/kubernetes/manifests to --config=/etc/kubernetes/manifests-multi Then in skydns-rc.yaml, for the kube2sky container, as well as setting the domain, set the host IP address: - -kube_master_url=http://192.168.99.100:8080 #<- your docker machine IP Without manifests-multi, the host IP is not accessible. This was a simple change but took a bit to track down. I have created a simple set up on GitHub and will maintain it so people don't have to go through this pain just to get a local dev environment up and running. https://github.com/justingrayston/kubernetes-docker-dns
###Instruction: I am trying to prepare a dev environment for my team, so we can develop, stage and deploy with the same (or near same) environment. Getting a Kubernetes Cluster running locally via http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html was nice and simple. I could then use kubectl to start the pods and services for my application. However, the services IP addresses are going to be different each time you start up. Which is a problem, if your code needs to use them. In Google Container Engine kube DNS means you can access a service by name. Which means the code that uses the service can remain constant between deployments. Now, I know we could piece together the IP and PORT via environment variables, but I wanted to have an identical set up as possible. So I followed some instructions found in various places, both here and in the Kubernetes repo like this. Sure enough with a little editing of the yml files KubeDNS starts up. But an nslookup on kubernetes.default fails. The health check on the DNS also fails (because it can't resolve the test look up) and the instance is shut down and restarted. Running kubectl cluster-info results in: Kubernetes master is running at http://localhost:8080 KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns So all good. However, hitting that endpoint results in: { kind: "Status", apiVersion: "v1", metadata: { }, status: "Failure", message: "no endpoints available for "kube-dns"", code: 500 } I am now at a loss, and know it is something obvious or easy to fix as it seems to all be working. Here is how I start up the cluster and DNS. # Run etcd docker run --net=host \ -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd \ --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data # Run the master docker run \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/dev:/dev \ --volume=/var/lib/docker/:/var/lib/docker:ro \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --privileged=true \ -d \ gcr.io/google_containers/hyperkube:v1.0.6 \ /hyperkube kubelet --containerized --hostname-override="127.0.0.1" \ --address="0.0.0.0" --api-servers=http://localhost:8080 \ --config=/etc/kubernetes/manifests \ --cluster_dns=10.0.0.10 --cluster_domain=cluster.local # Run the service proxy docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.6 \ /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 # forward local port - after this you should be able to user kubectl locally machine=default; ssh -i ~/.docker/machine/machines/$machine/id_rsa docker@$(docker-machine ip $machine) -L 8080:localhost:8080 All the containers spin up ok, kubectl get nodes reports ok. Note I pass in the dns flags. 
I then start the DNS rc with this file, which is the edited version from here apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v9 namespace: kube-system labels: k8s-app: kube-dns version: v9 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v9 template: metadata: labels: k8s-app: kube-dns version: v9 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd:2.0.9 resources: limits: cpu: 100m memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.11 resources: limits: cpu: 100m memory: 50Mi args: # command = "/kube2sky" - -domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: limits: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://localhost:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 readinessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 1 timeoutSeconds: 5 - name: healthz image: gcr.io/google_containers/exechealthz:1.0 resources: limits: cpu: 10m memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS. Then start the service (again based on the file in the repo) apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.0.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP I made the assumption based on another SO question that clusterIP is the value I passed into the master, and not the ip of the host machine. I am sure it has to be something obvious or simple that I have missed. Anyone out there who can help? Thanks! UPDATE I found this closed issue over in the GitHub repo. Seems I have an identical problem. I have added to the thread on GitHub, and tried lots of things but still no progress. I tried using different images, but they had different errors (or the same error representing itself differently, I couldn't tell). Everything relating to this that I have found suggests IP restrictions, or firewall/security settings. So I decided to curl the api from the container itself. 
docker exec 49705c38846a echo $(curl http://0.0.0.0:8080/api/v1/services?labels=) % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 908 100 908 0 0 314k 0 --:--:-- --:--:-- --:--:-- 443k { "kind": "ServiceList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/services", "resourceVersion": "948" }, "items": [ { "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes", "uid": "369a9307-796e-11e5-87de-7a0704d1fdad", "resourceVersion": "6", "creationTimestamp": "2015-10-23T10:09:57Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 443, "nodePort": 0 } ], "clusterIP": "10.0.0.1", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } ] } Seems like a valid response to me, so why the JSON parse error coming from kube2Sky!? Failed to list *api.Service: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value Failed to list *api.Endpoints: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value ### Response: The problem was with the networking and kube2sky not accessing the API, so couldn't get the services. Changing the docker run for the master from, --config=/etc/kubernetes/manifests to --config=/etc/kubernetes/manifests-multi Then in the skydns-rc.yaml the for kube2sky as well as setting the domain, set the host IP address. - -kube_master_url=http://192.168.99.100:8080 #<- your docker machine IP Without the manifests-multi, the host IP is not accessible. This was a simple change but took a bit to track down. I have created a simple set up on GitHub and will maintain this so people don't have to go through this pain just to get a local dev environment up and running. https://github.com/justingrayston/kubernetes-docker-dns
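Putting that fix back into the master's docker run from the question, only the --config flag changes (everything else stays as you had it):

docker run \
  --volume=/:/rootfs:ro \
  --volume=/sys:/sys:ro \
  --volume=/dev:/dev \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
  --volume=/var/run:/var/run:rw \
  --net=host \
  --privileged=true \
  -d \
  gcr.io/google_containers/hyperkube:v1.0.6 \
  /hyperkube kubelet --containerized --hostname-override="127.0.0.1" \
  --address="0.0.0.0" --api-servers=http://localhost:8080 \
  --config=/etc/kubernetes/manifests-multi \
  --cluster_dns=10.0.0.10 --cluster_domain=cluster.local

And in skydns-rc.yaml the kube2sky args gain - -kube_master_url=http://<your-docker-machine-ip>:8080 next to - -domain=cluster.local.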
I am trying to prepare a dev environment for my team, so we can develop, stage and deploy with the same (or near same) environment. Getting a Kubernetes Cluster running locally via http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html was nice and simple. I could then use kubectl to start the pods and services for my application. However, the services IP addresses are going to be different each time you start up. Which is a problem, if your code needs to use them. In Google Container Engine kube DNS means you can access a service by name. Which means the code that uses the service can remain constant between deployments. Now, I know we could piece together the IP and PORT via environment variables, but I wanted to have an identical set up as possible. So I followed some instructions found in various places, both here and in the Kubernetes repo like this. Sure enough with a little editing of the yml files KubeDNS starts up. But an nslookup on kubernetes.default fails. The health check on the DNS also fails (because it can't resolve the test look up) and the instance is shut down and restarted. Running kubectl cluster-info results in: Kubernetes master is running at http://localhost:8080 KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns So all good. However, hitting that endpoint results in: { kind: "Status", apiVersion: "v1", metadata: { }, status: "Failure", message: "no endpoints available for "kube-dns"", code: 500 } I am now at a loss, and know it is something obvious or easy to fix as it seems to all be working. Here is how I start up the cluster and DNS. # Run etcd docker run --net=host \ -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd \ --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data # Run the master docker run \ --volume=/:/rootfs:ro \ --volume=/sys:/sys:ro \ --volume=/dev:/dev \ --volume=/var/lib/docker/:/var/lib/docker:ro \ --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \ --volume=/var/run:/var/run:rw \ --net=host \ --privileged=true \ -d \ gcr.io/google_containers/hyperkube:v1.0.6 \ /hyperkube kubelet --containerized --hostname-override="127.0.0.1" \ --address="0.0.0.0" --api-servers=http://localhost:8080 \ --config=/etc/kubernetes/manifests \ --cluster_dns=10.0.0.10 --cluster_domain=cluster.local # Run the service proxy docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.6 \ /hyperkube proxy --master=http://127.0.0.1:8080 --v=2 # forward local port - after this you should be able to user kubectl locally machine=default; ssh -i ~/.docker/machine/machines/$machine/id_rsa docker@$(docker-machine ip $machine) -L 8080:localhost:8080 All the containers spin up ok, kubectl get nodes reports ok. Note I pass in the dns flags. 
I then start the DNS rc with this file, which is the edited version from here apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v9 namespace: kube-system labels: k8s-app: kube-dns version: v9 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-dns version: v9 template: metadata: labels: k8s-app: kube-dns version: v9 kubernetes.io/cluster-service: "true" spec: containers: - name: etcd image: gcr.io/google_containers/etcd:2.0.9 resources: limits: cpu: 100m memory: 50Mi command: - /usr/local/bin/etcd - -data-dir - /var/etcd/data - -listen-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -advertise-client-urls - http://127.0.0.1:2379,http://127.0.0.1:4001 - -initial-cluster-token - skydns-etcd volumeMounts: - name: etcd-storage mountPath: /var/etcd/data - name: kube2sky image: gcr.io/google_containers/kube2sky:1.11 resources: limits: cpu: 100m memory: 50Mi args: # command = "/kube2sky" - -domain=cluster.local - name: skydns image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c resources: limits: cpu: 100m memory: 50Mi args: # command = "/skydns" - -machines=http://localhost:4001 - -addr=0.0.0.0:53 - -ns-rotate=false - -domain=cluster.local ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 readinessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 1 timeoutSeconds: 5 - name: healthz image: gcr.io/google_containers/exechealthz:1.0 resources: limits: cpu: 10m memory: 20Mi args: - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null - -port=8080 ports: - containerPort: 8080 protocol: TCP volumes: - name: etcd-storage emptyDir: {} dnsPolicy: Default # Don't use cluster DNS. Then start the service (again based on the file in the repo) apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.0.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP I made the assumption based on another SO question that clusterIP is the value I passed into the master, and not the ip of the host machine. I am sure it has to be something obvious or simple that I have missed. Anyone out there who can help? Thanks! UPDATE I found this closed issue over in the GitHub repo. Seems I have an identical problem. I have added to the thread on GitHub, and tried lots of things but still no progress. I tried using different images, but they had different errors (or the same error representing itself differently, I couldn't tell). Everything relating to this that I have found suggests IP restrictions, or firewall/security settings. So I decided to curl the api from the container itself. 
docker exec 49705c38846a echo $(curl http://0.0.0.0:8080/api/v1/services?labels=) % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 908 100 908 0 0 314k 0 --:--:-- --:--:-- --:--:-- 443k { "kind": "ServiceList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/services", "resourceVersion": "948" }, "items": [ { "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes", "uid": "369a9307-796e-11e5-87de-7a0704d1fdad", "resourceVersion": "6", "creationTimestamp": "2015-10-23T10:09:57Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 443, "nodePort": 0 } ], "clusterIP": "10.0.0.1", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } ] } Seems like a valid response to me, so why the JSON parse error coming from kube2Sky!? Failed to list *api.Service: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value Failed to list *api.Endpoints: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value
That token has no explicit permissions. If you run with any authorization mode other than AllowAll, you will find that the account cannot do anything with the API. If you want to stop injecting API tokens, you can remove the ServiceAccount admission controller from the list (in the apiserver options). If you want to stop generating tokens completely, you can remove the private key argument from the controller manager start options.
###Instruction: Kubernetes automatically places a token and certificate in /var/run/secrets/kubernetes.io/serviceaccount of each running container in a pod. This token allows access to the the API Server from any container. Is it possible to either prevent this directory from being added to a container or specify a service account that has zero privileges? ### Response: That token has no explicit permissions. If you run with any authorization mode other than AllowAll, you will find that account cannot do anything with the API. If you want to stop injecting API tokens, you can remove the service account admission controller from the list (in apiserver options). If you want to stop generating tokens completely, you can remove the private key argument from the controller manager start options.
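Concretely, a quick way to see what your apiserver currently enables, with the two flags that answer is talking about shown as comments because the rest of your start-up options stay as they are (flag names worth double-checking against your version):

ps aux | grep [k]ube-apiserver | tr ' ' '\n' | grep admission
# restart the apiserver with ServiceAccount dropped from that list, e.g.
#   --admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota
# and, to stop token generation entirely, start the controller-manager without
#   --service-account-private-key-file=...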
Kubernetes automatically places a token and certificate in /var/run/secrets/kubernetes.io/serviceaccount of each running container in a pod. This token allows access to the the API Server from any container. Is it possible to either prevent this directory from being added to a container or specify a service account that has zero privileges?
Rolling update just scales down one replicationController and scales up another one. Therefore, it deletes the old pods and makes new pods, at a controlled rate. So, if the new replication controller json file has different env vars and the same image, then the new pods will have that too. In fact, even if you don't change anything in the json file, except one label value (you have to change some label), then you will get new pods with the same image and env. I guess you could use this to do a rolling restart? You get to pick what label(s) you want to change when you do a rolling update. There is no formal Kubernetes notion of a "version". You can make a label called "version" if you want, or "contdelivver" or whatever. I think if I were in your shoes, I would look at two options: Option 1: put (at least) two labels on the rcs, one for the docker image version (which, IIUC, is also a continuous delivery version), and one for the "environment version". This could be a git commit, if you store your environment vars in git, or something more casual. So, your pods could have labels like "imgver=1.3,envver=a34b87", or something like that. Option 2: store the current best known replication controller, as a json (or yaml) file in version control (git, svn, whatevs). Then use the revision number from version control as a single label (e.g. "version=r346"). This is not the same as your continuous delivery label. It is a label for the whole configuration of the pod.
###Instruction: Background: We're currently using a continuous delivery pipeline and at the end of the pipeline we deploy the generated Docker image to some server(s) together with the latest application configuration (set as environment variables when starting the Docker container). The continuous delivery build number is used as version for the Docker image and it's currently also this version that gets deployed to the server(s). Sometimes though we need to update the application configuration (environment variables) and reuse an existing Docker image. Today we simply deploy an existing Docker image with an updated configuration. Now we're thinking of switching to Kubernetes instead of our home-built solution. Thus it would be nice for us if the version number generated by our continuous delivery pipeline is reflected as the pod version in Kubernetes as well (even if we deploy the same version of the Docker image that is currently deployed but with different environment variables). Question: I've read the documentation of rolling-update but it doesn't indicate that you can do a rolling-update and only change the environment variables associated with a pod without changing its version. Is this possible? Is there a workaround? Is this something we should avoid altogether and use a different approach that is more "Kubernetes friendly"? ### Response: Rolling update just scales down one replicationController and scales up another one. Therefore, it deletes the old pods and make new pods, at a controlled rate. So, if the new replication controller json file has different env vars and the same image, then the new pods will have that too. In fact, even if you don't change anything in the json file, except one label value (you have to change some label), then you will get new pods with the same image and env. I guess you could use this to do a rolling restart? You get to pick what label(s) you want to change when you do a rolling update. There is no formal Kubernetes notion of a "version". You can make a label called "version" if you want, or "contdelivver" or whatever. I think if I were in your shoes, I would look at two options: Option 1: put (at least) two labels on the rcs, one for the docker image version (which, IIUC is also a continuous delivery version), and one for the "environment version". This could be a git commit, if you store your environment vars in git, or something more casual. So, your pods could have labels like "imgver=1.3,envver=a34b87", or something like that. Option 2: store the current best known replication controller, as a json (or yaml) file in version control (git, svn, whatevs). Then use the revision number from version control as a single label (e.g. "version=r346"). This is not the same as your continuous delivery label. It is a label for the whole configuration of the pod.
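A sketch of option 1 from that answer: keep an image label and an environment/config label on the rc, bump only the latter (plus the env vars) in the new file, and hand it to rolling-update. The rc name and label values here are made up:

# old rc: labels imgver=1.3,envver=a34b87 ; new-rc.yaml: same image, new env vars, envver=b71c02
kubectl rolling-update my-app-a34b87 -f new-rc.yaml
kubectl get pods -l envver=b71c02    # the replacement pods carry the new config label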
Background: We're currently using a continuous delivery pipeline and at the end of the pipeline we deploy the generated Docker image to some server(s) together with the latest application configuration (set as environment variables when starting the Docker container). The continuous delivery build number is used as version for the Docker image and it's currently also this version that gets deployed to the server(s). Sometimes though we need to update the application configuration (environment variables) and reuse an existing Docker image. Today we simply deploy an existing Docker image with an updated configuration. Now we're thinking of switching to Kubernetes instead of our home-built solution. Thus it would be nice for us if the version number generated by our continuous delivery pipeline is reflected as the pod version in Kubernetes as well (even if we deploy the same version of the Docker image that is currently deployed but with different environment variables). Question: I've read the documentation of rolling-update but it doesn't indicate that you can do a rolling-update and only change the environment variables associated with a pod without changing its version. Is this possible? Is there a workaround? Is this something we should avoid altogether and use a different approach that is more "Kubernetes friendly"?
With @rwehner's help, I was finally able to figure it out. Checking the kubelet log revealed: Output: mount.nfs: rpc.statd is not running but is required for remote locking. As soon as I got rpcbind running, the mount worked.
###Instruction: I'm trying to mount an external nfs share in a Replication Controller. When I create the replication controller, the pod is pending. Getting the details on the pod, I get these events: Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 05 Nov 2015 11:28:33 -0700 Thu, 05 Nov 2015 11:28:33 -0700 1 {scheduler } scheduled Successfully assigned web-worker-hthjq to jolt-server-5 Thu, 05 Nov 2015 11:28:43 -0700 Thu, 05 Nov 2015 11:28:43 -0700 1 {kubelet jolt-server-5} failedMount Unable to mount volumes for pod "web-worker-hthjq_default": exit status 32 Thu, 05 Nov 2015 11:28:43 -0700 Thu, 05 Nov 2015 11:28:43 -0700 1 {kubelet jolt-server-5} failedSync Error syncing pod, skipping: exit status 32 My set up is one master and one node on local machines. These machines are running CoreOS. The nfs share exists on another machine on the network. If I shell into the host machine, I can successfully mount the nfs share, so I believe the export is configured correctly. Looking around online, it seems like the only examples of using nfs shares are those defined within Kubernetes (one pod sharing with another). Is there not a way to mount an external share directly from Kubernetes (I don't want to have to mount it to the host machine and then mount from the host machine to the container). Here's my Replication Controller: apiVersion: v1 kind: ReplicationController metadata: labels: name: web-worker name: web-worker spec: replicas: 1 selector: component: web-worker template: metadata: labels: app: task-queue component: web-worker spec: containers: - command: - /sbin/my_init image: quincy/php-apache2:latest name: web-worker ports: - containerPort: 80 protocol: TCP volumeMounts: - name: code-base mountPath: /var/www/html - name: local-secrets mountPath: /secrets volumes: - name: code-base nfs: server: thuey.jolt.local path: "/Users/thuey/Sites/jolt/jolt-web" - name: local-secrets secret: secretName: local-secret UPDATE After thinking about it more, I realized the problem is probably that it can't find the server (thuey.jolt.local). It's probably just looking at the internal DNS. Is that accurate? If so, is there any way around that? UPDATE After attempting this again, it is now working mounting directly from the pod to the networked nfs server. ### Response: With @rwehner's help, I was finally able to figure it out. Checking the kubelet log revealed: Output: mount.nfs: rpc.statd is not running but is required for remote locking. As soon as I got rpcbind running, the mount worked.
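For anyone else hitting exit status 32 on an NFS volume: on a systemd host (and if your kubelet runs under systemd) the missing pieces can usually be found and started like this; unit names vary a little between distros, so treat these as a sketch:

journalctl -u kubelet.service | grep -i mount          # surfaces the underlying mount.nfs error
systemctl status rpcbind.service rpc-statd.service
sudo systemctl start rpcbind.service rpc-statd.service
# then delete and recreate the pod so the kubelet retries the mount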
I'm trying to mount an external nfs share in a Replication Controller. When I create the replication controller, the pod is pending. Getting the details on the pod, I get these events: Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 05 Nov 2015 11:28:33 -0700 Thu, 05 Nov 2015 11:28:33 -0700 1 {scheduler } scheduled Successfully assigned web-worker-hthjq to jolt-server-5 Thu, 05 Nov 2015 11:28:43 -0700 Thu, 05 Nov 2015 11:28:43 -0700 1 {kubelet jolt-server-5} failedMount Unable to mount volumes for pod "web-worker-hthjq_default": exit status 32 Thu, 05 Nov 2015 11:28:43 -0700 Thu, 05 Nov 2015 11:28:43 -0700 1 {kubelet jolt-server-5} failedSync Error syncing pod, skipping: exit status 32 My set up is one master and one node on local machines. These machines are running CoreOS. The nfs share exists on another machine on the network. If I shell into the host machine, I can successfully mount the nfs share, so I believe the export is configured correctly. Looking around online, it seems like the only examples of using nfs shares are those defined within Kubernetes (one pod sharing with another). Is there not a way to mount an external share directly from Kubernetes (I don't want to have to mount it to the host machine and then mount from the host machine to the container). Here's my Replication Controller: apiVersion: v1 kind: ReplicationController metadata: labels: name: web-worker name: web-worker spec: replicas: 1 selector: component: web-worker template: metadata: labels: app: task-queue component: web-worker spec: containers: - command: - /sbin/my_init image: quincy/php-apache2:latest name: web-worker ports: - containerPort: 80 protocol: TCP volumeMounts: - name: code-base mountPath: /var/www/html - name: local-secrets mountPath: /secrets volumes: - name: code-base nfs: server: thuey.jolt.local path: "/Users/thuey/Sites/jolt/jolt-web" - name: local-secrets secret: secretName: local-secret UPDATE After thinking about it more, I realized the problem is probably that it can't find the server (thuey.jolt.local). It's probably just looking at the internal DNS. Is that accurate? If so, is there any way around that? UPDATE After attempting this again, it is now working mounting directly from the pod to the networked nfs server.
Not out of the box (at the time of this writing), but the Kubernetes Ingress api is evolving to support internal loadbalancers. Note the following: Kubernetes Services are round robin loadbalanced by default. You can deploy something like the service loadbalancer [1] and access your services on the ClusterIP of the loadbalancer pod, just remove the hostPort line in the rc configuration [2] to avoid exposing them on the public IP of the vm. [1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer [2] https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/rc.yaml#L35
###Instruction: Currently testing out Kubernetes 1.0.7 on AWS and it creates an external load balancer just fine but I want to know if its possible to create an internal load balancer that is only accessible within the internal subnet. ### Response: Not out of the box (at the time of this writing), but the Kubernetes Ingress api is evolving to support internal loadbalancers. Note the following: Kubernetes Services are round robin loadbalanced by default. You can deploy something like the service loadbalancer [1] and access your services on the ClusterIP of the loadbalancer pod, just remove the hostPort line in the rc configuration [2] to avoid exposing them on the public IP of the vm. [1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer [2] https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/rc.yaml#L35
Currently testing out Kubernetes 1.0.7 on AWS and it creates an external load balancer just fine but I want to know if its possible to create an internal load balancer that is only accessible within the internal subnet.
The template in the ReplicationControllerSpec defines the template for a Pod that the ReplicationController will create, but since it's an abstract template, it doesn't make sense for it to have a creation time. Once the ReplicationController creates a pod, the CreationTimestamp will be set to the time the Pod was created at. The ReplicationController also has its own metadata (not in the template) which should include the CreationTime of the ReplicationController object.
###Instruction: What does it mean when the creationTimestamp for the template is null? "template": { "metadata": { "creationTimestamp": null, "labels": { "name": "kube-template" } }, ### Response: The template in the ReplicationControllerSpec defines the template for a Pod that the ReplicationController will create, but since it's an abstract template, it doesn't make sense for it to have a creation time. Once the ReplicationController creates a pod, the CreationTimestamp will be set to the time the Pod was created at. The ReplicationController also has its own metadata (not in the template) which should include the CreationTime of the ReplicationController object.
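You can see the difference on a live object; the rc name below is a placeholder, and the label selector reuses the one from the question:

kubectl get rc my-rc -o yaml | grep creationTimestamp                      # real time on the rc, null in its template
kubectl get pods -l name=kube-template -o yaml | grep creationTimestamp    # real times on the created pods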
What does it mean when the creationTimestamp for the template is null? "template": { "metadata": { "creationTimestamp": null, "labels": { "name": "kube-template" } },
Passing the devices to the container is not currently supported in Kubernetes. This issue is tracked in https://github.com/kubernetes/kubernetes/issues/5607
###Instruction: Or to put it another way, what can I do in kubernetes so that the container is run with the equivalent of --device=/dev/tty10, as an example. Otherwise accessing a device like that gives an error. [root@87eb47e75ed4 /]# echo foo >> /dev/tty10 bash: /dev/tty10: Operation not permitted I haven't found a way currently to achieve this short of making the container privileged. I was hoping there'd be something settable in the securityContext, perhaps. ### Response: Passing the devices to the container is not currently supported in Kubernetes. This issue is tracked in https://github.com/kubernetes/kubernetes/issues/5607
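Until that issue lands, the only route is the privileged one the question already mentions, which (on a cluster started with --allow-privileged=true) looks roughly like this sketch:

kubectl create -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: tty-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
EOF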
Or to put it another way, what can I do in kubernetes so that the container is run with the equivalent of --device=/dev/tty10, as an example. Otherwise accessing a device like that gives an error. [root@87eb47e75ed4 /]# echo foo >> /dev/tty10 bash: /dev/tty10: Operation not permitted I haven't found a way currently to achieve this short of making the container privileged. I was hoping there'd be something settable in the securityContext, perhaps.
The DNS name for services is as follows: {service-name}.{namespace}.svc.cluster.local. Assuming you configured kubectl to work with your cluster you should be able to get your service and namespace details by following the steps below. Get your namespace $ kubectl get namespaces NAME LABELS STATUS default <none> Active kube-system <none> Active You should ignore the kube-system entry, because that is for the cluster itself. All other entries are your namespaces. By default there will be one extra namespace called default. Get your services $ kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) broker-partition0 name=broker-partition0,type=broker name=broker-partition0 10.203.248.95 5050/TCP broker-partition1 name=broker-partition1,type=broker name=broker-partition1 10.203.249.91 5050/TCP kubernetes component=apiserver,provider=kubernetes <none> 10.203.240.1 443/TCP service-frontend name=service-frontend,service=frontend name=service-frontend 10.203.246.16 80/TCP 104.155.61.198 service-membership0 name=service-membership0,partition=0,service=membership name=service-membership0 10.203.246.242 80/TCP service-membership1 name=service-membership1,partition=1,service=membership name=service-membership1 10.203.248.211 80/TCP This command lists all the services available in your cluster. So for example, if I want to get the IP address of the service-frontend I can use the following DNS: service-frontend.default.svc.cluster.local. Verify DNS with busybox pod You can create a busybox pod and use that pod to execute the nslookup command to query the DNS server. $ kubectl create -f - << EOF apiVersion: v1 kind: Pod metadata: name: busybox namespace: default spec: containers: - image: busybox command: - sleep - "3600" imagePullPolicy: IfNotPresent name: busybox restartPolicy: Always EOF Now you can do an nslookup from the pod in your cluster. $ kubectl exec busybox -- nslookup service-frontend.default.svc.cluster.local Server: 10.203.240.10 Address 1: 10.203.240.10 Name: service-frontend.default.svc.cluster.local Address 1: 10.203.246.16 Here you see that the Address 1 entry is the IP of the service-frontend service, the same as the IP address listed by kubectl get services.
###Instruction: From the kubernetes docs I see that there is a DNS based service discovery mechanism. Does Google Container Engine support this. If so, what's the format of DNS name to discover a service running inside Container Engine. I couldn't find the relevant information in the Container Engine docs. ### Response: The DNS name for services is as follow: {service-name}.{namespace}.svc.cluster.local. Assuming you configured kubectl to work with your cluster you should be able to get your service and namespace details by the following the steps below. Get your namespace $ kubectl get namespaces NAME LABELS STATUS default <none> Active kube-system <none> Active You should ignore the kube-system entry, because that is for the cluster itself. All other entries are your namespaces. By default there will be one extra namespace called default. Get your services $ kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) broker-partition0 name=broker-partition0,type=broker name=broker-partition0 10.203.248.95 5050/TCP broker-partition1 name=broker-partition1,type=broker name=broker-partition1 10.203.249.91 5050/TCP kubernetes component=apiserver,provider=kubernetes <none> 10.203.240.1 443/TCP service-frontend name=service-frontend,service=frontend name=service-frontend 10.203.246.16 80/TCP 104.155.61.198 service-membership0 name=service-membership0,partition=0,service=membership name=service-membership0 10.203.246.242 80/TCP service-membership1 name=service-membership1,partition=1,service=membership name=service-membership1 10.203.248.211 80/TCP This command lists all the services available in your cluster. So for example, if I want to get the IP address of the service-frontend I can use the following DNS: service-frontend.default.svc.cluster.local. Verify DNS with busybox pod You can create a busybox pod and use that pod to execute nslookup command to query the DNS server. $ kubectl create -f - << EOF apiVersion: v1 kind: Pod metadata: name: busybox namespace: default spec: containers: - image: busybox command: - sleep - "3600" imagePullPolicy: IfNotPresent name: busybox restartPolicy: Always EOF Now you can do an nslookup from the pod in your cluster. $ kubectl exec busybox -- nslookup broker-partition0.default.svc.cluster.local Server: 10.203.240.10 Address 1: 10.203.240.10 Name: service-frontend.default.svc.cluster.local Address 1: 10.203.246.16 Here you see that the Addres 1 entry is the IP of the service-frontend service, the same as the IP address listed by the kubectl get services.
From the kubernetes docs I see that there is a DNS based service discovery mechanism. Does Google Container Engine support this. If so, what's the format of DNS name to discover a service running inside Container Engine. I couldn't find the relevant information in the Container Engine docs.
From: https://github.com/kubernetes/kubernetes/pull/12717/files This function func ReadDockerConfigFile() (cfg DockerConfig, err error) is used to parse the config, which is stored in one of: GetPreferredDockercfgPath() + "/config.json" workingDirPath + "/config.json" $HOME/.docker/config.json /.docker/config.json GetPreferredDockercfgPath() + "/.dockercfg" workingDirPath + "/.dockercfg" $HOME/.dockercfg /.dockercfg The first four are the new config format, and the last four are the old format. This helps explain why moving the file to /.dockercfg fixed your issue, but not why there was an issue in the first place.
###Instruction: I am attempting to pull private docker images from Docker Hub. Error: image orgname/imagename:latest not found The info I am seeing on the internet... http://kubernetes.io/v1.0/docs/user-guide/images.html#using-a-private-registry https://github.com/kubernetes/kubernetes/issues/7954 Leads me to believe I should be able to put something like { "https://index.docker.io/v1/": { "auth": "base64pw==", "email": "email@place.com" } } In the kubelet uer's $HOME/.dockercfg and kublet will then authenticate with the container registry before attempting to pull. This doesn't appear to be working. Am I doing something wrong? Is this still possible? I am using the vagrant provisioner located in https://github.com/kubernetes/kubernetes/tree/master/cluster Also: I am aware of the ImagePullSecrets method but am trying to figure out why this isn't working. Update: I moved /root/.dockercfg to /.dockercfg and it now appears to be pulling private images. ### Response: From: https://github.com/kubernetes/kubernetes/pull/12717/files This function func ReadDockerConfigFile() (cfg DockerConfig, err error) is used to parse config which is stored in: GetPreferredDockercfgPath() + "/config.json" workingDirPath + "/config.json" $HOME/.docker/config.json /.docker/config.json GetPreferredDockercfgPath() + "/.dockercfg" workingDirPath + "/.dockercfg" $HOME/.dockercfg /.dockercfg The first four one are new type of secret, and the last four one are the old type. This helps explain why moving the file to /.dockercfg fixed your issue, but not why there was an issue in the first place.
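A quick way to produce and place the file in one of the paths listed above; the credentials are obviously placeholders, and the simplest route is usually to let docker login write the file and copy the result:

docker login                         # writes ~/.dockercfg on older engines, ~/.docker/config.json on newer ones
sudo cp ~/.dockercfg /.dockercfg     # the location that worked in the question's update
echo -n 'myuser:mypassword' | base64 # or build the "auth" value for the json by hand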
I am attempting to pull private docker images from Docker Hub. Error: image orgname/imagename:latest not found The info I am seeing on the internet... http://kubernetes.io/v1.0/docs/user-guide/images.html#using-a-private-registry https://github.com/kubernetes/kubernetes/issues/7954 Leads me to believe I should be able to put something like { "https://index.docker.io/v1/": { "auth": "base64pw==", "email": "email@place.com" } } In the kubelet uer's $HOME/.dockercfg and kublet will then authenticate with the container registry before attempting to pull. This doesn't appear to be working. Am I doing something wrong? Is this still possible? I am using the vagrant provisioner located in https://github.com/kubernetes/kubernetes/tree/master/cluster Also: I am aware of the ImagePullSecrets method but am trying to figure out why this isn't working. Update: I moved /root/.dockercfg to /.dockercfg and it now appears to be pulling private images.
Tl;Dr: Watch this space for Kubernetes 1.2 Till now Kubernetes has only supported L4 loadbalancing. This means the GCE/GKE loadbalancer opens up a tcp connection and just sends traffic to your backend, which is responsible for terminating ssl. As of Kubernetes 1.1, Kubernetes has an "Ingress" resource, but it's currently in Beta and only supports HTTP. It will support different SSL modes in 1.2. So, how to terminate SSL with a normal Kubernetes service? https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/README.md How to create a loadbalancer for this Service? L4: Change NodePort to LoadBalancer (https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8) L7: Deploy a Service loadbalancer (https://github.com/kubernetes/contrib/tree/master/service-loadbalancer#https) How to create a GCE HTTP loadbalancer through Kubernetes? https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/ingress.md#simple-fanout So how to create a GCE HTTPS loadbalancer through Kubernetes? Coming in 1.2, currently the process is manual. If you're not clear on the exact manual steps reply to this and I will clarify (not sure if I should list all of them here and confuse you even more).
###Instruction: Background: I'm pretty new to the Google's Cloud platform so I want to make sure that I'm not is missing anything obvious. We're experimenting with GKE and Kubernetes and we'd like to expose some services over https. I've read the documentation for http(s) load-balancing which seem to suggest that you should maintain your own nginx instance that does SSL terminal and load balancing. To me this looks quite complex (I'm used to working on AWS and its load-balancer (ELB) which has supported SSL termination for ages). Questions: Is creating and maintaining an nginx instance the way to go if all you need is SSL termination in GKE? If so, how is this done? The documentation doesn't really seem to convey this afaict. ### Response: Tl;Dr: Watch this space for Kubernetes 1.2 Till now Kubernetes has only supported L4 loadbalancing. This means the GCE/GKE loadbalancer opens up a tcp connection and just sends traffic to your backend, which is responsible for terminating ssl. As of Kubernetes 1.1, Kubernetes has an "Ingress" resource, but it's currently in Beta and only supports HTTP. It will support different SSL modes in 1.2. So, how to terminate SSL with a normal Kubernetes service? https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/README.md How to create a loadbalancer for this Service? L4: Change NodePort to LoadBalancer (https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8) L7: Deploy a Service loadbalancer (https://github.com/kubernetes/contrib/tree/master/service-loadbalancer#https) How to create a GCE HTTP loadbalancer through Kubernetes? https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/ingress.md#simple-fanout So how to create a GCE HTTPS loadbalancer through Kubernetes? Coming in 1.2, currently the process is manual. If you're not clear on the exact manual steps reply to this and I will clarify (not sure if I should list all of them here and confuse you even more).
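As a concrete sketch of the L4 option above: run an nginx (or similar) pod that terminates TLS itself, with the certificate baked into or mounted in the image, and put a LoadBalancer service in front of it. The app: nginx-ssl label is a placeholder for whatever label your TLS-terminating pods carry:
apiVersion: v1
kind: Service
metadata:
  name: nginx-https
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ssl
The cloud load balancer then just forwards TCP 443 to your pods, which do the SSL work, until the HTTPS-capable Ingress lands in 1.2.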
Background: I'm pretty new to the Google's Cloud platform so I want to make sure that I'm not is missing anything obvious. We're experimenting with GKE and Kubernetes and we'd like to expose some services over https. I've read the documentation for http(s) load-balancing which seem to suggest that you should maintain your own nginx instance that does SSL terminal and load balancing. To me this looks quite complex (I'm used to working on AWS and its load-balancer (ELB) which has supported SSL termination for ages). Questions: Is creating and maintaining an nginx instance the way to go if all you need is SSL termination in GKE? If so, how is this done? The documentation doesn't really seem to convey this afaict.
Use the latest Kubernetes and pass the flag --proxy-mode=iptables to the kube-proxy start command; the HDFS cluster works now.
###Instruction: I deployed kubernetes with flanneld.service enabled in coreos. And then I started hdfs namenode and datanode via kubernetes replication-controller. I also created kubernetes service for namenode. The namenode service ip is 10.100.220.223, while the pod ip of namenode is 10.20.96.4. In my case, one namenode and one datanode happens to be on same host. And namenode pod and datanode pod can ping each other successfully. However I encountered the following two problems when trying to start hdfs datanode: If I used namenode service ip 10.100.220.223 as fs.defaultFS in core-site.xml for datanode. When datanode tried to register itself to namenode via rpc request, namenode got the wrong ip address for the datanode. Normally it should get the pod ip of the datanode, but in this case docker0 inet address of datanode host is reported to namenode. in order to workaround this, I used namenode pod ip 10.20.96.4 in core-site.xml for datanode. This time datanode can't be started at all. The error info reports that "k8s_POD-2fdae8b2_namenode-controller-keptk_default_55b8147c-881f-11e5-abad-02d07c9f6649_e41f815f.bridge" is used as namenode host instead of the namenode pod ip. I tried to search this issue over the network, but nothing helps me. Could you please help me out of this? Thanks. ### Response: Use the latest Kubernetes and pass the flag --proxy-mode=iptables to the kube-proxy start command; the HDFS cluster works now.
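For reference, this is roughly what the flag looks like on the kube-proxy command line; the master address is a placeholder and the exact systemd unit or init script depends on how your CoreOS deployment launches kube-proxy:
kube-proxy --master=http://<master-ip>:8080 --proxy-mode=iptables
In iptables mode the proxy rewrites packets in the kernel instead of relaying them through a userspace process, which avoids the source-address rewriting that confused the namenode here.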
I deployed kubernetes with flanneld.service enabled in coreos. And then I started hdfs namenode and datanode via kubernetes replication-controller. I also created kubernetes service for namenode. The namenode service ip is 10.100.220.223, while the pod ip of namenode is 10.20.96.4. In my case, one namenode and one datanode happens to be on same host. And namenode pod and datanode pod can ping each other successfully. However I encountered the following two problems when trying to start hdfs datanode: If I used namenode service ip 10.100.220.223 as fs.defaultFS in core-site.xml for datanode. When datanode tried to register itself to namenode via rpc request, namenode got the wrong ip address for the datanode. Normally it should get the pod ip of the datanode, but in this case docker0 inet address of datanode host is reported to namenode. in order to workaround this, I used namenode pod ip 10.20.96.4 in core-site.xml for datanode. This time datanode can't be started at all. The error info reports that "k8s_POD-2fdae8b2_namenode-controller-keptk_default_55b8147c-881f-11e5-abad-02d07c9f6649_e41f815f.bridge" is used as namenode host instead of the namenode pod ip. I tried to search this issue over the network, but nothing helps me. Could you please help me out of this? Thanks.
Flocker is supported in Kubernetes release 1.1. A Flocker dataset can be referenced from a PersistentVolume or directly from a Pod volume. http://kubernetes.io/v1.1/examples/flocker/ http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_persistentvolume
###Instruction: Does the Kubernetes volume system support Flocker? If it supports Flocker volumes, could you give an example of using a Flocker volume? Thanks! ### Response: Flocker is supported in Kubernetes release 1.1. A Flocker dataset can be referenced from a PersistentVolume or directly from a Pod volume. http://kubernetes.io/v1.1/examples/flocker/ http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html#_v1_persistentvolume
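A minimal pod sketch along the lines of the linked example, assuming a Flocker dataset named my-flocker-vol has already been created with the Flocker tooling and that the Flocker agents run on your nodes:
apiVersion: v1
kind: Pod
metadata:
  name: flocker-web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    # mount the Flocker dataset into the container
    - name: www-root
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www-root
    flocker:
      datasetName: my-flocker-vol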
Does the Kubernetes volume system support Flocker? If it supports Flocker volumes, could you give an example of using a Flocker volume? Thanks!
"kubectl describe pod busybox" or "kubectl get pod busybox -o yaml" output could be useful. Since you didn't specify, I assume that the busybox pod was created in the default namespace, and that no resource requirements nor nodeSelectors were specified. In many cluster setups, including vagrant, we create a LimitRange for the default namespace to request a nominal amount of CPU for each pod (.1 cores). You should be able to confirm that this is the case using "kubectl get pod busybox -o yaml". We also create a number of system pods automatically. You should be able to see them using "kubectl get pods --all-namespaces -o wide". It is possible for nodes with sufficiently small capacity to fill up with just system pods, though I wouldn't expect this to happen with 2-core nodes. If the busybox pod were created before the nodes were registered, that could be another reason for that event, though I would expect to see a subsequent event for the reason that the pod remained pending even after nodes were created. Please take a look at the troubleshooting guide for more troubleshooting tips, and follow up here on on slack (slack.k8s.io) with more information. http://kubernetes.io/v1.1/docs/troubleshooting.html
###Instruction: I have a picture below of my mac. K8S Cluster(on VirtualBox, 1*master, 2*workers) OS Ubuntu 15.04 K8S version 1.1.1 When I try to create a pod "busybox.yaml" it goes to pending status. How can I resolve it? I pasted the online status below for understanding with a picture (kubectl describe node). Status kubectl get nodes 192.168.56.11 kubernetes.io/hostname=192.168.56.11 Ready 7d 192.168.56.12 kubernetes.io/hostname=192.168.56.12 Ready 7d kubectl get ev 1h 39s 217 busybox Pod FailedScheduling {scheduler } no nodes available to schedule pods kubectl get pods NAME READY STATUS RESTARTS AGE busybox 0/1 Pending 0 1h And I also added one more status. ### Response: "kubectl describe pod busybox" or "kubectl get pod busybox -o yaml" output could be useful. Since you didn't specify, I assume that the busybox pod was created in the default namespace, and that no resource requirements nor nodeSelectors were specified. In many cluster setups, including vagrant, we create a LimitRange for the default namespace to request a nominal amount of CPU for each pod (.1 cores). You should be able to confirm that this is the case using "kubectl get pod busybox -o yaml". We also create a number of system pods automatically. You should be able to see them using "kubectl get pods --all-namespaces -o wide". It is possible for nodes with sufficiently small capacity to fill up with just system pods, though I wouldn't expect this to happen with 2-core nodes. If the busybox pod were created before the nodes were registered, that could be another reason for that event, though I would expect to see a subsequent event for the reason that the pod remained pending even after nodes were created. Please take a look at the troubleshooting guide for more troubleshooting tips, and follow up here on on slack (slack.k8s.io) with more information. http://kubernetes.io/v1.1/docs/troubleshooting.html
I have a picture below of my mac. K8S Cluster(on VirtualBox, 1*master, 2*workers) OS Ubuntu 15.04 K8S version 1.1.1 When I try to create a pod "busybox.yaml" it goes to pending status. How can I resolve it? I pasted the online status below for understanding with a picture (kubectl describe node). Status kubectl get nodes 192.168.56.11 kubernetes.io/hostname=192.168.56.11 Ready 7d 192.168.56.12 kubernetes.io/hostname=192.168.56.12 Ready 7d kubectl get ev 1h 39s 217 busybox Pod FailedScheduling {scheduler } no nodes available to schedule pods kubectl get pods NAME READY STATUS RESTARTS AGE busybox 0/1 Pending 0 1h And I also added one more status.
I noted your use case in the feature request for exposing secrets as environment variables: https://github.com/kubernetes/kubernetes/issues/4710 It's mainly the quoting that makes this tricky in shell. The following worked for me interactively and should work in a script, but additional quoting would be needed if specified as an argument to "sh -c". (ls -1 secretdir | while read var ; do echo export ${var}=$(cat secretdir/${var}) ; done; echo yourcommand) | sh - There may be more elegant ways to do this.
###Instruction: Update: Kubernetes supports adding secrets directly to environment variables now. See pod example on github Original post: I've been using files created by Kubernetes Secrets to store sensitive configs, but I always end up writing an extra layer into the containers or overriding the CMD to get the contents of the secret files into environment variables before running like normal. I'd like a bash script to do this for me. I found a ruby script that does something similar, but my ruby and bash skills aren't quite good enough to complete this. Here's the ruby script from https://blog.oestrich.org/2015/09/kubernetes-secrets-to-env-file/ env = {} Dir["#{ARGV[1]}/*"].each do |file| key = file.split("/").last key = key.gsub("-", "_").upcase env[key] = File.read(file).strip end File.open(ARGV[0], "w") do |file| env.each do |key, value| file.puts(%{export #{key}="#{value}"}) end end With a bash script that does something similar to the above, it would be nice if it could be made generic, so that it checks if the directory exists, and if not (e.g. in a plain Docker environment), it will assume that the environment variables are already set by some other means. How would I write a script to do this? ### Response: I noted your use case in the feature request for exposing secrets as environment variables: https://github.com/kubernetes/kubernetes/issues/4710 It's mainly the quoting that makes this tricky in shell. The following worked for me interactively and should work in a script, but additional quoting would be needed if specified as an argument to "sh -c". (ls -1 secretdir | while read var ; do echo export ${var}=$(cat secretdir/${var}) ; done; echo yourcommand) | sh - There may be more elegant ways to do this.
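Since the question's update mentions that newer releases can inject secrets straight into environment variables, here is a hedged sketch of that route; the secret name mysecret and its key password are placeholders for whatever your secret actually contains:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    env:
    # APP_PASSWORD is populated from the secret at container start
    - name: APP_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
With this, no wrapper script is needed at all; the variable is already set when your CMD runs.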
Update: Kubernetes supports adding secrets directly to environment variables now. See pod example on github Original post: I've been using files created by Kubernetes Secrets to store sensitive configs, but I always end up writing an extra layer into the containers or overriding the CMD to get the contents of the secret files into environment variables before running like normal. I'd like a bash script to do this for me. I found a ruby script that does something similar, but my ruby and bash skills aren't quite good enough to complete this. Here's the ruby script from https://blog.oestrich.org/2015/09/kubernetes-secrets-to-env-file/ env = {} Dir["#{ARGV[1]}/*"].each do |file| key = file.split("/").last key = key.gsub("-", "_").upcase env[key] = File.read(file).strip end File.open(ARGV[0], "w") do |file| env.each do |key, value| file.puts(%{export #{key}="#{value}"}) end end With a bash script that does something similar to the above, it would be nice if it could be made generic, so that it checks if the directory exists, and if not (e.g. in a plain Docker environment), it will assume that the environment variables are already set by some other means. How would I write a script to do this?
You can do this a few ways, one is keep doing what you're doing in a single file, another is to use labels to specify which environment's config to use, and the other is use namespaces. I personally recommend namespaces, this way you can have separate .yml files for each environment that potentially spins up the same pods, but with different configurations, so to do this you would have staging, prod, etc namespaces. Namespaces also are a great way to have the same kubernetes cluster have a concept of staging and production. Additionally you can specify permissions for certain namespaces. Here are the docs on namespaces https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/design/namespaces.md
###Instruction: We are moving our ruby microservices to kubernetes and we used to hold environment specific configuration in the config/application.yml. With kubernetes, you can create environment specific files for each service, e.g. config/kubernetes/production.yml etc. While kubernetes pod configuration file is able to hold environmental variables, it seems that you cannot really hold structured data in there. For an example, in application.yml we have development: &development process: notifier: type: 'terminal-notifier' ... production: &production process: notifier: type: 'airbrake' api_key: 'xxxx' host: 'xxx.xxx.com' ... Is it reasonable to continue this practice with kubernetes and break the environments up in the application.yml or does kubernetes have some other best practices for provisioning structured configuration for pod? Note that until all services are migrated, we basically have to hold the configurations as such: kubernetes_staging: <<: *staging ... ### Response: You can do this a few ways, one is keep doing what you're doing in a single file, another is to use labels to specify which environment's config to use, and the other is use namespaces. I personally recommend namespaces, this way you can have separate .yml files for each environment that potentially spins up the same pods, but with different configurations, so to do this you would have staging, prod, etc namespaces. Namespaces also are a great way to have the same kubernetes cluster have a concept of staging and production. Additionally you can specify permissions for certain namespaces. Here are the docs on namespaces https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/design/namespaces.md
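A rough sketch of the namespace route, with staging as a placeholder environment name and a deliberately minimal replication controller; the image name and the APP_ENV variable are assumptions about how your services read their environment:
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-ruby-service
  namespace: staging
spec:
  replicas: 1
  selector:
    app: my-ruby-service
  template:
    metadata:
      labels:
        app: my-ruby-service
    spec:
      containers:
      - name: app
        image: myorg/my-ruby-service:latest
        env:
        # tell the app which environment-specific config block to load
        - name: APP_ENV
          value: staging
You would then point kubectl at a given environment with --namespace=staging (or set the namespace in your kubeconfig context), keeping one manifest per environment while the application.yml keys stay per-environment inside the app.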
We are moving our ruby microservices to kubernetes and we used to hold environment specific configuration in the config/application.yml. With kubernetes, you can create environment specific files for each service, e.g. config/kubernetes/production.yml etc. While kubernetes pod configuration file is able to hold environmental variables, it seems that you cannot really hold structured data in there. For an example, in application.yml we have development: &development process: notifier: type: 'terminal-notifier' ... production: &production process: notifier: type: 'airbrake' api_key: 'xxxx' host: 'xxx.xxx.com' ... Is it reasonable to continue this practice with kubernetes and break the environments up in the application.yml or does kubernetes have some other best practices for provisioning structured configuration for pod? Note that until all services are migrated, we basically have to hold the configurations as such: kubernetes_staging: <<: *staging ...
I finally solved the problem. In the yaml files describing the Replication Controllers, I was using keyring: in the volume section: keyring: "ceph.client.admin.keyring" After I generated a Ceph secret and changed the yaml files to use secretRef: secretRef: name: "ceph-secret" Kubernetes was able to map and mount the Ceph volumes and the pods began to start normally. I don't know why using keyring: doesn't work in this case.
###Instruction: I am setting up a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I followed the Kubernetes install tutorial for Ubuntu. Each bare metal server also has 2T of disk space exported using Ceph 0.94.5. Everything is working fine, but when I try to start a Replication Controller I get the following (kubectl get pods): NAME READY STATUS RESTARTS AGE site2-zecnf 0/1 Image: site-img is ready, container is creating 0 12m The pod will be in this Not Ready state forever, but, if I kill it and start it again, it will run fine (sometimes I have to repeat this operation a few times though). Once the pod is running, everything works just fine. If, for some reason, the pod dies, it's restarted by Kubernetes, but can enter in this Not Ready state again. Running: kubectl describe pod java-site2-crctv I get (some fields deleted): Namespace: default Status: Pending Replication Controllers: java-site2 (1/1 replicas created) Containers: java-site: Image: javasite-img State: Waiting Reason: Image: javasite-img is ready, container is creating Ready: False Restart Count: 0 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Sat, 14 Nov 2015 12:37:56 -0200 Sat, 14 Nov 2015 12:37:56 -0200 1 {scheduler } scheduled Successfully assigned java-site2-crctv to 10.70.2.3 Sat, 14 Nov 2015 12:37:57 -0200 Sat, 14 Nov 2015 12:45:29 -0200 46 {kubelet 10.70.2.3} failedMount Unable to mount volumes for pod "java-site2-crctv_default": exit status 22 Sat, 14 Nov 2015 12:37:57 -0200 Sat, 14 Nov 2015 12:45:29 -0200 46 {kubelet 10.70.2.3} failedSync Error syncing pod, skipping: exit status 22 The pod cannot mount the volume. But, if I mount the volumes (rdb blocks) by hand in a local folder in all nodes, the problem is gone (pods start without problems). It seems to me that Kubernetes isn't able to map them (sudo rbd map java-site-vol), only to mount them (sudo mount /dev/rbd/rbd/java-site-vol /...). Should I map all Ceph volumes that I use or should Kubernetes do that? ### Response: I finally solved the problem. In the yaml files describing the Replication Controllers, I was using keyring: in the volume section: keyring: "ceph.client.admin.keyring" After I generated a Ceph secret and changed the yaml files to use secretRef: secretRef: name: "ceph-secret" Kubernetes was able to map and mount the Ceph volumes and the pods began to start normally. I don't know why using keyring: doesn't work in this case.
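For completeness, a sketch of what the referenced secret and the volume stanza can look like; the monitor address, pool, image name and the base64 value are placeholders — the value is typically the base64-encoded key field taken from ceph.client.admin.keyring:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: <base64-encoded ceph client key>

# and in the pod template of the replication controller:
volumes:
- name: java-site-vol
  rbd:
    monitors:
    - <ceph-monitor-host>:6789
    pool: rbd
    image: java-site-vol
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
With secretRef in place the kubelet performs the rbd map itself using the key from the secret, so no manual mapping on the nodes is needed.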
I am setting up a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I followed the Kubernetes install tutorial for Ubuntu. Each bare metal server also has 2T of disk space exported using Ceph 0.94.5. Everything is working fine, but when I try to start a Replication Controller I get the following (kubectl get pods): NAME READY STATUS RESTARTS AGE site2-zecnf 0/1 Image: site-img is ready, container is creating 0 12m The pod will be in this Not Ready state forever, but, if I kill it and start it again, it will run fine (sometimes I have to repeat this operation a few times though). Once the pod is running, everything works just fine. If, for some reason, the pod dies, it's restarted by Kubernetes, but can enter in this Not Ready state again. Running: kubectl describe pod java-site2-crctv I get (some fields deleted): Namespace: default Status: Pending Replication Controllers: java-site2 (1/1 replicas created) Containers: java-site: Image: javasite-img State: Waiting Reason: Image: javasite-img is ready, container is creating Ready: False Restart Count: 0 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Sat, 14 Nov 2015 12:37:56 -0200 Sat, 14 Nov 2015 12:37:56 -0200 1 {scheduler } scheduled Successfully assigned java-site2-crctv to 10.70.2.3 Sat, 14 Nov 2015 12:37:57 -0200 Sat, 14 Nov 2015 12:45:29 -0200 46 {kubelet 10.70.2.3} failedMount Unable to mount volumes for pod "java-site2-crctv_default": exit status 22 Sat, 14 Nov 2015 12:37:57 -0200 Sat, 14 Nov 2015 12:45:29 -0200 46 {kubelet 10.70.2.3} failedSync Error syncing pod, skipping: exit status 22 The pod cannot mount the volume. But, if I mount the volumes (rdb blocks) by hand in a local folder in all nodes, the problem is gone (pods start without problems). It seems to me that Kubernetes isn't able to map them (sudo rbd map java-site-vol), only to mount them (sudo mount /dev/rbd/rbd/java-site-vol /...). Should I map all Ceph volumes that I use or should Kubernetes do that?
Currently, Google Cloud Platform Networks only support IPv4, so connecting to IPv6 addresses from GKE is not possible.
###Instruction: Background I'd like to connect Wordpress docker container to a Google Could SQL instance. By default Google Cloud SQL only expose an IPv6 address and preferably I'd like to connect Wordpress to this address but I can't find a way to do so (see my other stackoverflow post for details). Question I'd like to know if it's possible to connect to an IPv6 address from a pod running in Kubernetes (GKE)? If so how? ### Response: Currently, Google Cloud Platform Networks only support IPv4, so connecting to IPv6 addresses from GKE is not possible.
Background I'd like to connect Wordpress docker container to a Google Could SQL instance. By default Google Cloud SQL only expose an IPv6 address and preferably I'd like to connect Wordpress to this address but I can't find a way to do so (see my other stackoverflow post for details). Question I'd like to know if it's possible to connect to an IPv6 address from a pod running in Kubernetes (GKE)? If so how?
The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the Google Container Engine release notes: The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1 APIs like autoscaling will not be available via kubectl until next week's push of the kubectl binary. along with the solution (download the newer binary manually).
###Instruction: I'd like to try out the new Ingress resource available in Kubernetes 1.1 in Google Container Engine (GKE). But when I try to create for example the following resource: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress spec: backend: serviceName: testsvc servicePort: 80 using: $ kubectl create -f test-ingress.yaml I end up with the following error message: error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"] error: no objects passed to create When I run kubectl version it shows: Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"} But I seem to have the latest kubectl component installed since running gcloud components update kubectl just gives me: All components are up to date. So how do I enable the extensions/v1beta1 in Kubernetes/GKE? ### Response: The issue is that your client (kubectl) doesn't support the new ingress resource because it hasn't been updated to 1.1 yet. This is mentioned in the Google Container Engine release notes: The packaged kubectl is version 1.0.7, consequently new Kubernetes 1.1 APIs like autoscaling will not be available via kubectl until next week's push of the kubectl binary. along with the solution (download the newer binary manually).
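A hedged sketch of grabbing a matching client by hand until the packaged one catches up; the URL follows the usual release-bucket pattern, so double-check the exact version and platform you need:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/linux/amd64/kubectl
chmod +x kubectl
./kubectl version
Once the client reports 1.1.x, the extensions/v1beta1 Ingress manifest should be accepted.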
I'd like to try out the new Ingress resource available in Kubernetes 1.1 in Google Container Engine (GKE). But when I try to create for example the following resource: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress spec: backend: serviceName: testsvc servicePort: 80 using: $ kubectl create -f test-ingress.yaml I end up with the following error message: error: could not read an encoded object from test-ingress.yaml: API version "extensions/v1beta1" in "test-ingress.yaml" isn't supported, only supports API versions ["v1"] error: no objects passed to create When I run kubectl version it shows: Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"} But I seem to have the latest kubectl component installed since running gcloud components update kubectl just gives me: All components are up to date. So how do I enable the extensions/v1beta1 in Kubernetes/GKE?
The warning is so that folks don't assume that using emptyDir provides a persistent storage layer. An emptyDir volume will persist as long as the pod is running on the same host. But if the host is replaced or it's disk becomes corrupted, then all data would be lost. Using network mounted storage is one way to work around both of these failure modes. If you want to use replicated storage instead, that works as well.
###Instruction: In the Kubernetes example of Elasticsearch production deployment, there is a warning about using emptyDir, and advises to "be adapted according to your storage needs", which is linked to the documentation of persistent storage on Kubernetes. Is it better to use a persistent storage, which is an external storage for the node, and so needs (high) I/O over network, or can we deploy a reliable Elasticsearch using multiple data nodes with local emptyDir storage? Context: We're deploying our Kubernetes on commodity hardware, and we prefer not to use SAN for the storage layer (because it doesn't seem like commodity). ### Response: The warning is so that folks don't assume that using emptyDir provides a persistent storage layer. An emptyDir volume will persist as long as the pod is running on the same host. But if the host is replaced or it's disk becomes corrupted, then all data would be lost. Using network mounted storage is one way to work around both of these failure modes. If you want to use replicated storage instead, that works as well.
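To make the trade-off concrete, here are the two volume flavours side by side for a data node; the gcePersistentDisk variant assumes a pre-created disk named es-data-1 in the same zone, and you would substitute whatever network-attached or replicated storage you actually run on your commodity hardware:
# node-local scratch space: fast, but gone if the node or its disk dies
volumes:
- name: es-persistent-storage
  emptyDir: {}

# network-attached disk: survives node replacement
volumes:
- name: es-persistent-storage
  gcePersistentDisk:
    pdName: es-data-1
    fsType: ext4
If you rely on Elasticsearch's own replication across several data pods with emptyDir, make sure replicas never end up on the same node, or a single host failure can still lose data.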
In the Kubernetes example of Elasticsearch production deployment, there is a warning about using emptyDir, and advises to "be adapted according to your storage needs", which is linked to the documentation of persistent storage on Kubernetes. Is it better to use a persistent storage, which is an external storage for the node, and so needs (high) I/O over network, or can we deploy a reliable Elasticsearch using multiple data nodes with local emptyDir storage? Context: We're deploying our Kubernetes on commodity hardware, and we prefer not to use SAN for the storage layer (because it doesn't seem like commodity).
I started a bash shell inside the container and executed my command instead and that turned out to work: command: ["/bin/bash", "-c", "PID=`pidof java` && kill -SIGTERM $PID && while ps -p $PID > /dev/null; do sleep 1; done;"] Without /bin/bash I couldn't get it working.
###Instruction: I've read the docs on graceful termination of a pod in Kubernetes but I'm not quite sure how to map my specific use case of shutting down a Java process gracefully. What I want to do is to run the following bash command as a part of the termination process: $ kill -SIGTERM `pidof java` I've tried this: ... lifecycle: preStop: exec: command: ["kill", "-SIGTERM", "`pidof java`"] ... But nothing happens but the Java shutdown hook doesn't seem to kick-in when I stop the pod (kubectl stop pod pod-xxx). I suspect that the pidof java part of the kill command doesn't work (but I'm not sure). How would I do this in Kubernetes? ### Response: I started a bash shell inside the container and executed my command instead and that turned out to work: command: ["/bin/bash", "-c", "PID=`pidof java` && kill -SIGTERM $PID && while ps -p $PID > /dev/null; do sleep 1; done;"] Without /bin/bash I couldn't get it working.
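Dropped into the container spec, that ends up looking roughly like the following; you may also want terminationGracePeriodSeconds set high enough for the Java shutdown hook to finish before the kubelet sends SIGKILL:
lifecycle:
  preStop:
    exec:
      # run the kill-and-wait loop through a shell so `pidof java` is evaluated
      command: ["/bin/bash", "-c", "PID=`pidof java` && kill -SIGTERM $PID && while ps -p $PID > /dev/null; do sleep 1; done;"]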
I've read the docs on graceful termination of a pod in Kubernetes but I'm not quite sure how to map my specific use case of shutting down a Java process gracefully. What I want to do is to run the following bash command as a part of the termination process: $ kill -SIGTERM `pidof java` I've tried this: ... lifecycle: preStop: exec: command: ["kill", "-SIGTERM", "`pidof java`"] ... But nothing happens but the Java shutdown hook doesn't seem to kick-in when I stop the pod (kubectl stop pod pod-xxx). I suspect that the pidof java part of the kill command doesn't work (but I'm not sure). How would I do this in Kubernetes?
As of Kubernetes v1.1, you can now implement a Horizontal Pod Autoscaler: http://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaler.html
###Instruction: I'm developing a Docker-based web service, where each subscriber has private access to their own Docker container running in the cloud, exposing port 443. I've used nginx-proxy/docker-gen successfully to serve multiple Docker containers from the same VM, with just port 443 exposed to the public net. This works fine ... but what do I do when the subscribers saturate the VM resources? (As a simple example, I may have a practical limit of 10 subscribers' containers on a single DigitalOcean 2Gb instance serving as a Docker host.) Eg when subscriber #11 signs up, I need to have a new Docker host ready and waiting to start up that new container. In other words, I want to do horizontal autoscaling of my Docker hosts, responsive to user subscription demand. Doing some service discovery and making the containers publicly-addressable would be nice. I'm trying to work out what the best solution is. Kubernetes 1.1 seems to support auto-scaling of Pods (ie basically increasing the number of containers...) but not the auto-scaling of the container hosts ("minions" in Kubernetes-speak??) I've looked at the following projects which seem close to what I need: Deis - no explicit autoscaling as far as I can tell Tsuru - possible autoscaling solution but limited to count/RAM Mesos/Mesosphere - probably much more complex than necessary Can anybody make any useful suggestions?? ### Response: As of Kubernetes v1.1, you can now implement a Horizontal Pod Autoscaler: http://kubernetes.io/v1.1/docs/user-guide/horizontal-pod-autoscaler.html
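For reference, a sketch of a Horizontal Pod Autoscaler; this uses the later stable autoscaling/v1 schema — in the 1.1 beta the resource lived under extensions/v1beta1 with slightly different field names, so check the reference for your exact version. The controller name web is a placeholder:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: web
  minReplicas: 2
  maxReplicas: 10
  # add replicas when average CPU across pods exceeds this percentage
  targetCPUUtilizationPercentage: 80
Note this scales the number of pods behind an existing controller, not the underlying hosts.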
I'm developing a Docker-based web service, where each subscriber has private access to their own Docker container running in the cloud, exposing port 443. I've used nginx-proxy/docker-gen successfully to serve multiple Docker containers from the same VM, with just port 443 exposed to the public net. This works fine ... but what do I do when the subscribers saturate the VM resources? (As a simple example, I may have a practical limit of 10 subscribers' containers on a single DigitalOcean 2Gb instance serving as a Docker host.) Eg when subscriber #11 signs up, I need to have a new Docker host ready and waiting to start up that new container. In other words, I want to do horizontal autoscaling of my Docker hosts, responsive to user subscription demand. Doing some service discovery and making the containers publicly-addressable would be nice. I'm trying to work out what the best solution is. Kubernetes 1.1 seems to support auto-scaling of Pods (ie basically increasing the number of containers...) but not the auto-scaling of the container hosts ("minions" in Kubernetes-speak??) I've looked at the following projects which seem close to what I need: Deis - no explicit autoscaling as far as I can tell Tsuru - possible autoscaling solution but limited to count/RAM Mesos/Mesosphere - probably much more complex than necessary Can anybody make any useful suggestions??
Firstly, please see https://github.com/kubernetes/kubernetes/issues/11793#issuecomment-150410114 if you haven't already. I guess I can kill and restart pod using a mechanism similar to that used by kubectl delete and create but I am unable to get all the required pod and node lists. I would suggest writing a control loop instead. When you create a pod it's stored in the master and you can easily create a Kubernetes client to retrieve this pod, in the same way the Kubernetes scheduler does. See [1] for how to access the api through a client. With this in mind, I would suggest writing a control loop. There are several existing examples [2], but a very basic controller is the Ingress controller (just so you aren't confused by all the code in production controllers) [3]. A problem you will face is getting the Kubernetes scheduler to ignore the pod. See discussion on the github issue for solutions. It is possible to go the route you're on and implement it in kubectl, if you still want to. Run: kubectl get pods -o wide --v=7 Note this output has node names, and kubectl should show you the REST calls it's making in the process. I suspect you will run into problems soon though, as you really don't just want to create/delete, because there's a high chance the scheduler will put the pod on the same node. [1] kubernetes go client used storage of nodes and cluster [2] https://github.com/kubernetes/kubernetes/tree/master/pkg/controller [3] https://github.com/kubernetes/contrib/tree/master/Ingress/controllers
###Instruction: I want to implement a rescheduler like functionality which basically kills pods if it decides that the pods could be rescheduled in a better way (based on requiring less number of nodes/fitting etc). Till now I have created a new kubectl command which I want to run whenever I want to reschedule. I have also looked at the code and rescheduler proposal in the docs. But, I am unable to access the pod details that would be needed to decide which pod to kill if any from the command's run function(present in pkg/kubectl/cmd/newCommand.go) . I guess I can kill and restart pod using a mechanism similar to that used by kubectl delete and create but I am unable to get all the required pod and node lists. For example, the objs variable in pkg/kubectl/cmd/get.go (used for kubectl get) contains pod details but there is no data of which node they are scheduled on and what are the resource capacities for that node. I would be grateful if someone could give some idea of how to get these details. Also, if it is easier to implement it at some other place instead of as a kubectl command then such suggestions are also welcomed. ### Response: Firstly, please see https://github.com/kubernetes/kubernetes/issues/11793#issuecomment-150410114 if you haven't already. I guess I can kill and restart pod using a mechanism similar to that used by kubectl delete and create but I am unable to get all the required pod and node lists. I would suggest writing a control loop instead. When you create a pod it's stored in the master and you can easily create a Kubernetes client to retrieve this pod, in the same way the Kubernetes scheduler does. See [1] for how to access the api through a client. With this in mind, I would suggest writing a control loop. There are several existing examples [2], but a very basic controller is the Ingress controller (just so you aren't confused by all the code in production controllers) [3]. A problem you will face is getting the Kubernetes scheduler to ignore the pod. See discussion on the github issue for solutions. It is possible to go the route you're on and implement it in kubectl, if you still want to. Run: kubectl get pods -o wide --v=7 Note this output has node names, and kubectl should show you the REST calls it's making in the process. I suspect you will run into problems soon though, as you really don't just want to create/delete, because there's a high chance the scheduler will put the pod on the same node. [1] kubernetes go client used storage of nodes and cluster [2] https://github.com/kubernetes/kubernetes/tree/master/pkg/controller [3] https://github.com/kubernetes/contrib/tree/master/Ingress/controllers
I want to implement a rescheduler like functionality which basically kills pods if it decides that the pods could be rescheduled in a better way (based on requiring less number of nodes/fitting etc). Till now I have created a new kubectl command which I want to run whenever I want to reschedule. I have also looked at the code and rescheduler proposal in the docs. But, I am unable to access the pod details that would be needed to decide which pod to kill if any from the command's run function(present in pkg/kubectl/cmd/newCommand.go) . I guess I can kill and restart pod using a mechanism similar to that used by kubectl delete and create but I am unable to get all the required pod and node lists. For example, the objs variable in pkg/kubectl/cmd/get.go (used for kubectl get) contains pod details but there is no data of which node they are scheduled on and what are the resource capacities for that node. I would be grateful if someone could give some idea of how to get these details. Also, if it is easier to implement it at some other place instead of as a kubectl command then such suggestions are also welcomed.
It appears that you can't currently set a ulimit but it is an open issue: https://github.com/kubernetes/kubernetes/issues/3595
###Instruction: How do I set ulimit for containers in Kubernetes? (specifically ulimit -u) ### Response: It appears that you can't currently set a ulimit but it is an open issue: https://github.com/kubernetes/kubernetes/issues/3595
How do I set ulimit for containers in Kubernetes? (specifically ulimit -u)
I found the Solution, the reason is the version of kube-apiserver,kube-controller-manager and kube-scheduler does not match with the kubelet. the detail: https://github.com/kubernetes/kubernetes/issues/17154
###Instruction: I create a kubernetes cluster to test. but cannot create rc. I got errorreason: 'failedScheduling' no nodes available to schedule pods: I1112 04:24:34.626614 6 factory.go:214] About to try and schedule pod my-nginx-63t4p I1112 04:24:34.626635 6 scheduler.go:127] Failed to schedule: &{{ } {my-nginx-63t4p my-nginx- default /api/v1/namespaces/default/pods/my-nginx-63t4p c4198c29-88ef-11e5-af0e-002590fdff2c 1054 0 2015-11-12 03:45:07 +0000 UTC <nil> map[app:nginx] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"my-nginx","uid":"c414bbd3-88ef-11e5-8682-002590fdf940","apiVersion":"v1","resourceVersion":"1050"}}]} {[{default-token-879cw {<nil> <nil> <nil> <nil> <nil> 0xc20834c030 <nil> <nil> <nil> <nil> <nil>}}] [{nginx nginx [] [] [{ 0 80 TCP }] [] {map[] map[]} [{default-token-879cw true /var/run/secrets/kubernetes.io/serviceaccount}] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil>}] Always 0xc20834c028 <nil> ClusterFirst map[] default false []} {Pending [] <nil> []}} I1112 04:24:34.626720 6 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"my-nginx-63t4p", UID:"c4198c29-88ef-11e5-af0e-002590fdff2c", APIVersion:"v1", ResourceVersion:"1054", FieldPath:""}): reason: 'failedScheduling' no nodes available to schedule pods the status of pod like : core@core-1-86 ~ $ kubectl get po -o wide NAME READY STATUS RESTARTS AGE NODE my-nginx-3w98h 0/1 Pending 0 56m my-nginx-4fau8 0/1 Pending 0 56m my-nginx-9zc4f 0/1 Pending 0 56m my-nginx-fzz5i 0/1 Pending 0 56m my-nginx-hqqpt 0/1 Pending 0 56m my-nginx-pm2bo 0/1 Pending 0 56m my-nginx-rf3tk 0/1 Pending 0 56m my-nginx-v1dj3 0/1 Pending 0 56m my-nginx-viiop 0/1 Pending 0 56m my-nginx-yy23r 0/1 Pending 0 56m the example rc : core@core-1-85 ~ $ cat wk/rc-nginx.yaml apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 10 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 and the node status in cluster is : core@core-1-85 ~ $ kubectl get node NAME LABELS STATUS AGE 10.12.1.90 kubernetes.io/hostname=10.12.1.90 Ready 37m 10.12.1.92 kubernetes.io/hostname=10.12.1.92 Ready 37m 10.12.1.93 kubernetes.io/hostname=10.12.1.93 Ready 37m 10.12.1.94 kubernetes.io/hostname=10.12.1.94 Ready 38m 10.12.1.95 kubernetes.io/hostname=10.12.1.95 Ready 38m 10.12.1.96 kubernetes.io/hostname=10.12.1.96 Ready 38m 10.12.1.97 kubernetes.io/hostname=10.12.1.97 Ready 38m 10.12.1.98 kubernetes.io/hostname=10.12.1.98 Ready 41m core-1-89 kubernetes.io/hostname=core-1-89 Ready 22m ### Response: I found the Solution, the reason is the version of kube-apiserver,kube-controller-manager and kube-scheduler does not match with the kubelet. the detail: https://github.com/kubernetes/kubernetes/issues/17154
I create a kubernetes cluster to test. but cannot create rc. I got errorreason: 'failedScheduling' no nodes available to schedule pods: I1112 04:24:34.626614 6 factory.go:214] About to try and schedule pod my-nginx-63t4p I1112 04:24:34.626635 6 scheduler.go:127] Failed to schedule: &{{ } {my-nginx-63t4p my-nginx- default /api/v1/namespaces/default/pods/my-nginx-63t4p c4198c29-88ef-11e5-af0e-002590fdff2c 1054 0 2015-11-12 03:45:07 +0000 UTC <nil> map[app:nginx] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"my-nginx","uid":"c414bbd3-88ef-11e5-8682-002590fdf940","apiVersion":"v1","resourceVersion":"1050"}}]} {[{default-token-879cw {<nil> <nil> <nil> <nil> <nil> 0xc20834c030 <nil> <nil> <nil> <nil> <nil>}}] [{nginx nginx [] [] [{ 0 80 TCP }] [] {map[] map[]} [{default-token-879cw true /var/run/secrets/kubernetes.io/serviceaccount}] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil>}] Always 0xc20834c028 <nil> ClusterFirst map[] default false []} {Pending [] <nil> []}} I1112 04:24:34.626720 6 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"my-nginx-63t4p", UID:"c4198c29-88ef-11e5-af0e-002590fdff2c", APIVersion:"v1", ResourceVersion:"1054", FieldPath:""}): reason: 'failedScheduling' no nodes available to schedule pods the status of pod like : core@core-1-86 ~ $ kubectl get po -o wide NAME READY STATUS RESTARTS AGE NODE my-nginx-3w98h 0/1 Pending 0 56m my-nginx-4fau8 0/1 Pending 0 56m my-nginx-9zc4f 0/1 Pending 0 56m my-nginx-fzz5i 0/1 Pending 0 56m my-nginx-hqqpt 0/1 Pending 0 56m my-nginx-pm2bo 0/1 Pending 0 56m my-nginx-rf3tk 0/1 Pending 0 56m my-nginx-v1dj3 0/1 Pending 0 56m my-nginx-viiop 0/1 Pending 0 56m my-nginx-yy23r 0/1 Pending 0 56m the example rc : core@core-1-85 ~ $ cat wk/rc-nginx.yaml apiVersion: v1 kind: ReplicationController metadata: name: my-nginx spec: replicas: 10 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 and the node status in cluster is : core@core-1-85 ~ $ kubectl get node NAME LABELS STATUS AGE 10.12.1.90 kubernetes.io/hostname=10.12.1.90 Ready 37m 10.12.1.92 kubernetes.io/hostname=10.12.1.92 Ready 37m 10.12.1.93 kubernetes.io/hostname=10.12.1.93 Ready 37m 10.12.1.94 kubernetes.io/hostname=10.12.1.94 Ready 38m 10.12.1.95 kubernetes.io/hostname=10.12.1.95 Ready 38m 10.12.1.96 kubernetes.io/hostname=10.12.1.96 Ready 38m 10.12.1.97 kubernetes.io/hostname=10.12.1.97 Ready 38m 10.12.1.98 kubernetes.io/hostname=10.12.1.98 Ready 41m core-1-89 kubernetes.io/hostname=core-1-89 Ready 22m
Tim, I did run it again using your steps, no difference, didn't work, however today I switched to the version 1.1 docs here: http://kubernetes.io/v1.1/docs/getting-started-guides/docker.html and also switched container versions, currently using: gcr.io/google_containers/etcd:2.2.1 gcr.io/google_containers/hyperkube:v1.1.1 Lo and behold...it works!!! Containers can now talk to services! Thanks for the responses
###Instruction: Followed this guide to starting a local-machine kubernetes cluster: http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html I've created various pods with .yaml files and everything works, I can access nginx and mysql using container IPs (in the 172.17.x.x range, with docker0), however when I create services, service IPs are in the 10.0.0.x range, unreachable from other containers. Isn't kube-proxy supposed to create iptables rules automatically, providing access to containers behind the service IP? No iptables changes are happening, and other containers can't reach services. Thanks! ### Response: Tim, I did run it again using your steps, no difference, didn't work, however today I switched to the version 1.1 docs here: http://kubernetes.io/v1.1/docs/getting-started-guides/docker.html and also switched container versions, currently using: gcr.io/google_containers/etcd:2.2.1 gcr.io/google_containers/hyperkube:v1.1.1 Lo and behold...it works!!! Containers can now talk to services! Thanks for the responses
Followed this guide to starting a local-machine kubernetes cluster: http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html I've created various pods with .yaml files and everything works, I can access nginx and mysql using container IPs (in the 172.17.x.x range, with docker0), however when I create services, service IPs are in the 10.0.0.x range, unreachable from other containers. Isn't kube-proxy supposed to create iptables rules automatically, providing access to containers behind the service IP? No iptables changes are happening, and other containers can't reach services. Thanks!
Try passing this to the shell: "command": [ "/bin/bash", "-c", "export PGPASSWORD=password; psql -h myhost -U root -d AppPostgresDB < /db-backup/backup.sql" ]
###Instruction: I'm trying to create a pod with Postgres. After initialize, the Pod has to execute the following command: "lifecycle": { "postStart": { "exec": { "command": [ "export", "PGPASSWORD=password;", "psql", "-h", "myhost", "-U", "root", "-d", "AppPostgresDB", "<", "/db-backup/backup.sql" ] } } }, Without these command the pod works perfectly. I get the following status: NAME READY STATUS RESTARTS AGE postgres-import 0/1 ExitCode:137 0 15s I get these events: Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id cfa5f8177beb Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id 15ad0166af04 Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id cfa5f8177beb Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id cfa5f8177beb Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id d910391582e9 Mon, 16 Nov 2015 16:13:01 +0100 Mon, 16 Nov 2015 16:13:01 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id d910391582e9 Mon, 16 Nov 2015 16:13:11 +0100 Mon, 16 Nov 2015 16:13:11 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id d910391582e9 What can I do to solve this issue? Thanks ### Response: Try passing this to the shell: "command": [ "/bin/bash", "-c", "export PGPASSWORD=password; psql -h myhost -U root -d AppPostgresDB < /db-backup/backup.sql" ]
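Dropped back into the pod manifest, that looks roughly like the following (host, credentials, database name and backup path are the question's own placeholders):
"lifecycle": {
  "postStart": {
    "exec": {
      "command": [
        "/bin/bash",
        "-c",
        "export PGPASSWORD=password; psql -h myhost -U root -d AppPostgresDB < /db-backup/backup.sql"
      ]
    }
  }
}
Passing the whole pipeline as a single -c string lets the shell handle the environment variable and the < redirection, which exec alone cannot do.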
I'm trying to create a pod with Postgres. After initialize, the Pod has to execute the following command: "lifecycle": { "postStart": { "exec": { "command": [ "export", "PGPASSWORD=password;", "psql", "-h", "myhost", "-U", "root", "-d", "AppPostgresDB", "<", "/db-backup/backup.sql" ] } } }, Without these command the pod works perfectly. I get the following status: NAME READY STATUS RESTARTS AGE postgres-import 0/1 ExitCode:137 0 15s I get these events: Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id cfa5f8177beb Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id 15ad0166af04 Mon, 16 Nov 2015 16:12:50 +0100 Mon, 16 Nov 2015 16:12:50 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id cfa5f8177beb Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id cfa5f8177beb Mon, 16 Nov 2015 16:13:00 +0100 Mon, 16 Nov 2015 16:13:00 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} created Created with docker id d910391582e9 Mon, 16 Nov 2015 16:13:01 +0100 Mon, 16 Nov 2015 16:13:01 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} started Started with docker id d910391582e9 Mon, 16 Nov 2015 16:13:11 +0100 Mon, 16 Nov 2015 16:13:11 +0100 1 {kubelet ip-10-0-0-69.eu-west-1.compute.internal} spec.containers{postgres-import} killing Killing with docker id d910391582e9 What can I do to solve this issue? Thanks
Kubernetes uses the IP-per-pod model. If I understand correctly, you want to create three mongo pods and write the IP addresses of the three pods into /etc/hosts in each container. Modifying the /etc/hosts files directly might not be a good idea for many reasons (e.g., the pod may die and be replaced). For peer discovery in kubernetes, you need to:
1. Find out the IP addresses of the peers.
2. Update your application with the addresses.
(1) is achievable using a Headless Service. (2) requires you to write a sidecar container that runs alongside your mongo containers, performs (1), and configures your application. The sidecar container is highly application-specific and you may want to read some related stackoverflow questions about doing this for mongodb. As for (1), you can create a Headless Service by using this service.yaml with the clusterIP set to None.
spec:
  clusterIP: None
Then, you can create a replication controller which creates the desired number of mongo pods. For example, you can use mongo-controller.yaml, replace the gcePersistentDisk with a desired local disk volume type (e.g. emptyDir or hostPath), and change the replica number to 3. Each of the mongo pods will be assigned an IP address automatically and is labeled with name=mongo. The headless service uses a label selector to find the pods. When querying DNS with the service name from a node or a container, it will return a list of IP addresses of the mongo pods. E.g.,
$ host mongo
mongo.default.svc.cluster.local has address 10.245.0.137
mongo.default.svc.cluster.local has address 10.245.3.80
mongo.default.svc.cluster.local has address 10.245.1.128
You can get the addresses in the sidecar container you wrote and configure the mongodb-specific settings accordingly.
###Instruction: I hope everyone here is doing good. I am trying to find a way to add entries to the containers /etc/hosts file while spinning up a pod. I was just wondering to know if there is any option/parameter that I could mention in my "pod1.json" which adds the entries to the containers /etc/hosts when its being created. Something like "--add-host node1.example.com:${node1ip}" that serves the same purpose for docker as shown below. docker run \ --name mongo \ -v /home/core/mongo-files/data:/data/db \ -v /home/core/mongo-files:/opt/keyfile \ --hostname="node1.example.com" \ --add-host node1.example.com:${node1ip} \ --add-host node2.example.com:${node2ip} \ --add-host node3.example.com:${node3ip} \ -p 27017:27017 -d mongo:2.6.5 \ --smallfiles \ --keyFile /opt/keyfile/mongodb-keyfile \ --replSet "rs0" Any pointers are highly appreciated. Thank you. Regards, Aj ### Response: Kubernetes uses the IP-per-pod model. If I understand correctly, you want to create three mongo pods and write the IP addresses of the three pods into /etc/hosts in each container. Modifying the /etc/hosts files directly might not be a good idea for many reasons (e.g., the pod may die and be replaced). For peer discovery in kubernetes, you need to:
1. Find out the IP addresses of the peers.
2. Update your application with the addresses.
(1) is achievable using a Headless Service. (2) requires you to write a sidecar container that runs alongside your mongo containers, performs (1), and configures your application. The sidecar container is highly application-specific and you may want to read some related stackoverflow questions about doing this for mongodb. As for (1), you can create a Headless Service by using this service.yaml with the clusterIP set to None.
spec:
  clusterIP: None
Then, you can create a replication controller which creates the desired number of mongo pods. For example, you can use mongo-controller.yaml, replace the gcePersistentDisk with a desired local disk volume type (e.g. emptyDir or hostPath), and change the replica number to 3. Each of the mongo pods will be assigned an IP address automatically and is labeled with name=mongo. The headless service uses a label selector to find the pods. When querying DNS with the service name from a node or a container, it will return a list of IP addresses of the mongo pods. E.g.,
$ host mongo
mongo.default.svc.cluster.local has address 10.245.0.137
mongo.default.svc.cluster.local has address 10.245.3.80
mongo.default.svc.cluster.local has address 10.245.1.128
You can get the addresses in the sidecar container you wrote and configure the mongodb-specific settings accordingly.
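A compact sketch of the headless service described above, assuming the mongo pods carry the label name: mongo and listen on the default port:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  # headless: no cluster IP, DNS returns the pod IPs directly
  clusterIP: None
  selector:
    name: mongo
  ports:
  - port: 27017
    targetPort: 27017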
I hope everyone here is doing good. I am trying to find a way to add entries to the containers /etc/hosts file while spinning up a pod. I was just wondering to know if there is any option/parameter that I could mention in my "pod1.json" which adds the entries to the containers /etc/hosts when its being created. Something like "--add-host node1.example.com:${node1ip}" that serves the same purpose for docker as shown below. docker run \ --name mongo \ -v /home/core/mongo-files/data:/data/db \ -v /home/core/mongo-files:/opt/keyfile \ --hostname="node1.example.com" \ --add-host node1.example.com:${node1ip} \ --add-host node2.example.com:${node2ip} \ --add-host node3.example.com:${node3ip} \ -p 27017:27017 -d mongo:2.6.5 \ --smallfiles \ --keyFile /opt/keyfile/mongodb-keyfile \ --replSet "rs0" Any pointers are highly appreciated. Thank you. Regards, Aj
Very interesting question! To answer it I believe we need to walk the Star Wars walk and Use The Source: The ReplicationManager has some hints concerning expectations. Then, there is controller_utils.go with some more indications. However, the core of the calculation seems to be in updateReplicaCount. UPDATE: My colleague Stefan Schimanski just pointed out to me that in fact the answer is a bit more complicated; the key is FilterActivePods:
func FilterActivePods(pods []api.Pod) []*api.Pod {
    var result []*api.Pod
    for i := range pods {
        if api.PodSucceeded != pods[i].Status.Phase &&
            api.PodFailed != pods[i].Status.Phase &&
            pods[i].DeletionTimestamp == nil {
            result = append(result, &pods[i])
        }
    }
    return result
}
This means the ultimate condition is: pods which have not terminated yet and are not in graceful termination. Note that the definition of 'scheduled' in the context of Kubernetes is simply pod.spec.nodeName != "". The Kubelet on a specific node watches the API Server for pods that have a matching nodeName and will then launch the pod on said node.
###Instruction: When watching a replication controller, it returns it’s most recent replicas count under ReplicationControllerStatus. I could not find anywhere in the documentation what the status of the pod needs to be, in order for it to be included there. Is it enough for the pod to be scheduled? I’ve noticed a replication controller reporting pods in it’s status even if the pods are still pending. ### Response: Very interesting question! For that to answer I believe we need to walk the Star Wars walk and Use The Source: The ReplicationManager has some hints concerning expectations Then, there is controller_utils.go with some more indications However, the core of the calculation seems to be in updateReplicaCount UPDATE: My colleague Stefan Schimanski just pointed out to me that in fact the answer is a bit more complicated; key is FilterActivePods: func FilterActivePods(pods []api.Pod) []*api.Pod { var result []*api.Pod for i := range pods { if api.PodSucceeded != pods[i].Status.Phase && api.PodFailed != pods[i].Status.Phase && pods[i].DeletionTimestamp == nil { result = append(result, &pods[i]) } } return result } This means the ultimate condition is: pods which have not terminated yet and are not in graceful termination. Note that the definition of 'scheduled' in the context of Kubernetes is simply pod.spec.nodeName != "" The Kubelet on a specific node watches the API Server for pods that have a matching nodeName and will then launch the pod on said node.
When watching a replication controller, it returns its most recent replica count under ReplicationControllerStatus. I could not find anywhere in the documentation what the status of the pod needs to be in order for it to be included there. Is it enough for the pod to be scheduled? I’ve noticed a replication controller reporting pods in its status even if the pods are still pending.
If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can direct traffic to pods that use different port numbers. Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
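As a sketch of the named-port approach (the names and numbers here are assumptions for illustration): give the container port a name in the pod template and reference that name as the service's targetPort:
# in the pod template (container spec)
ports:
- name: web
  containerPort: 8080
# in the service spec
ports:
- name: http
  port: 80
  targetPort: web          # matched per pod against the container port's name, not a fixed number
Because the lookup happens by name on each pod, pods rolled out later with, say, containerPort 8081 under the same name web remain reachable through the unchanged service.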
###Instruction: Background: Let's say I have a replication controller with some pods. When these pods were first deployed they were configured to expose port 8080. A service (of type LoadBalancer) was also create to expose port 8080 publicly. Later we decide that we want to export an additional port from the pods (port 8081). We change the pod definition and do a rolling-update with no downtime, great! But we want this port to be publicly accessible as well. Question: Is there a good way to update a service without downtime (for example by adding a an additional port to expose)? If I just do: kubectl replace -f my-service-with-an-additional-port.json I get the following error message: Replace failedspec.clusterIP: invalid value '': field is immutable ### Response: If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can direct target to pods using different port numbers. Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
Background: Let's say I have a replication controller with some pods. When these pods were first deployed they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling-update with no downtime, great! But we want this port to be publicly accessible as well. Question: Is there a good way to update a service without downtime (for example by adding an additional port to expose)? If I just do: kubectl replace -f my-service-with-an-additional-port.json I get the following error message: Replace failedspec.clusterIP: invalid value '': field is immutable
In descending order, from most mature and battle-tested at scale to less so:
Marathon, an Apache Mesos framework
Kubernetes
Docker Swarm
HashiCorp's Nomad
###Instruction: I've been tasked with evaluating container management solutions. I'm aware there is a large number of options, but we need a production-ready, on-premises solution. What are the options? ### Response: In descending order, from most mature and battle-tested at scale to less so:
Marathon, an Apache Mesos framework
Kubernetes
Docker Swarm
HashiCorp's Nomad
I've been tasked with evaluating container management solutions. I'm aware there is a large number of options, but we need a production-ready, on-premises solution. What are the options?
This is a bug and is fixed by https://github.com/kubernetes/kubernetes/pull/16432
###Instruction: On my Kubernetes cluster on GKE, I have the following persistent volume claims (PVCs): kind: PersistentVolumeClaim apiVersion: v1 metadata: name: registry spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi and: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgresql-blobs spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi Amongst others, I have the following persistent volume defined: kind: PersistentVolume apiVersion: v1 metadata: name: pv0003 spec: capacity: storage: 100Gi accessModes: - ReadWriteOnce - ReadOnlyMany gcePersistentDisk: pdName: registry fsType: ext4 Now, both claims claimed the same volume: bronger:~$ kubectl describe pvc postgresql-blobs registry Name: postgresql-blobs Namespace: default Status: Bound Volume: pv0003 Labels: <none> Capacity: 100Gi Access Modes: RWO,ROX Name: registry Namespace: default Status: Bound Volume: pv0003 Labels: <none> Capacity: 100Gi Access Modes: RWO,ROX Funny enough, the PV knows only about one of the claims: bronger:~$ kubectl describe pv pv0003 Name: pv0003 Labels: <none> Status: Bound Claim: default/postgresql-blobs Reclaim Policy: Retain Access Modes: RWO,ROX Capacity: 100Gi Message: Source: Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine) PDName: registry FSType: ext4 Partition: 0 ReadOnly: false How can I prevent this from happening? ### Response: This is a bug and is fixed by https://github.com/kubernetes/kubernetes/pull/16432
On my Kubernetes cluster on GKE, I have the following persistent volume claims (PVCs): kind: PersistentVolumeClaim apiVersion: v1 metadata: name: registry spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi and: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgresql-blobs spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi Amongst others, I have the following persistent volume defined: kind: PersistentVolume apiVersion: v1 metadata: name: pv0003 spec: capacity: storage: 100Gi accessModes: - ReadWriteOnce - ReadOnlyMany gcePersistentDisk: pdName: registry fsType: ext4 Now, both claims claimed the same volume: bronger:~$ kubectl describe pvc postgresql-blobs registry Name: postgresql-blobs Namespace: default Status: Bound Volume: pv0003 Labels: <none> Capacity: 100Gi Access Modes: RWO,ROX Name: registry Namespace: default Status: Bound Volume: pv0003 Labels: <none> Capacity: 100Gi Access Modes: RWO,ROX Funny enough, the PV knows only about one of the claims: bronger:~$ kubectl describe pv pv0003 Name: pv0003 Labels: <none> Status: Bound Claim: default/postgresql-blobs Reclaim Policy: Retain Access Modes: RWO,ROX Capacity: 100Gi Message: Source: Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine) PDName: registry FSType: ext4 Partition: 0 ReadOnly: false How can I prevent this from happening?
By watching changes to a list of Endpoints: GET /api/v1/watch/namespaces/{namespace}/endpoints
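A rough sketch of such a watch from outside the cluster using curl; the master address, bearer token and service name are placeholders to substitute with your own values:
$ curl -k -H "Authorization: Bearer $TOKEN" \
    https://$MASTER_IP/api/v1/watch/namespaces/default/endpoints/my-headless-service
Every time the set of pod IPs behind the headless service changes, the API server streams a new JSON event containing the updated Endpoints object.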
###Instruction: Is there a way to discover all the endpoints of a headless service from outside the cluster? Preferably using DNS or Static IPs ### Response: By watching changes to a list of Endpoints: GET /api/v1/watch/namespaces/{namespace}/endpoints
Is there a way to discover all the endpoints of a headless service from outside the cluster? Preferably using DNS or Static IPs
The issue turned out to be an MTU issue between the node and the master. Once that was fixed the problem was resolved.
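For anyone debugging something similar, a quick way to compare MTUs on the node and the master is shown below; the interface names are assumptions and will differ per environment (an overlay interface such as flannel0 may or may not exist):
# run on both the node and the master
$ ip link show eth0 | grep -o 'mtu [0-9]*'
$ ip link show flannel0 | grep -o 'mtu [0-9]*'   # only if an overlay network is in use
A mismatch (for example an overlay MTU larger than the underlying link allows) can cause exactly this kind of silent node-to-apiserver communication failure.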
###Instruction: I've been working with a 6 node cluster for the last few weeks without issue. Earlier today we ran into an open file issue (https://github.com/kubernetes/kubernetes/pull/12443/files) and I patched and restarted kube-proxy. Since then, all rc deployed pods to ALL BUT node-01 get stuck in pending state and there log messages stating the cause. Looking at the docker daemon on the nodes, the containers in the pod are actually running and a delete of the rc removes them. It appears to be some sort of callback issue between the state according to kubelet and the kube-apiserver. Cluster is running v1.0.3 Here's an example of the state docker run --rm -it lachie83/kubectl:prod get pods --namespace=kube-system -o wide NAME READY STATUS RESTARTS AGE NODE kube-dns-v8-i0yac 0/4 Pending 0 4s 10.1.1.35 kube-dns-v8-jti2e 0/4 Pending 0 4s 10.1.1.34 get events Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-i0yac Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-i0yac Pod scheduled {scheduler } Successfully assigned kube-dns-v8-i0yac to 10.1.1.35 Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-jti2e Pod scheduled {scheduler } Successfully assigned kube-dns-v8-jti2e to 10.1.1.34 Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-jti2e scheduler log I0916 06:25:42.897814 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-jti2e", UID:"c1cafebe-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670117", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-jti2e to 10.1.1.34 I0916 06:25:42.904195 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-i0yac", UID:"c1cafc69-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670118", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-i0yac to 10.1.1.35 tailing kubelet log file during pod create tail -f kubelet.kube-node-03.root.log.INFO.20150916-060744.10668 I0916 06:25:04.448916 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:25:24.449253 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:25:44.449522 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:26:04.449774 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:26:24.450400 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:26:44.450995 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:27:04.451501 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:27:24.451910 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:27:44.452511 10668 config.go:253] Setting pods for source file : {[] 0 file} kubelet process root@kube-node-03:/var/log/kubernetes# ps -ef | grep kubelet root 10668 1 1 06:07 ? 
00:00:13 /opt/bin/kubelet --address=10.1.1.34 --port=10250 --hostname_override=10.1.1.34 --api_servers=https://kube-master-01.sj.lithium.com:6443 --logtostderr=false --log_dir=/var/log/kubernetes --cluster_dns=10.1.2.53 --config=/etc/kubelet/conf --cluster_domain=prod-kube-sjc1-1.internal --v=4 --tls-cert-file=/etc/kubelet/certs/kubelet.pem --tls-private-key-file=/etc/kubelet/certs/kubelet-key.pem node list docker run --rm -it lachie83/kubectl:prod get nodes NAME LABELS STATUS 10.1.1.30 kubernetes.io/hostname=10.1.1.30,name=node-1 Ready 10.1.1.32 kubernetes.io/hostname=10.1.1.32,name=node-2 Ready 10.1.1.34 kubernetes.io/hostname=10.1.1.34,name=node-3 Ready 10.1.1.35 kubernetes.io/hostname=10.1.1.35,name=node-4 Ready 10.1.1.42 kubernetes.io/hostname=10.1.1.42,name=node-5 Ready 10.1.1.43 kubernetes.io/hostname=10.1.1.43,name=node-6 Ready ### Response: The issue turned out to be an MTU issue between the node and the master. Once that was fixed the problem was resolved.
I've been working with a 6 node cluster for the last few weeks without issue. Earlier today we ran into an open file issue (https://github.com/kubernetes/kubernetes/pull/12443/files) and I patched and restarted kube-proxy. Since then, all rc deployed pods to ALL BUT node-01 get stuck in pending state and there log messages stating the cause. Looking at the docker daemon on the nodes, the containers in the pod are actually running and a delete of the rc removes them. It appears to be some sort of callback issue between the state according to kubelet and the kube-apiserver. Cluster is running v1.0.3 Here's an example of the state docker run --rm -it lachie83/kubectl:prod get pods --namespace=kube-system -o wide NAME READY STATUS RESTARTS AGE NODE kube-dns-v8-i0yac 0/4 Pending 0 4s 10.1.1.35 kube-dns-v8-jti2e 0/4 Pending 0 4s 10.1.1.34 get events Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-i0yac Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-i0yac Pod scheduled {scheduler } Successfully assigned kube-dns-v8-i0yac to 10.1.1.35 Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8-jti2e Pod scheduled {scheduler } Successfully assigned kube-dns-v8-jti2e to 10.1.1.34 Wed, 16 Sep 2015 06:25:42 +0000 Wed, 16 Sep 2015 06:25:42 +0000 1 kube-dns-v8 ReplicationController successfulCreate {replication-controller } Created pod: kube-dns-v8-jti2e scheduler log I0916 06:25:42.897814 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-jti2e", UID:"c1cafebe-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670117", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-jti2e to 10.1.1.34 I0916 06:25:42.904195 10076 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-v8-i0yac", UID:"c1cafc69-5c3b-11e5-b3c4-020443b6797d", APIVersion:"v1", ResourceVersion:"670118", FieldPath:""}): reason: 'scheduled' Successfully assigned kube-dns-v8-i0yac to 10.1.1.35 tailing kubelet log file during pod create tail -f kubelet.kube-node-03.root.log.INFO.20150916-060744.10668 I0916 06:25:04.448916 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:25:24.449253 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:25:44.449522 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:26:04.449774 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:26:24.450400 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:26:44.450995 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:27:04.451501 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:27:24.451910 10668 config.go:253] Setting pods for source file : {[] 0 file} I0916 06:27:44.452511 10668 config.go:253] Setting pods for source file : {[] 0 file} kubelet process root@kube-node-03:/var/log/kubernetes# ps -ef | grep kubelet root 10668 1 1 06:07 ? 
00:00:13 /opt/bin/kubelet --address=10.1.1.34 --port=10250 --hostname_override=10.1.1.34 --api_servers=https://kube-master-01.sj.lithium.com:6443 --logtostderr=false --log_dir=/var/log/kubernetes --cluster_dns=10.1.2.53 --config=/etc/kubelet/conf --cluster_domain=prod-kube-sjc1-1.internal --v=4 --tls-cert-file=/etc/kubelet/certs/kubelet.pem --tls-private-key-file=/etc/kubelet/certs/kubelet-key.pem node list docker run --rm -it lachie83/kubectl:prod get nodes NAME LABELS STATUS 10.1.1.30 kubernetes.io/hostname=10.1.1.30,name=node-1 Ready 10.1.1.32 kubernetes.io/hostname=10.1.1.32,name=node-2 Ready 10.1.1.34 kubernetes.io/hostname=10.1.1.34,name=node-3 Ready 10.1.1.35 kubernetes.io/hostname=10.1.1.35,name=node-4 Ready 10.1.1.42 kubernetes.io/hostname=10.1.1.42,name=node-5 Ready 10.1.1.43 kubernetes.io/hostname=10.1.1.43,name=node-6 Ready
Starting with the available documentation: in your case on logging and volumes. Taking these two sources together we arrive at something like the following: ... containers: - name: syslogtest image: ubuntu:14.04 volumeMounts: - name: logvol mountPath: /dev/log readOnly: false volumes: - name: logvol source: hostDir: path: /dev/log ...
###Instruction: With docker, I can pass log-driver=syslog command line option to forward container logs to syslog. How do I pass these docker arguments via Kubernetes yaml/json descriptor? ### Response: Starting with the available documentation: in your case on logging and volumes. Taking these two sources together we arrive at something like the following: ... containers: - name: syslogtest image: ubuntu:14.04 volumeMounts: - name: logvol mountPath: /dev/log readOnly: false volumes: - name: logvol source: hostDir: path: /dev/log ...
With docker, I can pass the log-driver=syslog command-line option to forward container logs to syslog. How do I pass these docker arguments via a Kubernetes yaml/json descriptor?
You'll get "no volume plugins matched" if the rbd command isn't installed and in the path. As the example specifies, you need to ensure that ceph is installed on your Kubernetes nodes. For instance, in Fedora: $ sudo yum -y install ceph-common I'll file an issue to clarify the error messages.
###Instruction: I follow the example to use rbd in kubernetes, but can not success. who can help me!! the error : Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289702 1254 volumes.go:114] Could not create volume builder for pod 5df3610e-86c8-11e5-bc34-002590fdf95c: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289770 1254 kubelet.go:1210] Unable to mount volumes for pod "rbd2_default": can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched; skipping pod Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.299458 1254 pod_workers.go:111] Error syncing pod 5df3610e-86c8-11e5-bc34-002590fdf95c, skipping: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> 
PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched And The template file I used rbd-with-secret.json: core@core-1-94 ~/kubernetes/examples/rbd $ cat rbd-with-secret.json { "apiVersion": "v1", "id": "rbdpd2", "kind": "Pod", "metadata": { "name": "rbd2" }, "spec": { "nodeSelector": {"kubernetes.io/hostname" :"10.12.1.97"}, "containers": [ { "name": "rbd-rw", "image": "kubernetes/pause", "volumeMounts": [ { "mountPath": "/mnt/rbd", "name": "rbdpd" } ] } ], "volumes": [ { "name": "rbdpd", "rbd": { "monitors": [ "10.14.1.33:6789", "10.14.1.35:6789", "10.14.1.36:6789" ], "pool": "rbd", "image": "foo", "user": "admin", "secretRef": {"name": "ceph-secret"}, "fsType": "ext4", "readOnly": true } } ] } } The secret: apiVersion: v1 kind: Secret metadata: name: ceph-secret data: key: QVFBemV6bFdZTXdXQWhBQThxeG1IT2NKa0QrYnE0K3RZUmtsVncK the ceph config is in /etc/ceph/ core@core-1-94 ~/kubernetes/examples/rbd $ ls -alh /etc/ceph total 20K drwxr-xr-x 2 root root 4.0K Nov 6 18:38 . drwxr-xr-x 26 root root 4.0K Nov 9 17:07 .. -rw------- 1 root root 63 Nov 4 11:27 ceph.client.admin.keyring -rw-r--r-- 1 root root 264 Nov 6 18:38 ceph.conf -rw-r--r-- 1 root root 384 Nov 6 14:35 ceph.conf.orig -rw------- 1 root root 0 Nov 4 11:27 tmpkqDKwf and the key as : core@core-1-94 ~/kubernetes/examples/rbd $ sudo cat /etc/ceph/ceph.client.admin.keyring [client.admin] key = AQAzezlWYMwWAhAA8qxmHOcJkD+bq4+tYRklVw== ### Response: You'll get "no volume plugins matched" if the rbd command isn't installed and in the path. As the example specifies, you need to ensure that ceph is installed on your Kubernetes nodes. For instance, in Fedora: $ sudo yum -y install ceph-common I'll file an issue to clarify the error messages.
I follow the example to use rbd in kubernetes, but can not success. who can help me!! the error : Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289702 1254 volumes.go:114] Could not create volume builder for pod 5df3610e-86c8-11e5-bc34-002590fdf95c: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.289770 1254 kubelet.go:1210] Unable to mount volumes for pod "rbd2_default": can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched; skipping pod Nov 09 17:58:03 core-1-97 kubelet[1254]: E1109 17:58:03.299458 1254 pod_workers.go:111] Error syncing pod 5df3610e-86c8-11e5-bc34-002590fdf95c, skipping: can't use volume plugins for (volume.Spec){Name:(string)rbdpd VolumeSource:(api.VolumeSource){HostPath:(*api.HostPathVolumeSource)<nil> EmptyDir:(*api.EmptyDirVolumeSource)<nil> GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> GitRepo:(*api.GitRepoVolumeSource)<nil> Secret:(*api.SecretVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> 
PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)<nil> RBD:(*api.RBDVolumeSource){CephMonitors:([]string)[10.14.1.33:6789 10.14.1.35:6789 10.14.1.36:6789] RBDImage:(string)foo FSType:(string)ext4 RBDPool:(string)rbd RadosUser:(string)admin Keyring:(string) SecretRef:(*api.LocalObjectReference){Name:(string)ceph-secret} ReadOnly:(bool)true}} PersistentVolumeSource:(api.PersistentVolumeSource){GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)<nil> AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)<nil> HostPath:(*api.HostPathVolumeSource)<nil> Glusterfs:(*api.GlusterfsVolumeSource)<nil> NFS:(*api.NFSVolumeSource)<nil> RBD:(*api.RBDVolumeSource)<nil> ISCSI:(*api.ISCSIVolumeSource)<nil>}}: no volume plugin matched And The template file I used rbd-with-secret.json: core@core-1-94 ~/kubernetes/examples/rbd $ cat rbd-with-secret.json { "apiVersion": "v1", "id": "rbdpd2", "kind": "Pod", "metadata": { "name": "rbd2" }, "spec": { "nodeSelector": {"kubernetes.io/hostname" :"10.12.1.97"}, "containers": [ { "name": "rbd-rw", "image": "kubernetes/pause", "volumeMounts": [ { "mountPath": "/mnt/rbd", "name": "rbdpd" } ] } ], "volumes": [ { "name": "rbdpd", "rbd": { "monitors": [ "10.14.1.33:6789", "10.14.1.35:6789", "10.14.1.36:6789" ], "pool": "rbd", "image": "foo", "user": "admin", "secretRef": {"name": "ceph-secret"}, "fsType": "ext4", "readOnly": true } } ] } } The secret: apiVersion: v1 kind: Secret metadata: name: ceph-secret data: key: QVFBemV6bFdZTXdXQWhBQThxeG1IT2NKa0QrYnE0K3RZUmtsVncK the ceph config is in /etc/ceph/ core@core-1-94 ~/kubernetes/examples/rbd $ ls -alh /etc/ceph total 20K drwxr-xr-x 2 root root 4.0K Nov 6 18:38 . drwxr-xr-x 26 root root 4.0K Nov 9 17:07 .. -rw------- 1 root root 63 Nov 4 11:27 ceph.client.admin.keyring -rw-r--r-- 1 root root 264 Nov 6 18:38 ceph.conf -rw-r--r-- 1 root root 384 Nov 6 14:35 ceph.conf.orig -rw------- 1 root root 0 Nov 4 11:27 tmpkqDKwf and the key as : core@core-1-94 ~/kubernetes/examples/rbd $ sudo cat /etc/ceph/ceph.client.admin.keyring [client.admin] key = AQAzezlWYMwWAhAA8qxmHOcJkD+bq4+tYRklVw==
There is a known (benevolent) bug in the ingress controller that is automatically added to your cluster in Kubernetes 1.1.1. If you are not using the controller, you can scale the number of replicas to zero: kubectl scale rc l7-lb-controller --namespace=kube-system --replicas=0 which should make your CPU usage go back to a normal level. The ingress controller isn't doing any harm (other than affecting monitoring metrics) and will be automatically nice'd by the kernel if you run other pods on the same node (so it isn't affecting performance of your cluster). This bug will be fixed in the upcoming 1.1.2 release of Kubernetes.
###Instruction: I'm following the container engine walkthrough and I see that my VM Instance has a min CPU usage of ~80%. However, if I ssh into this box and run 'top' I see a much lower utilization. Can someone explain this to me as I must be missing something simple. Thank you. ### Response: There is a known (benevolent) bug in the ingress controller that is automatically added to your cluster in Kubernetes 1.1.1. If you are not using the controller, you can scale the number of replicas to zero: kubectl scale rc l7-lb-controller --namespace=kube-system --replicas=0 which should make your CPU usage go back to a normal level. The ingress controller isn't doing any harm (other than affecting monitoring metrics) and will be automatically nice'd by the kernel if you run other pods on the same node (so it isn't affecting performance of your cluster). This bug will be fixed in the upcoming 1.1.2 release of Kubernetes.
I'm following the container engine walkthrough and I see that my VM Instance has a min CPU usage of ~80%. However, if I ssh into this box and run 'top' I see a much lower utilization. Can someone explain this to me, as I must be missing something simple? Thank you.
Not so hard, actually. Checking my gist may give you some tips: https://gist.github.com/resouer/378bcdaef1d9601ed6aa See, do not try to consume files from outside; just package them in a docker image and consume them in sidecar mode.
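As a rough illustration of that sidecar pattern (this is not the content of the gist; the image names and paths below are made up for the sketch): a helper container built from an image that contains the project files copies them into a shared emptyDir volume, and the PHP container serves them from there:
apiVersion: v1
kind: Pod
metadata:
  name: php-dev
spec:
  volumes:
  - name: www
    emptyDir: {}                               # shared scratch volume that lives as long as the pod
  containers:
  - name: code-sidecar
    image: mycompany/dev-code:latest           # hypothetical image baked with the developer's files under /code
    command: ["sh", "-c", "cp -r /code/. /shared/ && tail -f /dev/null"]
    volumeMounts:
    - name: www
      mountPath: /shared
  - name: php
    image: php:5.6-apache                      # any PHP-serving image would do
    volumeMounts:
    - name: www
      mountPath: /var/www/html
Each developer builds their own dev-code image (or tag) containing their files, so the same pod template works for everyone.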
###Instruction: I have a noob question. If I'm using a docker image that uses a folder located in the host to do something, Where should be located the folder in the kubernetes cluster? I'm ok doing this with docker since I know where is my host filesystem but I get lost when I'm on a kubernetes cluster. Actually, I don't know if this is the best approach. But what I'm trying to do is build a development environment for a php backbend. Since what I want is that every person can run a container environment with their own files (which are on their computers), I'm trying to build a sidecar container so when launching the container I can pass the files to the php container. The problem is that I'm running kubernetes to build development environment for my company using a vagrant (coreos + kubernetes) solution since we don't have a cloud service right now so I can't use a persiten disk. I try NFS but it seems be too much for what I want (just pass some information to the pod regardless of the PC where I am). Also I try to use hostPAth in Kubernetes but the problem is that the machines where I want connect to the containers are located outside of the kubernetes cluster (Vagrant + CoreOS + Kubernetes so I-m trying to expose some container to public IPs but I can not figure out how to past the files (located in the machines outside of the cluster) to the containers. Thanks for your help, I appreciate your comments. ### Response: Not so hard, actually. Check my gists may give you some tips: https://gist.github.com/resouer/378bcdaef1d9601ed6aa See, do not try to consume files from outside, just package them in a docker image, and consume them by sidecar mode.
I have a noob question. If I'm using a docker image that uses a folder located on the host to do something, where should that folder be located in the kubernetes cluster? I'm OK doing this with docker since I know where my host filesystem is, but I get lost when I'm on a kubernetes cluster. Actually, I don't know if this is the best approach. But what I'm trying to do is build a development environment for a PHP backend. Since what I want is that every person can run a container environment with their own files (which are on their computers), I'm trying to build a sidecar container so that when launching the container I can pass the files to the php container. The problem is that I'm running kubernetes to build a development environment for my company using a vagrant (coreos + kubernetes) solution, since we don't have a cloud service right now, so I can't use a persistent disk. I tried NFS but it seems to be too much for what I want (just pass some information to the pod regardless of the PC where I am). I also tried to use hostPath in Kubernetes, but the problem is that the machines from which I want to connect to the containers are located outside of the kubernetes cluster (Vagrant + CoreOS + Kubernetes), so I'm trying to expose some containers to public IPs but I cannot figure out how to pass the files (located on the machines outside of the cluster) to the containers. Thanks for your help, I appreciate your comments.
Starting last week, new GKE clusters should be created on 1.1.1 by default, and the nfs-common package is installed on all 1.1.1 clusters. (For existing clusters, you'll need to wait until the hosted master is upgraded, then initiate a node upgrade.) See https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/nfs/README.md for a larger example.
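Once nfs-common is present on the nodes, mounting an NFS export in a pod spec looks roughly like this (the server address and export path are placeholders for your own values):
volumes:
- name: nfs-data
  nfs:
    server: 10.0.0.5        # placeholder: IP of your NFS server, or of an in-cluster NFS service
    path: "/exports"
    readOnly: false
The linked README walks through a fuller end-to-end setup.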
###Instruction: After I manually install the nfs client package on each node, it works. But in GKE, slave nodes can be scaled in and out. After a new slave node is created, I lose the nfs client package again. Is there any way to install software packages when kubernetes spins up a new slave node? ### Response: Starting last week, new GKE clusters should be created on 1.1.1 by default, and the nfs-common package is installed on all 1.1.1 clusters. (For existing clusters, you'll need to wait until the hosted master is upgraded, then initiate a node upgrade.) See https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/nfs/README.md for a larger example.
After I manually install the nfs client package on each node, it works. But in GKE, slave nodes can be scaled in and out. After a new slave node is created, I lose the nfs client package again. Is there any way to install software packages when kubernetes spins up a new slave node?
It is not recommended to use external container garbage collection scripts. Kubernetes relies on exited containers as tombstones to reconstruct the pod status and/or serve logs. Even if you don't care about container logs, if you remove the exited containers before kubernetes examines them and properly records the status, it may cause inaccurate status and restart decisions. This reliance may be eliminated in the future. For now, the best way to achieve more aggressive container garbage collection is through adjusting the parameters, as detailed in this guide. FYI, there are also open issues to improve the garbage collection behavior. #13287 is one example. If you really want to clean up the containers yourself, it is safe to remove containers associated with deleted pods. Removing multiple exited containers that belong to the same pod/container while keeping the most recent few exited containers is also relatively low-risk.
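For reference, that tuning is done through kubelet flags, along these lines (the values are illustrative only; check the garbage-collection guide for your version for the exact flag names and defaults):
kubelet ... \
  --minimum-container-ttl-duration=1m \
  --maximum-dead-containers-per-container=2 \
  --maximum-dead-containers=100
This keeps a couple of exited containers per pod container around for status and log reconstruction while capping the overall number on the node.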
###Instruction: I am now doing experiments on using Kubernetes and Docker to provisioning services such as WordPress, Spark and Storm on 10 physical machines But after lots times of launching and terminating Docker Containers, the used memory would increase even I kill all the Containers via Kubernetes delete or Docker kill commands. I noticed that there were lots of Containers with status Exited, and after I remove all the Exited Containers, it frees lots of memories. And I came up with a solution, which is writing a removing exited containers schedule into cron table on each Docker host. But is this appropriate? if not, how can I release the memories? ### Response: It is not recommended to use external container garbage collection scripts. Kubernetes relies on exited containers as tombstones to reconstruct the pod status and/or serve logs. Even if you don't care about container logs, if you remove the exited containers before kubernetes examines them and properly records the status, it may cause inaccurate status and restart decisions. This reliance may be eliminated in the future. For now, the best way to achieve more aggressive container garbage collection is through adjusting the parameters, as detailed in this guide. FYI, there are also open issues to improve the garbage collection behavior. #13287 is one example. If you really want to clean up the containers yourself, it is safe to remove containers associated with deleted pods. Removing multiple exited containers that belong to the same pod/container while keeping the most recent few exited containers is also relatively low-risk.
I am now doing experiments on using Kubernetes and Docker to provision services such as WordPress, Spark and Storm on 10 physical machines. But after many rounds of launching and terminating Docker containers, the used memory keeps increasing even after I kill all the containers via Kubernetes delete or Docker kill commands. I noticed that there were lots of containers with status Exited, and after I remove all the Exited containers, it frees a lot of memory. I came up with a solution, which is writing a cron job on each Docker host that removes exited containers. But is this appropriate? If not, how can I release the memory?
You probably need to remove the manifest files for influxdb from the /etc/kubernetes/addons/ directory on your "master" host. Many of the kube-up.sh implementations use a service (usually at /etc/kubernetes/kube-master-addons.sh) that runs periodically and makes sure that all the manifests in /etc/kubernetes/addons/ are active. You can also restart your cluster, but run export ENABLE_CLUSTER_MONITORING=none before running kube-up.sh. You can see other environment settings that impact the cluster kube-up.sh builds at cluster/aws/config-default.sh
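A sketch of what that looks like in practice; the exact layout under /etc/kubernetes/addons/ differs between kube-up variants, so treat the paths below as assumptions to adapt:
# on the master host
$ ls /etc/kubernetes/addons/
$ sudo rm -rf /etc/kubernetes/addons/cluster-monitoring   # or whichever subdirectory holds the influxdb/grafana manifests
# then delete the running resources one more time
$ kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1
$ kubectl delete service --namespace=kube-system monitoring-influxdb monitoring-grafana
With the manifest gone, the addon updater has nothing left to re-create.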
###Instruction: I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been noted in this post that InfluxDB quickly fills up disk space on nodes, and I am seeing this same issue. The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this: kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1 kubectl delete service --namespace=kube-system monitoring-influxdb kubectl delete service --namespace=kube-system monitoring-grafana Then if I say: kubectl get pods --namespace=kube-system I do not see the pods running anymore. However after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them. I would like to kill them permanently. ### Response: You probably need to remove the manifest files for influxdb from the /etc/kubernetes/addons/ directory on your "master" host. Many of the kube-up.sh implementations use a service (usually at /etc/kubernetes/kube-master-addons.sh) that runs periodically and makes sure that all the manifests in /etc/kubernetes/addons/ are active. You can also restart your cluster, but run export ENABLE_CLUSTER_MONITORING=none before running kube-up.sh. You can see other environment settings that impact the cluster kube-up.sh builds at cluster/aws/config-default.sh
I have spun up a Kubernetes cluster in AWS using the official "kube-up" mechanism. By default, an addon that monitors the cluster and logs to InfluxDB is created. It has been noted in this post that InfluxDB quickly fills up disk space on nodes, and I am seeing this same issue. The problem is, when I try to kill the InfluxDB replication controller and service, it "magically" comes back after a time. I do this: kubectl delete rc --namespace=kube-system monitoring-influx-grafana-v1 kubectl delete service --namespace=kube-system monitoring-influxdb kubectl delete service --namespace=kube-system monitoring-grafana Then if I say: kubectl get pods --namespace=kube-system I do not see the pods running anymore. However after some amount of time (minutes to hours), the replication controllers, services, and pods are back. I don't know what is restarting them. I would like to kill them permanently.
Please also see https://github.com/kubernetes/kubernetes/issues/16741 where we're discussing nfs and pretty much exactly this problem (amongst others)
###Instruction: After I manually install the nfs client package on each node, it works. But in GKE, slave nodes can be scaled in and out. After a new slave node is created, I lose the nfs client package again. Is there any way to install software packages when kubernetes spins up a new slave node? ### Response: Please also see https://github.com/kubernetes/kubernetes/issues/16741 where we're discussing nfs and pretty much exactly this problem (amongst others)
After I manually install the nfs client package on each node, it works. But in GKE, slave nodes can be scaled in and out. After a new slave node is created, I lose the nfs client package again. Is there any way to install software packages when kubernetes spins up a new slave node?
Unfortunately those URLs are incomplete. Influx's ports are named, so you need to say which port you want. https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:http or https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:api There's a bug open to give better errors for this.
###Instruction: I have a new Kubernetes cluster on AWS that was built using the kube-up script from v1.1.1. I can successfully access the Elasticsearch/Kibana/KubeUI/Grafana endpoints, but cannot access Heapster/KubeDNS/InfluxDB from my machine, through the API proxy. I have seen some ancillary issues related to this on the K8S project, but no clear identification as to what's going on. From what I can gather, everything is running fine so I'm not sure what is wrong here? I'd really like to use the embedded monitoring of Grafana/Influx/Heapster but the Grafana dashboard is just blank with an series error. Kubernetes version $ kubectl version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"} Cluster-info $ kubectl cluster-info Kubernetes master is running at https://MASTER_IP Elasticsearch is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging Heapster is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/heapster Kibana is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kibana-logging KubeDNS is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kube-dns KubeUI is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kube-ui Grafana is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana InfluxDB is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb Accessing influxDB from the API proxy URL above { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "no endpoints available for service \"monitoring-influxdb\"", "reason": "ServiceUnavailable", "code": 503 } Endpoint details from the Host $ curl http://localhost:8080/api/v1/namespaces/kube-system/endpoints/monitoring-influxdb { "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "monitoring-influxdb", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/endpoints/monitoring-influxdb", "uid": "2f75b259-8a22-11e5-b248-028ff74b9b1b", "resourceVersion": "131", "creationTimestamp": "2015-11-13T16:18:33Z", "labels": { "kubernetes.io/cluster-service": "true", "kubernetes.io/name": "InfluxDB" } }, "subsets": [ { "addresses": [ { "ip": "10.244.1.4", "targetRef": { "kind": "Pod", "namespace": "kube-system", "name": "monitoring-influxdb-grafana-v2-n6jx1", "uid": "2f31ed90-8a22-11e5-b248-028ff74b9b1b", "resourceVersion": "127" } } ], "ports": [ { "name": "http", "port": 8083, "protocol": "TCP" }, { "name": "api", "port": 8086, "protocol": "TCP" } ] } ] } Querying the service from the Host $ curl -IL 10.244.1.4:8083 HTTP/1.1 200 OK Accept-Ranges: bytes Content-Length: 13751 Content-Type: text/html; charset=utf-8 Last-Modified: Fri, 14 Nov 2014 21:55:58 GMT Date: Tue, 17 Nov 2015 21:31:48 GMT Monitoring-InfluxDB Service $ curl http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "monitoring-influxdb", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/services/monitoring-influxdb", "uid": "2f715831-8a22-11e5-b248-028ff74b9b1b", "resourceVersion": "60", "creationTimestamp": "2015-11-13T16:18:33Z", "labels": { 
"kubernetes.io/cluster-service": "true", "kubernetes.io/name": "InfluxDB" } }, "spec": { "ports": [ { "name": "http", "protocol": "TCP", "port": 8083, "targetPort": 8083 }, { "name": "api", "protocol": "TCP", "port": 8086, "targetPort": 8086 } ], "selector": { "k8s-app": "influxGrafana" }, "clusterIP": "10.0.35.241", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } Pod Details $ kubectl describe pod --namespace=kube-system monitoring-influxdb-grafana-v2-n6jx Name: monitoring-influxdb-grafana-v2-n6jx1 Namespace: kube-system Image(s): gcr.io/google_containers/heapster_influxdb:v0.4,beta.gcr.io/google_containers/heapster_grafana:v2.1.1 Node: ip-172-20-0-44.us-west-2.compute.internal/172.20.0.44 Start Time: Fri, 13 Nov 2015 08:21:36 -0800 Labels: k8s-app=influxGrafana,kubernetes.io/cluster-service=true,version=v2 Status: Running Reason: Message: IP: 10.244.1.4 Replication Controllers: monitoring-influxdb-grafana-v2 (1/1 replicas created) Containers: influxdb: Container ID: docker://564724318ca81d33d6079978d24f78b3c6ff8eb08a9023c845e250eeb888aafd Image: gcr.io/google_containers/heapster_influxdb:v0.4 Image ID: docker://8b8118c488e431cc43e7ff9060968d88402cc6c38a6390c4221352403aa7ac1b QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: memory: 200Mi cpu: 100m Requests: memory: 200Mi cpu: 100m State: Running Started: Fri, 13 Nov 2015 08:22:55 -0800 Ready: True Restart Count: 0 Environment Variables: grafana: Container ID: docker://518dea564a0ee014345e9006da6113fb6584ff1ebc6d0cc9609a608abc995f45 Image: beta.gcr.io/google_containers/heapster_grafana:v2.1.1 Image ID: docker://200e77ba156a5a86879e49667b97afe84dca42b5bb67ab1e06217e6a19c5a6a6 QoS Tier: cpu: Guaranteed memory: Guaranteed Limits: memory: 100Mi cpu: 100m Requests: cpu: 100m memory: 100Mi State: Running Started: Fri, 13 Nov 2015 08:22:35 -0800 Ready: True Restart Count: 0 Environment Variables: INFLUXDB_SERVICE_URL: http://monitoring-influxdb:8086 GF_AUTH_BASIC_ENABLED: false GF_AUTH_ANONYMOUS_ENABLED: true GF_AUTH_ANONYMOUS_ORG_ROLE: Admin GF_SERVER_ROOT_URL: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ Conditions: Type Status Ready True Volumes: influxdb-persistent-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: grafana-persistent-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-bo89c: Type: Secret (a secret that should populate this volume) SecretName: default-token-bo89c No events. ### Response: Unfortunately those URLs are incomplete. Influx's ports are named, so you need to say which port you want. https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:http or https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:api There's a bug open to give better errors for this.
I have a new Kubernetes cluster on AWS that was built using the kube-up script from v1.1.1. I can successfully access the Elasticsearch/Kibana/KubeUI/Grafana endpoints, but cannot access Heapster/KubeDNS/InfluxDB from my machine, through the API proxy. I have seen some ancillary issues related to this on the K8S project, but no clear identification as to what's going on. From what I can gather, everything is running fine so I'm not sure what is wrong here? I'd really like to use the embedded monitoring of Grafana/Influx/Heapster but the Grafana dashboard is just blank with an series error. Kubernetes version $ kubectl version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"} Cluster-info $ kubectl cluster-info Kubernetes master is running at https://MASTER_IP Elasticsearch is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging Heapster is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/heapster Kibana is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kibana-logging KubeDNS is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kube-dns KubeUI is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/kube-ui Grafana is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana InfluxDB is running at https://MASTER_IP/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb Accessing influxDB from the API proxy URL above { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "no endpoints available for service \"monitoring-influxdb\"", "reason": "ServiceUnavailable", "code": 503 } Endpoint details from the Host $ curl http://localhost:8080/api/v1/namespaces/kube-system/endpoints/monitoring-influxdb { "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "monitoring-influxdb", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/endpoints/monitoring-influxdb", "uid": "2f75b259-8a22-11e5-b248-028ff74b9b1b", "resourceVersion": "131", "creationTimestamp": "2015-11-13T16:18:33Z", "labels": { "kubernetes.io/cluster-service": "true", "kubernetes.io/name": "InfluxDB" } }, "subsets": [ { "addresses": [ { "ip": "10.244.1.4", "targetRef": { "kind": "Pod", "namespace": "kube-system", "name": "monitoring-influxdb-grafana-v2-n6jx1", "uid": "2f31ed90-8a22-11e5-b248-028ff74b9b1b", "resourceVersion": "127" } } ], "ports": [ { "name": "http", "port": 8083, "protocol": "TCP" }, { "name": "api", "port": 8086, "protocol": "TCP" } ] } ] } Querying the service from the Host $ curl -IL 10.244.1.4:8083 HTTP/1.1 200 OK Accept-Ranges: bytes Content-Length: 13751 Content-Type: text/html; charset=utf-8 Last-Modified: Fri, 14 Nov 2014 21:55:58 GMT Date: Tue, 17 Nov 2015 21:31:48 GMT Monitoring-InfluxDB Service $ curl http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "monitoring-influxdb", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/services/monitoring-influxdb", "uid": "2f715831-8a22-11e5-b248-028ff74b9b1b", "resourceVersion": "60", "creationTimestamp": "2015-11-13T16:18:33Z", "labels": { "kubernetes.io/cluster-service": 
"true", "kubernetes.io/name": "InfluxDB" } }, "spec": { "ports": [ { "name": "http", "protocol": "TCP", "port": 8083, "targetPort": 8083 }, { "name": "api", "protocol": "TCP", "port": 8086, "targetPort": 8086 } ], "selector": { "k8s-app": "influxGrafana" }, "clusterIP": "10.0.35.241", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } Pod Details $ kubectl describe pod --namespace=kube-system monitoring-influxdb-grafana-v2-n6jx Name: monitoring-influxdb-grafana-v2-n6jx1 Namespace: kube-system Image(s): gcr.io/google_containers/heapster_influxdb:v0.4,beta.gcr.io/google_containers/heapster_grafana:v2.1.1 Node: ip-172-20-0-44.us-west-2.compute.internal/172.20.0.44 Start Time: Fri, 13 Nov 2015 08:21:36 -0800 Labels: k8s-app=influxGrafana,kubernetes.io/cluster-service=true,version=v2 Status: Running Reason: Message: IP: 10.244.1.4 Replication Controllers: monitoring-influxdb-grafana-v2 (1/1 replicas created) Containers: influxdb: Container ID: docker://564724318ca81d33d6079978d24f78b3c6ff8eb08a9023c845e250eeb888aafd Image: gcr.io/google_containers/heapster_influxdb:v0.4 Image ID: docker://8b8118c488e431cc43e7ff9060968d88402cc6c38a6390c4221352403aa7ac1b QoS Tier: memory: Guaranteed cpu: Guaranteed Limits: memory: 200Mi cpu: 100m Requests: memory: 200Mi cpu: 100m State: Running Started: Fri, 13 Nov 2015 08:22:55 -0800 Ready: True Restart Count: 0 Environment Variables: grafana: Container ID: docker://518dea564a0ee014345e9006da6113fb6584ff1ebc6d0cc9609a608abc995f45 Image: beta.gcr.io/google_containers/heapster_grafana:v2.1.1 Image ID: docker://200e77ba156a5a86879e49667b97afe84dca42b5bb67ab1e06217e6a19c5a6a6 QoS Tier: cpu: Guaranteed memory: Guaranteed Limits: memory: 100Mi cpu: 100m Requests: cpu: 100m memory: 100Mi State: Running Started: Fri, 13 Nov 2015 08:22:35 -0800 Ready: True Restart Count: 0 Environment Variables: INFLUXDB_SERVICE_URL: http://monitoring-influxdb:8086 GF_AUTH_BASIC_ENABLED: false GF_AUTH_ANONYMOUS_ENABLED: true GF_AUTH_ANONYMOUS_ORG_ROLE: Admin GF_SERVER_ROOT_URL: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ Conditions: Type Status Ready True Volumes: influxdb-persistent-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: grafana-persistent-storage: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-bo89c: Type: Secret (a secret that should populate this volume) SecretName: default-token-bo89c No events.
Then this will not work ( ReferenceError: process is not defined ) , as these angularjs code is executed at the client side. If the client is outside the cluster, the only way it will be able to access the NodeJS API is if you expose it to the client's network, which is probably the public internet. If you're concerned about the security implications of that, there are a number of different ways to authenticate the service, such as using nginx auth_basic. "containers": { { "name":"container-name", "command": { "sh" , "sudo nginx Command-line parameters 'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'" } } } Is this sustainable for the PODs restart? IF so, how do I get this variable in AngularJS? Yes, service IP & port is stable, even across pod restarts. As for how to communicate the NODE_API_SERVICE_{HOST,PORT} variables to the client, you will need to inject them from a process running server side (within your cluster) into the response (e.g. directly into the JS code, or as a JSON response).
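One way to do that server-side injection (just a sketch; the file name env.js and the variable window.API_BASE are made-up conventions): have the nginx container's startup command render a small script from the service environment variables, and load it from index.html before the AngularJS code runs:
# container startup sketch (shell), run in place of plain nginx
echo "window.API_BASE = 'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1';" \
  > /usr/share/nginx/html/projectSubDomainName/env.js
exec nginx -g 'daemon off;'
The Restangular configuration would then read window.API_BASE instead of process.env. Keep in mind that the browser can only reach that URL if the NodeJS API is actually reachable from the client's network; with ClusterIP only, the frontend has to proxy the API calls server side instead (e.g. via an nginx location block).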
###Instruction: Using Kubernetes -- Gogle Container Enginer setup , Within the Same google cloud Cluster, I am having the Front end Service -> nginx + Angular JS and REST API service --> NodeJS API. I don't want to expose NodeJS API KubeCTL Service public domain. So, 'ServiceType' is set to only 'ClusterIP' . How do we infer this NODE_API_SERIVCE_HOST , NODE_API_SERIVCE_PORT -- {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT -- inside AngularJS program. (function() { 'use strict'; angular .module('mymodule', []) .config(["RestangularProvider", function(RestangularProvider) { var apiDomainHost = process.env.NODE_API_SERVICE_HOST; var apiDomainPort = process.env.NODE_API_SERVICE_PORT; RestangularProvider.setBaseUrl('https://'+apiDomainHost+':'+apiDomainPort+'/node/api/v1'); }]); })(); Then this will not work ( ReferenceError: process is not defined ) , as these angularjs code is executed at the client side. My docker file is simple to inherit the nginx stop / start controls. FROM nginx COPY public/angular-folder /usr/share/nginx/html/projectSubDomainName Page 43 of 62 in http://www.slideshare.net/carlossg/scaling-docker-with-kubernetes, explains that can we invoke the Command sh "containers": { { "name":"container-name", "command": { "sh" , "sudo nginx Command-line parameters 'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'" } } } Is this sustainable for the PODs restart? IF so, how do I get this variable in AngularJS? ### Response: Then this will not work ( ReferenceError: process is not defined ) , as these angularjs code is executed at the client side. If the client is outside the cluster, the only way it will be able to access the NodeJS API is if you expose it to the client's network, which is probably the public internet. If you're concerned about the security implications of that, there are a number of different ways to authenticate the service, such as using nginx auth_basic. "containers": { { "name":"container-name", "command": { "sh" , "sudo nginx Command-line parameters 'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'" } } } Is this sustainable for the PODs restart? IF so, how do I get this variable in AngularJS? Yes, service IP & port is stable, even across pod restarts. As for how to communicate the NODE_API_SERVICE_{HOST,PORT} variables to the client, you will need to inject them from a process running server side (within your cluster) into the response (e.g. directly into the JS code, or as a JSON response).
Using Kubernetes -- Gogle Container Enginer setup , Within the Same google cloud Cluster, I am having the Front end Service -> nginx + Angular JS and REST API service --> NodeJS API. I don't want to expose NodeJS API KubeCTL Service public domain. So, 'ServiceType' is set to only 'ClusterIP' . How do we infer this NODE_API_SERIVCE_HOST , NODE_API_SERIVCE_PORT -- {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT -- inside AngularJS program. (function() { 'use strict'; angular .module('mymodule', []) .config(["RestangularProvider", function(RestangularProvider) { var apiDomainHost = process.env.NODE_API_SERVICE_HOST; var apiDomainPort = process.env.NODE_API_SERVICE_PORT; RestangularProvider.setBaseUrl('https://'+apiDomainHost+':'+apiDomainPort+'/node/api/v1'); }]); })(); Then this will not work ( ReferenceError: process is not defined ) , as these angularjs code is executed at the client side. My docker file is simple to inherit the nginx stop / start controls. FROM nginx COPY public/angular-folder /usr/share/nginx/html/projectSubDomainName Page 43 of 62 in http://www.slideshare.net/carlossg/scaling-docker-with-kubernetes, explains that can we invoke the Command sh "containers": { { "name":"container-name", "command": { "sh" , "sudo nginx Command-line parameters 'https://$NODE_API_SERVICE_HOST:$NODE_API_SERVICE_PORT/node/api/v1'" } } } Is this sustainable for the PODs restart? IF so, how do I get this variable in AngularJS?
An ingress controller can completely bypass kube-proxy. The haproxy controller, for example, does this and goes straight to the endpoints, although it doesn't use the Ingress resource in the typical sense. You could do the same with the nginx controller: all you need to do is look up the endpoints and insert them instead of the DNS name it currently uses (i.e. swap this line for a pointer to an upstream that contains the endpoints).
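If you want to see the data such a controller would work from, you can read the endpoints object for the service straight from the API; a quick sketch, assuming a service named my-service in the default namespace:

$ kubectl get endpoints my-service -o yaml

The subsets[].addresses[].ip entries in that output are the pod IPs a controller can write into its upstream block, bypassing the service's virtual IP entirely.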
###Instruction: I'd like to implement a sticky-session Ingress controller. Cookies or IP hashing would both be fine; I'm happy as long as the same client is generally routed to the same pod. What I'm stuck on: it seems like the Kubernetes service model means my connections are going to be proxied randomly no matter what. I can configure my Ingress controller with session affinity, but as soon as the the connection gets past the that and hits a service, kube-proxy is just going to route me randomly. There's the sessionAffinity: ClientIP flag on services, but that doesn't help me -- the Client IP will always be the internal IP of the Ingress pod. Am I missing something? Is this possible given Kubernetes' current architecture? ### Response: An ingress controller can completely bypass kube-proxy. The haproxy controller for example, does this and goes straight to endpoints. However it doesn't use the Ingress in the typical sense. You could do the same with the nginx controller, all you need to lookup endpoints and insert them instead of the DNS name it currently uses (i.e swap this line for a pointer to an upstream that contains the endpoints).
I'd like to implement a sticky-session Ingress controller. Cookies or IP hashing would both be fine; I'm happy as long as the same client is generally routed to the same pod. What I'm stuck on: it seems like the Kubernetes service model means my connections are going to be proxied randomly no matter what. I can configure my Ingress controller with session affinity, but as soon as the the connection gets past the that and hits a service, kube-proxy is just going to route me randomly. There's the sessionAffinity: ClientIP flag on services, but that doesn't help me -- the Client IP will always be the internal IP of the Ingress pod. Am I missing something? Is this possible given Kubernetes' current architecture?
You need to increase the quota assigned to your project. Please see https://cloud.google.com/compute/docs/resource-quotas for an explanation of resource quotas, and follow the link on that page to check and/or request a quota increase.
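As a quick check before filing the increase request, you can list the current project-level quotas from the command line (the project ID below is a placeholder):

$ gcloud compute project-info describe --project my-project-id | grep -C1 BACKEND_SERVICES

That shows the limit and current usage for the BACKEND_SERVICES metric the Ingress controller is bumping into.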
###Instruction: I'm using Kubernetes and I'm trying to create an ingress resource. I create it using: $ kubectl create -f my-ingress.yaml I wait a while and a load balancer doesn't seem to be created. Running: $ kubectl describe ing my-ingress returns: Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 46s 46s 1 {loadbalancer-controller } ADD my-ingress 23s 11s 2 {loadbalancer-controller } GCE :Quota googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 3.0 Is there a way to increase the number of backend services that can be created? ### Response: You need to increase the quota assigned for your project. Please see https://cloud.google.com/compute/docs/resource-quotas for the explanation of resource quotas, and follow the link on that page to check and/or request a quota increase.
I'm using Kubernetes and I'm trying to create an ingress resource. I create it using: $ kubectl create -f my-ingress.yaml I wait a while and a load balancer doesn't seem to be created. Running: $ kubectl describe ing my-ingress returns: Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 46s 46s 1 {loadbalancer-controller } ADD my-ingress 23s 11s 2 {loadbalancer-controller } GCE :Quota googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 3.0 Is there a way to increase the number of backend services that can be created?
With Windows, you need docker-machine and a boot2docker VM to run anything Docker-related; there is no "Docker for Windows" (not yet). Note that issue 7428 mentioned "Can't run kubernetes within boot2docker". So even when you follow the instructions (from a default VM created with docker-machine), you might still get errors:

➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d
W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults.
I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet

The alternative would be to try a full-fledged Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
###Instruction: I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it. There's a confound though that I am running on a windows machine. Their "getting started" documentation in github says you have to run Linux to use kubernetes. As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance. From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to Start docker, boot 'default' machine. Create kubernetes container - configure to communicate with the existing docker 'default' machine Use kubernetes to manage existing docker. Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a vagrant instance. Does that mean docker, etcd, & kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it?) I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this. ### Response: With Windows, you need docker-machine and boot2docker VMs to run anything docker related. There is no (not yet) "docker for Windows". Note that issue 7428 mentioned "Can't run kubernetes within boot2docker". So even when you follow instructions (from a default VM created with docker-machine), you might still get errors: ➜ workspace docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.14.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d ➜ workspace docker logs -f ee0b490f74f6bc9b70c1336115487b38d124bdcebf09b248cec91832e0e9af1d W0428 09:09:41.479862 1 server.go:249] Could not load kubernetes auth path: stat : no such file or directory. Continuing with defaults. I0428 09:09:41.479989 1 server.go:168] Using root directory: /var/lib/kubelet The alternative would be to try on a full-fledge Linux VM (like the latest Ubuntu), instead of a boot2docker-like VM (based on a TinyCore distro).
I'm very new to kubernetes and trying to conceptualize it as well as set it up locally in order to try developing something on it. There's a confound though that I am running on a windows machine. Their "getting started" documentation in github says you have to run Linux to use kubernetes. As docker runs on windows, I was wondering if it was possible to create a kubernetes instance as a container in windows docker and use it to manage the rest of the cluster in the same windows docker instance. From reading the setup instructions, it seems like docker, kubernetes, and something called etcd all have to run "in parallel" on a single host operating system... But part of me thinks it might be possible to Start docker, boot 'default' machine. Create kubernetes container - configure to communicate with the existing docker 'default' machine Use kubernetes to manage existing docker. Pipe dream? Wrongheaded foolishness? I see there are some options around running it in a vagrant instance. Does that mean docker, etcd, & kubernetes together in a single VM (which in turn creates a cluster of virtual machines inside it?) I feel like I need to draw a picture of what this all looks like in terms of physical hardware and "memory boxes" to really wrap my head around this.
Yes, it is possible. The Replication Controller template or Pod spec image isn't special. If you specify image: redis you will get the latest tag of the official Docker Hub library Redis image, just as if you did docker pull redis.
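One thing to watch: the image field takes a registry-style reference, not the Hub's web URL. A minimal pod spec for an assumed public Docker Hub repository my-team/my-image would look roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-team/my-image:latest
    ports:
    - containerPort: 8080

If the repository is private, you would additionally create a docker-registry secret and reference it from the pod via imagePullSecrets.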
###Instruction: Still new to Containers and Kubernetes here but I am dabbling with deploying a cluster on Google Containers Engine and was wondering if you can use a docker hub hosted image to deploy containers, so in my .yaml configuration file I'd say: ... image: hub.docker.com/r/my-team/my-image:latest ... Is this possible? Or one has to download/build image locally and then upload it to Google Containers Registery? Thanks so much ### Response: Yes, it is possible. The Replication Controller template or Pod spec image isn't special. If you specify image: redis you will get the latest tag of the official Docker Hub library Redis image, just as if you did docker pull redis.
Still new to Containers and Kubernetes here but I am dabbling with deploying a cluster on Google Containers Engine and was wondering if you can use a docker hub hosted image to deploy containers, so in my .yaml configuration file I'd say: ... image: hub.docker.com/r/my-team/my-image:latest ... Is this possible? Or one has to download/build image locally and then upload it to Google Containers Registery? Thanks so much
So you basically you need to expose the mysql port to your host: docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest Then you can access from your host using the mysql command line: mysql -h127.0.0.1 -ppassword -uroot Not sure why you are trying to run another container to connect (perhaps you meant linking two containers) If you are using Mac (or Windows) with docker-machine you want to connect to the IP address of your docker-machine VM. For example: $ docker-machine ssh default ## . ## ## ## == ## ## ## ## ## === /"""""""""""""""""\___/ === ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~ \______ o __/ \ \ __/ \____\_______/ _ _ ____ _ _ | |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __ | '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__| | |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ | |_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_| Boot2Docker version 1.9.0, build master : 16e4a2a - Tue Nov 3 19:49:22 UTC 2015 Docker version 1.9.0, build 76d6bc9 docker@default:~$ ifconfig eth1 eth1 Link encap:Ethernet HWaddr 08:00:27:E6:C7:20 inet addr:192.168.99.100 Bcast:192.168.99.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fee6:c720/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:18827 errors:0 dropped:0 overruns:0 frame:0 TX packets:10280 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1791527 (1.7 MiB) TX bytes:2242596 (2.1 MiB) Then connect to: mysql -h192.168.99.100 -ppassword -uroot
###Instruction: I have already googled on this subject and found few threads. Based on these threads I have followed the following steps. But I am facing a problem. Basically, I want to create a docker image for mysql and then connect to it from my host machine (Mac OS X). Based on this post , I have to share the mysql unix socket with the host. towards this I have done the following steps 1. Start docker quick terminal 2. docker run --name mysql -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest 3. docker exec -it mysql bash 4. mysql -uroot -p 5. create database MyDB; 6. GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password'; 7. exit; 8. mkdir /Users/abhi/host 9. docker run -it -v /host:/shared mysql/mysql-server:latest Now I get the error MacBook-Pro:~$ docker run -it -v /Users/abhi/host:/shared mysql/mysql-server error: database is uninitialized and password option is not specified You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD But you see that I have provided the password and initialized my database. All I want is that from my host machine, I can connect to the mysql database running inside docker. EDIT:: ----- solution which worked ------ Thanks RICO. Finally the steps which worked for me are 1. Start docker quick terminal 2. docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest 3. docker exec -it mysql bash 4. mysql -uroot -p 5. create database MyDB; or: CREATE USER 'root'@'%' IDENTIFIED BY 'root'; GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION; 6. GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password'; 7. exit; 8. docker-machine env default Use the IP address obtained in step 8. port is 3306, user is root, password is password, database is MyDB. Connection is successful! ### Response: So you basically you need to expose the mysql port to your host: docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest Then you can access from your host using the mysql command line: mysql -h127.0.0.1 -ppassword -uroot Not sure why you are trying to run another container to connect (perhaps you meant linking two containers) If you are using Mac (or Windows) with docker-machine you want to connect to the IP address of your docker-machine VM. For example: $ docker-machine ssh default ## . ## ## ## == ## ## ## ## ## === /"""""""""""""""""\___/ === ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~ \______ o __/ \ \ __/ \____\_______/ _ _ ____ _ _ | |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __ | '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__| | |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ | |_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_| Boot2Docker version 1.9.0, build master : 16e4a2a - Tue Nov 3 19:49:22 UTC 2015 Docker version 1.9.0, build 76d6bc9 docker@default:~$ ifconfig eth1 eth1 Link encap:Ethernet HWaddr 08:00:27:E6:C7:20 inet addr:192.168.99.100 Bcast:192.168.99.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fee6:c720/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:18827 errors:0 dropped:0 overruns:0 frame:0 TX packets:10280 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1791527 (1.7 MiB) TX bytes:2242596 (2.1 MiB) Then connect to: mysql -h192.168.99.100 -ppassword -uroot
I have already googled on this subject and found few threads. Based on these threads I have followed the following steps. But I am facing a problem. Basically, I want to create a docker image for mysql and then connect to it from my host machine (Mac OS X). Based on this post , I have to share the mysql unix socket with the host. towards this I have done the following steps 1. Start docker quick terminal 2. docker run --name mysql -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest 3. docker exec -it mysql bash 4. mysql -uroot -p 5. create database MyDB; 6. GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password'; 7. exit; 8. mkdir /Users/abhi/host 9. docker run -it -v /host:/shared mysql/mysql-server:latest Now I get the error MacBook-Pro:~$ docker run -it -v /Users/abhi/host:/shared mysql/mysql-server error: database is uninitialized and password option is not specified You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD But you see that I have provided the password and initialized my database. All I want is that from my host machine, I can connect to the mysql database running inside docker. EDIT:: ----- solution which worked ------ Thanks RICO. Finally the steps which worked for me are 1. Start docker quick terminal 2. docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -d mysql/mysql-server:latest 3. docker exec -it mysql bash 4. mysql -uroot -p 5. create database MyDB; or: CREATE USER 'root'@'%' IDENTIFIED BY 'root'; GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION; 6. GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password'; 7. exit; 8. docker-machine env default Use the IP address obtained in step 8. port is 3306, user is root, password is password, database is MyDB. Connection is successful!
This is missing functionality in the ABAC authorizer. The fix is in progress: #16148. As for a workaround, from the authorization doc: For miscellaneous endpoints, like /version, the resource is the empty string. So you may be able to solve by defining a policy: {"user":"bob", "readonly": true, "resource": ""} (note the empty string for resource) to grant access to unversioned endpoints. If that doesn't work I don't think there's a clean workaround that will let you use kubectl with --authorization-mode=ABAC.
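As a sketch, the complete policy file would then hold one JSON object per line, combining the entry from the question with the unversioned-endpoint workaround:

{"user":"bob", "readonly": true, "resource": ""}
{"user":"bob", "readonly": true, "namespace": "projectgino"}

The apiserver reads this from the file passed via --authorization-policy-file, and it needs a restart to pick up changes.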
###Instruction: I sent up a 4 node cluster (1 master 3 workers) running Kubernetes on Ubuntu. I turned on --authorization-mode=ABAC and set up a policy file with an entry like the following {"user":"bob", "readonly": true, "namespace": "projectgino"} I want user bob to only be able to look at resources in projectgino. I'm having problems using kubectl command line as user Bob. When I run the following command kubectl get pods --token=xxx --namespace=projectgino --server=https://xxx.xxx.xxx.xx:6443 I get the following error error: couldn't read version from server: the server does not allow access to the requested resource I traced the kubectl command line code and the problem seems to caused by kubectl calling function NegotiateVersion in pkg/client/helper.go. This makes a call to /api on the server to get the version of Kubernetes. This call fails because the rest path doesn't contain namespace projectgino. I added trace code to pkg/auth/authorizer/abac/abac.go and it fails on the namespace check. I haven't moved up the the latest 1.1.1 version of Kubernetes yet, but looking at the code I didn't see anything that has changed in this area. Does anybody know how to configure Kubernetes to get around the problem? ### Response: This is missing functionality in the ABAC authorizer. The fix is in progress: #16148. As for a workaround, from the authorization doc: For miscellaneous endpoints, like /version, the resource is the empty string. So you may be able to solve by defining a policy: {"user":"bob", "readonly": true, "resource": ""} (note the empty string for resource) to grant access to unversioned endpoints. If that doesn't work I don't think there's a clean workaround that will let you use kubectl with --authorization-mode=ABAC.
I sent up a 4 node cluster (1 master 3 workers) running Kubernetes on Ubuntu. I turned on --authorization-mode=ABAC and set up a policy file with an entry like the following {"user":"bob", "readonly": true, "namespace": "projectgino"} I want user bob to only be able to look at resources in projectgino. I'm having problems using kubectl command line as user Bob. When I run the following command kubectl get pods --token=xxx --namespace=projectgino --server=https://xxx.xxx.xxx.xx:6443 I get the following error error: couldn't read version from server: the server does not allow access to the requested resource I traced the kubectl command line code and the problem seems to caused by kubectl calling function NegotiateVersion in pkg/client/helper.go. This makes a call to /api on the server to get the version of Kubernetes. This call fails because the rest path doesn't contain namespace projectgino. I added trace code to pkg/auth/authorizer/abac/abac.go and it fails on the namespace check. I haven't moved up the the latest 1.1.1 version of Kubernetes yet, but looking at the code I didn't see anything that has changed in this area. Does anybody know how to configure Kubernetes to get around the problem?
There is currently no supported way to achieve what you're trying to do. But there is a Kubernetes project under way to address it, which goes under the name of Kubernetes Cluster Federation, alternatively known as "Ubernetes". Further details are available here: http://www.slideshare.net/quintonh/federation-of-kubernetes-clusters-aka-ubernetes-kubecon-2015-slides-quinton-hoole http://tinyurl.com/ubernetesv2 http://tinyurl.com/ubernetes-wg-notes
###Instruction: Is it in any way possible to configure a Kubernetes Cluster that utilizes ressources from multiple IaaS providers at the same time e.g. a cluster running partially on GCE and AWS? Or a Kubernetes Cluster running on your bare metal and an IaaS provider? Maybe in combination with some other tools like Mesos? Are there any other tools like Kubernetes that provide this capability? If it's not possbile with Kubernetes, what would one have to do in order to provide that feature? Any help or suggestions would be very much appreciated. ### Response: There is currently no supported way to achieve what you're trying to do. But there is a Kubernetes project under way to address it, which goes under the name of Kubernetes Cluster Federation, alternatively known as "Ubernetes". Further details are available here: http://www.slideshare.net/quintonh/federation-of-kubernetes-clusters-aka-ubernetes-kubecon-2015-slides-quinton-hoole http://tinyurl.com/ubernetesv2 http://tinyurl.com/ubernetes-wg-notes
Is it in any way possible to configure a Kubernetes Cluster that utilizes ressources from multiple IaaS providers at the same time e.g. a cluster running partially on GCE and AWS? Or a Kubernetes Cluster running on your bare metal and an IaaS provider? Maybe in combination with some other tools like Mesos? Are there any other tools like Kubernetes that provide this capability? If it's not possbile with Kubernetes, what would one have to do in order to provide that feature? Any help or suggestions would be very much appreciated.
Actually, once you delete the pod, it will sit in the "Terminating" state until it is destroyed (after terminationGracePeriodSeconds), which means it is removed from the service load balancer but is still capable of serving existing requests. We also use "readiness" health checks, and preStop is synchronous, so you could make your preStop hook mark the pod as not ready, then wait for it to be removed from the load balancer before letting the preStop hook exit.
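A rough sketch of that pattern in a pod spec; the image, the drain-flag file and the timings are all placeholder assumptions, the point is just that preStop flips the readiness check to failing and then waits:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: web
    image: my-web-image
    ports:
    - containerPort: 8080
    readinessProbe:
      exec:
        command: ["/bin/sh", "-c", "[ ! -f /tmp/draining ]"]
      periodSeconds: 5
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "touch /tmp/draining; sleep 30"]

Once /tmp/draining exists the readiness probe fails, the endpoints controller drops the pod from the service, and the sleep gives in-flight requests time to finish before SIGTERM is sent.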
###Instruction: Standard practice for a rolling update of hosts behind load balancer is to gracefully take the hosts out of rotation. This can be done by marking the host "un-healthy" and ensuring the host is no longer receiving requests from the load balancer. Does Kubernetes do something similar for pods managed by a ReplicationController and servicing a LoadBalancer Service? I.e., does Kubernetes take a pod out of the LoadBalancer rotation, ensure incoming traffic has died-down, and only then issue pod shutdown? ### Response: Actually, once you delete the pod, it will be in "terminating" state until it is destroyed (after terminationGracePeriodSeconds) which means it is removed from the service load balancer, but still capable of serving existing requests. We also use "readiness" health checks, and preStop is synchronous, so you could make your preStop hook mark the readiness of the pod to be false, and then wait for it to be removed from the load balancer, before having the preStop hook exit.
Standard practice for a rolling update of hosts behind load balancer is to gracefully take the hosts out of rotation. This can be done by marking the host "un-healthy" and ensuring the host is no longer receiving requests from the load balancer. Does Kubernetes do something similar for pods managed by a ReplicationController and servicing a LoadBalancer Service? I.e., does Kubernetes take a pod out of the LoadBalancer rotation, ensure incoming traffic has died-down, and only then issue pod shutdown?
You would need to create the SSL cert as Yu-Ju Hong said, and then tell Ruby to use that certificate when connecting; something like this guide: http://makandracards.com/makandra/1701-use-ssl-for-amazon-rds-mysql-and-your-rails-app The relevant bit is: sslca: /path/to/mysql-ssl-ca-cert.pem
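In Rails terms that sslca key lives in config/database.yml; a sketch, with the host, credentials and cert path as placeholders:

production:
  adapter: mysql2
  host: <database_ip>
  username: my_username
  password: my_password
  database: myapp
  sslca: /path/to/mysql-ssl-ca-cert.pem

If you would rather keep the DATABASE_URL environment variable from the controller spec, recent Rails versions merge it with database.yml, so only the sslca line needs to live in the file.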
###Instruction: I am using Kubernetes to deploy a Rails application to Google Container Engine. The database is using Google Cloud SQL. I know the database's ip address and set it into my Kubernetes config file: # web-controller.yml apiVersion: v1 kind: ReplicationController metadata: labels: name: web name: web-controller spec: replicas: 2 selector: name: web template: metadata: labels: name: web spec: containers: - name: web image: gcr.io/my-project-id/myapp:v1 ports: - containerPort: 3000 name: http-server env: - name: RAILS_ENV value: "production" - name: DATABASE_URL value: "mysql2://[my_username]:[my_password]@[database_ip]/myapp" Then create: $ kubectl create -f web-controller.yml From the pod log I saw: $ kubectl logs web-controller-038dl Lost connection to MySQL server at 'reading initial communication packet', system error: 0 /usr/local/bundle/gems/mysql2-0.3.20/lib/mysql2/client.rb:70:in `connect' /usr/local/bundle/gems/mysql2-0.3.20/lib/mysql2/client.rb:70:in `initialize' ... I can see the LoadBalancer Ingress ip address from the Kubernetes UI page in web service section. From the Google Developers Console -> Storage -> SQL, select the running db and click the link. From Access Controler -> Authorization -> Authorized Networks, add a new item and add that IP to there. But the result was the same. ### Response: You would need to create the SSL cert like Yu-Ju Hong said, then you would have to tell ruby to use the certificate when connecting something like http://makandracards.com/makandra/1701-use-ssl-for-amazon-rds-mysql-and-your-rails-app The bit about: sslca: /path/to/mysql-ssl-ca-cert.pem
I am using Kubernetes to deploy a Rails application to Google Container Engine. The database is using Google Cloud SQL. I know the database's ip address and set it into my Kubernetes config file: # web-controller.yml apiVersion: v1 kind: ReplicationController metadata: labels: name: web name: web-controller spec: replicas: 2 selector: name: web template: metadata: labels: name: web spec: containers: - name: web image: gcr.io/my-project-id/myapp:v1 ports: - containerPort: 3000 name: http-server env: - name: RAILS_ENV value: "production" - name: DATABASE_URL value: "mysql2://[my_username]:[my_password]@[database_ip]/myapp" Then create: $ kubectl create -f web-controller.yml From the pod log I saw: $ kubectl logs web-controller-038dl Lost connection to MySQL server at 'reading initial communication packet', system error: 0 /usr/local/bundle/gems/mysql2-0.3.20/lib/mysql2/client.rb:70:in `connect' /usr/local/bundle/gems/mysql2-0.3.20/lib/mysql2/client.rb:70:in `initialize' ... I can see the LoadBalancer Ingress ip address from the Kubernetes UI page in web service section. From the Google Developers Console -> Storage -> SQL, select the running db and click the link. From Access Controler -> Authorization -> Authorized Networks, add a new item and add that IP to there. But the result was the same.
Looking at your question, I believe by account you mean project. The command for pulling an image from the registry is: $ gcloud docker pull gcr.io/your-project-id/example-image This means as long as your account is a member of the project which the image belongs to, you can pull the image from that project to any other projects that your account is a member of.
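On the cluster side nothing special is needed in the manifest; the test cluster can reference the prod project's registry path directly, for example (project ID and tag are placeholders):

spec:
  containers:
  - name: example
    image: gcr.io/prod-project-id/example-image:v1

The nodes pull with their service account credentials, so that account (or the users involved) must have read access in the project that hosts the image; GCR stores the image layers in that project's Cloud Storage.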
###Instruction: We'd like to have a separate test and prod project on the Google Cloud Platform but we want to reuse the same docker images in both environments. Is it possible for the Kubernetes cluster running on the test project to use images pushed to the prod project? If so, how? ### Response: Looking at your question, I believe by account you mean project. The command for pulling an image from the registry is: $ gcloud docker pull gcr.io/your-project-id/example-image This means as long as your account is a member of the project which the image belongs to, you can pull the image from that project to any other projects that your account is a member of.
We'd like to have a separate test and prod project on the Google Cloud Platform but we want to reuse the same docker images in both environments. Is it possible for the Kubernetes cluster running on the test project to use images pushed to the prod project? If so, how?
TL;DR: Google Container Engine running Kubernetes v1.1 supports loadBalancerIP; just mark the auto-assigned IP as static first.

Kubernetes v1.1 supports externalIPs:

apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
  ...

So far there isn't really good, consistent documentation on how to use it on GCE. What is sure is that this IP must first be one of your pre-allocated static IPs. The cross-region load balancing documentation is mostly for Compute Engine and not Kubernetes/Container Engine, but it's still useful, especially the part "Configure the load balancing service". If you just create a Kubernetes LoadBalancer on GCE, it will create a network Compute Engine > Network > Network load balancing > Forwarding Rule pointing to a target pool made of the machines in your cluster (normally only those running the Pods matching the service selector). It looks like deleting a namespace doesn't nicely clean up those created rules.

Update: it is actually now supported (even though under-documented):

Check that you're running Kubernetes 1.1 or later (under GKE edit your cluster and check "Node version").
Allocate static IPs under Networking > External IP addresses, either:
- Deploy once without loadBalancerIP, wait until you have an external IP allocated when you run kubectl get svc, then look up that IP in the list on that page and change it from Ephemeral to Static.
- Click "Reserve a static address" (regional, in the region of your cluster, attached to None).
Edit your LoadBalancer to have loadBalancerIP=10.10.10.10 as above (adapt to the IP that was given to you by Google).

Now if you delete your LoadBalancer or even your namespace, it'll preserve that IP address upon re-deploying on that cluster.

Update 2016-11-14: see also the Kubernetes article describing how to set up a static IP for single or multiple domains on Kubernetes.
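The static address can also be reserved, or an existing ephemeral one promoted, from the command line instead of the console; a sketch, with the name, region and IP as placeholders:

# reserve a brand new regional static IP
$ gcloud compute addresses create my-k8s-ip --region us-central1

# or promote the ephemeral IP your LoadBalancer already received
$ gcloud compute addresses create my-k8s-ip --region us-central1 --addresses 104.154.10.10

$ gcloud compute addresses list

Whichever way it was reserved, that is the value to put in loadBalancerIP.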
###Instruction: I have a Kubernetes cluster running on Google Compute Engine and I would like to assign static IP addresses to my external services (type: LoadBalancer). I am unsure about whether this is possible at the moment or not. I found the following sources on that topic: Kubernetes Service Documentation lets you define an external IP address, but it fails with cannot unmarshal object into Go value of type []v1.LoadBalancerIngress The publicIPs field seems to let me specify external IPs, but it doesn't seem to work either This Github issue states that what I'm trying to do is not supported yet, but will be in Kubernetes v1.1 The clusterIP field also lets me specify an IP address, but fails with "provided IP is not in the valid range" I feel like the usage of static IPs is quite important when setting up web services. Am I missing something here? I'd be very grateful if somebody could enlighten me here! EDIT: For clarification: I am not using Container Engine, I set up a cluster myself using the official installation instructions for Compute Engine. All IP addresses associated with my k8s services are marked as "ephemeral", which means recreating a kubernetes service may lead to a different external IP address (which is why I need them to be static). ### Response: TL;DR Google Container Engine running Kubernetes v1.1 supports loadBalancerIP just mark the auto-assigned IP as static first. Kubernetes v1.1 supports externalIPs: apiVersion: v1 kind: Service spec: type: LoadBalancer loadBalancerIP: 10.10.10.10 ... So far there isn't a really good consistent documentation on how to use it on GCE. What is sure is that this IP must first be one of your pre-allocated static IPs. The cross-region load balancing documentation is mostly for Compute Engine and not Kubernetes/Container Engine, but it's still useful especially the part "Configure the load balancing service". If you just create a Kubernetes LoadBalancer on GCE, it will create a network Compute Engine > Network > Network load balancing > Forwarding Rule pointing to a target pool made of your machines on your cluster (normally only those running the Pods matching the service selector). It looks like deleting a namespace doesn't nicely clean-up the those created rules. Update It is actually now supported (even though under documented): Check that you're running Kubernetes 1.1 or later (under GKE edit your cluster and check "Node version") Allocate static IPs under Networking > External IP addresses, either: Deploy once without loadBalancerIP, wait until you've an external IP allocated when you run kubectl get svc, and look up that IP in the list on that page and change those from Ephemeral to Static. Click "Reserver a static address" regional in the region of your cluster, attached to None. Edit your LoadBalancer to have loadBalancerIP=10.10.10.10 as above (adapt to the IP that was given to you by Google). Now if you delete your LoadBalancer or even your namespace, it'll preserve that IP address upon re-reploying on that cluster. Update 2016-11-14 See also Kubernetes article describing how to set up a static IP for single or multiple domains on Kubernetes.
I have a Kubernetes cluster running on Google Compute Engine and I would like to assign static IP addresses to my external services (type: LoadBalancer). I am unsure about whether this is possible at the moment or not. I found the following sources on that topic: Kubernetes Service Documentation lets you define an external IP address, but it fails with cannot unmarshal object into Go value of type []v1.LoadBalancerIngress The publicIPs field seems to let me specify external IPs, but it doesn't seem to work either This Github issue states that what I'm trying to do is not supported yet, but will be in Kubernetes v1.1 The clusterIP field also lets me specify an IP address, but fails with "provided IP is not in the valid range" I feel like the usage of static IPs is quite important when setting up web services. Am I missing something here? I'd be very grateful if somebody could enlighten me here! EDIT: For clarification: I am not using Container Engine, I set up a cluster myself using the official installation instructions for Compute Engine. All IP addresses associated with my k8s services are marked as "ephemeral", which means recreating a kubernetes service may lead to a different external IP address (which is why I need them to be static).
PR #12112 changed kubectl get pods to not show terminated pods by default. You can get the old behavior (show all pods) by using kubectl get pods -a
###Instruction: All running computation Monte Carlo jobs on Google Compute Engine. Last time I ran them was September, and things have changed a bit since then. I used to run a lot of jobs with kubectl from some pod.json file, no RC, no restart, fire-and-forget setup. After I started jobs I used to get pods (kubectl get pods) and typically output looks like NAME READY STATUS RESTARTS AGE r8o3il08c25-y0z10 1/1 Running 0 56m r8o3il08c25-y0z15 0/1 Pending 0 56m After one is done and second is started, I used to get output NAME READY STATUS RESTARTS AGE r8o3il08c25-y0z10 1/1 Exit:0 0 1h r8o3il08c25-y0z15 1/1 Running 0 1h So I could, using simple grep, get the picture how many are running, how many are pending, and how many are done, and query exit code (so to check if there are errors with some pods) etc Now output with latest SDK (Google Cloud SDK 0.9.87) looks like this NAME READY STATUS RESTARTS AGE All finished pods are now invisible. Could I get old behavior back? Why it was changed? ### Response: PR #12112 changed kubectl get pods to not show terminated pods by default. You can get the old behavior (show all pods) by using kubectl get pods -a
All running computation Monte Carlo jobs on Google Compute Engine. Last time I ran them was September, and things have changed a bit since then. I used to run a lot of jobs with kubectl from some pod.json file, no RC, no restart, fire-and-forget setup. After I started jobs I used to get pods (kubectl get pods) and typically output looks like NAME READY STATUS RESTARTS AGE r8o3il08c25-y0z10 1/1 Running 0 56m r8o3il08c25-y0z15 0/1 Pending 0 56m After one is done and second is started, I used to get output NAME READY STATUS RESTARTS AGE r8o3il08c25-y0z10 1/1 Exit:0 0 1h r8o3il08c25-y0z15 1/1 Running 0 1h So I could, using simple grep, get the picture how many are running, how many are pending, and how many are done, and query exit code (so to check if there are errors with some pods) etc Now output with latest SDK (Google Cloud SDK 0.9.87) looks like this NAME READY STATUS RESTARTS AGE All finished pods are now invisible. Could I get old behavior back? Why it was changed?
You'll need to tell Jenkins how and where to run your builds by adding your Kubernetes cluster as a 'cloud' in the Jenkins configuration. Go to Manage Jenkins -> Configure System -> Cloud -> Add new cloud and select 'Kubernetes'. You'll find the server certificate key, user name and password in your local kubectl configuration (usually in ~/.kube/config). The values for 'Kubernetes URL' and 'Jenkins URL' depend on your cluster setup. Next, you'll need to configure the docker images that should be used to run your builds by selecting 'Add Docker Template'. Use labels to define which tasks should be run with which image! Here's a good video tutorial and here you'll find a nice tutorial which explains everything in detail.
###Instruction: I've been struggling with setting up the Jenkins Kubernetes Plugin on the Google Container Engine. I have the plugin installed but I think all my builds are still running on master. I haven't found any good documentation or guides on configuring this. UPDATE I removed the master executor from my Jenkins image. So now my builds aren't running on master but now they have no executor so they don't run at all. Just waits in the queue forever. ### Response: You'll need to tell Jenkins how and where to run your builds by adding your Kubernetes cluster as a 'cloud' in the Jenkins configuration. Go to Manage Jenkins -> Configure System -> Cloud -> Add new cloud and select 'Kubernetes'. You'll find the server certificate key, user name and password in your local kubectl configuration (usually in ~/.kube/config). The values for 'Kubernetes URL' and 'Jenkins URL' depend on your cluster setup. Next, you'll need to configure the docker images that should be used to run your builds by selecting 'Add Docker Template'. Use labels to define which tasks should be run with which image! Here's a good video tutorial and here you'll find a nice tutorial which explains everything in detail.
I've been struggling with setting up the Jenkins Kubernetes Plugin on the Google Container Engine. I have the plugin installed but I think all my builds are still running on master. I haven't found any good documentation or guides on configuring this. UPDATE I removed the master executor from my Jenkins image. So now my builds aren't running on master but now they have no executor so they don't run at all. Just waits in the queue forever.
In a standard Kubernetes installation, all pods (even across namespaces) share a flat IP space and can all communicate with each other. To get isolation, you'll need to customize your install to prevent cross namespace communication. One way to do this is to use OpenContrail. They recently wrote a blog post describing an example deployment using the Guestbook from the Kubernetes repository.
###Instruction: We need to know about pods network isolation. Is there a possibility to access one pod from another one in cluster? Maybe by namespace dividing? We also need pod's membership in local networks, which are not accessible from outside. Any plans? Is it will be soon? ### Response: In a standard Kubernetes installation, all pods (even across namespaces) share a flat IP space and can all communicate with each other. To get isolation, you'll need to customize your install to prevent cross namespace communication. One way to do this is to use OpenContrail. They recently wrote a blog post describing an example deployment using the Guestbook from the Kubernetes repository.
We need to know about pods network isolation. Is there a possibility to access one pod from another one in cluster? Maybe by namespace dividing? We also need pod's membership in local networks, which are not accessible from outside. Any plans? Is it will be soon?
Yes, it is new behavior in v1.1.1. PR #9165 added graceful deletion of pods, which causes them to appear in the "Terminating" state for a short amount of time. Issue #1535 has some more background discussion.
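If you'd rather not wait out the grace period for throwaway pods, you can shorten or skip it at deletion time, e.g.:

$ kubectl delete pod cassandra --grace-period=0

The default grace period comes from terminationGracePeriodSeconds in the pod spec (30 seconds unless you set it).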
###Instruction: A new GKE cluster created at v1.1.1 and using latest kubectl (from gcloud components update) when deleting resources (say a pod) sometimes kubectl get pods in a 'Terminating' state and other times they delete (are removed from kubectl get pods output) right away. NAME READY STATUS RESTARTS AGE cassandra 1/1 Terminating 0 44s Is this new behavior of kubectl? I don't recall it doing this at my prior levels. ### Response: Yes, it is new behavior in v1.1.1. PR #9165 added graceful deletion of pods, which causes them to appear in the "Terminating" state for a short amount of time. Issue #1535 has some more background discussion.
A new GKE cluster created at v1.1.1 and using latest kubectl (from gcloud components update) when deleting resources (say a pod) sometimes kubectl get pods in a 'Terminating' state and other times they delete (are removed from kubectl get pods output) right away. NAME READY STATUS RESTARTS AGE cassandra 1/1 Terminating 0 44s Is this new behavior of kubectl? I don't recall it doing this at my prior levels.
There are definitely folks running Kubernetes HA masters in production following the instructions for High Availability Kubernetes Clusters. As noted at the beginning of that page, it's an advanced use case and requires in-depth knowledge of how the Kubernetes master components work.
###Instruction: Hi all we are looking for practically and tested guide or reference for kubernetes master high availability or other solution for master node fail over. ### Response: There are definitely folks running Kubernetes HA masters in production following the instructions for High Availability Kubernetes Clusters. As noted at the beginning of that page, it's an advanced use case and requires in-depth knowledge of how the Kubernetes master components work.
Hi all we are looking for practically and tested guide or reference for kubernetes master high availability or other solution for master node fail over.
You do not need to install Powerstrip anymore (it has been deprecated). Powerstrip was a useful tool early on to prototype Docker extensions, but things have moved on since Docker added an API for this via the plugins model (Powerstrip was essentially a precursor to Docker plugins). With the Flocker plugin you use the regular Docker commands, along the lines of:

docker run --volume-driver=flocker ...
docker volume create --name <volume-name> -d flocker

If you have the Docker plugin installed you should be fine. Instructions on manual plugin setup are located here: http://doc-dev.clusterhq.com/install/install-node.html
###Instruction: I am using flocker volumes. Should I install Powerstrip? I have installed flocker, but not installed Powerstrip. I create flocker pod fail: Unable to mount volumes for pod "flocker-web-3gy69_default": Get https://localhost:4523/v1/configuration/datasets: x509: certificate is valid for control-service, hostname, not localhost. I have set FLOCKER_CONTROL_SERVICE_BASE_URL and MY_NETWORK_IDENTITY in flocker-docker-plugin.service file. ### Response: You do not need to install Powerstrip anymore. (it's been deprecated) Powerstrip was a useful tool early on to prototype docker extensions but we've moved on since Docker has added the docker api via the plugins model. (Powerstrip was essentially a Precurser to docker plugins) docker --volume-driver=flocker docker volume create --name -d flocker If you have the docker plugin installed you should be fine. instructions on manual plugin setup are located here http://doc-dev.clusterhq.com/install/install-node.html
I am using flocker volumes. Should I install Powerstrip? I have installed flocker, but not installed Powerstrip. I create flocker pod fail: Unable to mount volumes for pod "flocker-web-3gy69_default": Get https://localhost:4523/v1/configuration/datasets: x509: certificate is valid for control-service, hostname, not localhost. I have set FLOCKER_CONTROL_SERVICE_BASE_URL and MY_NETWORK_IDENTITY in flocker-docker-plugin.service file.
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]

Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed to the shell as the script to run. In shell scripting a semicolon separates commands, and && conditionally runs the following command only if the first one succeeds. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded.

Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
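Dropped into the pod spec from the question, that looks roughly like this (the commands themselves are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    args: ["echo starting; date && echo done"]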
###Instruction: In this official document, it can run command in a yaml config file: https://kubernetes.io/docs/tasks/configure-pod-container/ apiVersion: v1 kind: Pod metadata: name: hello-world spec: # specification of the pod’s contents restartPolicy: Never containers: - name: hello image: "ubuntu:14.04" env: - name: MESSAGE value: "hello world" command: ["/bin/sh","-c"] args: ["/bin/echo \"${MESSAGE}\""] If I want to run more than one command, how to do? ### Response: command: ["/bin/sh","-c"] args: ["command one; command two && command three"] Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting a semicolon separates commands, and && conditionally runs the following command if the first succeed. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded. Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
In this official document, it can run command in a yaml config file: https://kubernetes.io/docs/tasks/configure-pod-container/ apiVersion: v1 kind: Pod metadata: name: hello-world spec: # specification of the pod’s contents restartPolicy: Never containers: - name: hello image: "ubuntu:14.04" env: - name: MESSAGE value: "hello world" command: ["/bin/sh","-c"] args: ["/bin/echo \"${MESSAGE}\""] If I want to run more than one command, how to do?
POSTing this to the API server will have no effect if you have not configured an Ingress controller. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found here.
###Instruction: I have a setup of kubernetes on a coreos baremetal. For now I did the connection from outside world to service with a nginx reverse-proxy. I'm trying the new Ingress resource. for now I have added a simple ingress: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kube-ui spec: backend: serviceName: kube-ui servicePort: 80 that starts like this: INGRESS NAME RULE BACKEND ADDRESS kube-ui - kube-ui:80 My question is how to connect from the outside internet to that ingress point as this resource have no ADDRESS ... ? ### Response: POSTing this to the API server will have no effect if you have not configured an Ingress controller. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found here.
I have a setup of kubernetes on a coreos baremetal. For now I did the connection from outside world to service with a nginx reverse-proxy. I'm trying the new Ingress resource. for now I have added a simple ingress: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kube-ui spec: backend: serviceName: kube-ui servicePort: 80 that starts like this: INGRESS NAME RULE BACKEND ADDRESS kube-ui - kube-ui:80 My question is how to connect from the outside internet to that ingress point as this resource have no ADDRESS ... ?
Routes are HTTP, HTTPS, or TCP wrapped with TLS. You can use a service with a "node port", which load balances your app instances over TCP or UDP at a high port exposed on each node. Routes point to services to get their source data, but since routes expect to be able to identify which backend service to route traffic to by looking at the incoming HTTP Host header or TLS SNI info, routes today only support those protocols.
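A sketch of such a node-port service for the SIP case in the question; the selector and the nodePort value are placeholders, and the nodePort has to fall inside the cluster's service node port range (30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: sip
spec:
  type: NodePort
  selector:
    app: sip-app
  ports:
  - name: sip-udp
    protocol: UDP
    port: 5060
    targetPort: 5060
    nodePort: 30060

Clients then reach the application at <any-node-ip>:30060 over UDP, without going through a route.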
###Instruction: I am going through the Openshift V3 documentation and got confused by services and routes details. The description in service says that: Each node also runs a simple network proxy that reflects the services defined in the API on that node. This allows the node to do simple TCP and UDP stream forwarding across a set of back ends. it can forward TCP/UDP stream while description in routes says: Routers support the following protocols: HTTP HTTPS (with SNI) WebSockets TLS with SNI Basically, my requirement is to run an SIP application which runs over UDP and port 5060. Please help me understand what is meant by service and route in the above context and can I deploy my application on Openshift V3. I found few related questions but those are fairly old. EDIT Tagged Kubernetes because it is also used within and may be someone from them can help. Thanks ### Response: Routes are http, HTTPS, or TCP wrapped with TLS. You can use a service with a "node port", which load balances your app instances over TCP or udp at a high port exposed on each node. Routes point to services to get their source data, but since routes expect to be able to identify which backend service to route traffic to by looking at the incoming HTTP Host header or TLS SNI info, routes today only support those protocols.
I am going through the Openshift V3 documentation and got confused by services and routes details. The description in service says that: Each node also runs a simple network proxy that reflects the services defined in the API on that node. This allows the node to do simple TCP and UDP stream forwarding across a set of back ends. it can forward TCP/UDP stream while description in routes says: Routers support the following protocols: HTTP HTTPS (with SNI) WebSockets TLS with SNI Basically, my requirement is to run an SIP application which runs over UDP and port 5060. Please help me understand what is meant by service and route in the above context and can I deploy my application on Openshift V3. I found few related questions but those are fairly old. EDIT Tagged Kubernetes because it is also used within and may be someone from them can help. Thanks
You cannot add or remove containers in a running pod. If you are using replication controller, kubectl rolling-update is the easiest solution, but this will require editing the pod template. That said, are you sure you need to add your containers to the existing pod? Unless strictly necessary, it's better to just run the new containers in a separate pod, e.g. with kubectl run <name> --image=<image> Note: This is the generic kubernetes answer, there may be a more elegant solution for OpenShift
###Instruction: I can add a container to a pod by editing the pod template, but I'm looking for something simpler. Is there any way to add a container to a deployed OpenShift pod without editing the pod template? CLI preferable. ### Response: You cannot add or remove containers in a running pod. If you are using replication controller, kubectl rolling-update is the easiest solution, but this will require editing the pod template. That said, are you sure you need to add your containers to the existing pod? Unless strictly necessary, it's better to just run the new containers in a separate pod, e.g. with kubectl run <name> --image=<image> Note: This is the generic kubernetes answer, there may be a more elegant solution for OpenShift
I can add a container to a pod by editing the pod template, but I'm looking for something simpler. Is there any way to add a container to a deployed OpenShift pod without editing the pod template? CLI preferable.
Check out the Downward API, which allows the container to know more about itself, including its pod's labels.
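For the labels specifically, a downward API volume can expose them as a file inside the container; a sketch based on the pod template from the question (check that your cluster version supports this volume type):
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
    tier: backend
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
      readOnly: true
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
The container can then read its labels from /etc/podinfo/labels.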
###Instruction: When we create a yml for the replication controller, we can give labels for the pod that is being created. apiVersion: v1 kind: ReplicationController metadata: name: redis spec: template: metadata: labels: app: redis tier: backend Can the containers that reside in this pod access those label values? ### Response: Check out the Downward API, which allows the container to know more about itself.
When we create a yml for the replication controller, we can give labels for the pod that is being created. apiVersion: v1 kind: ReplicationController metadata: name: redis spec: template: metadata: labels: app: redis tier: backend Can the containers that reside in this pod access those label values?
Sorry about this question, I have found it myself: http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html
###Instruction: I feel very confused when use the kubernetes! Where can I find about the specify api about the components of kubernetes ? such as pod, service, volumes, and Persistent Volumes and so on, when I create the components use the configure files. Who can help me? ### Response: Sorry about this question, I had find it. http://kubernetes.io/v1.1/docs/api-reference/v1/definitions.html
I feel very confused when use the kubernetes! Where can I find about the specify api about the components of kubernetes ? such as pod, service, volumes, and Persistent Volumes and so on, when I create the components use the configure files. Who can help me?
If you used this command to create your certificate: openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \ -CAcreateserial -out server-cert.pem then your issue can be resolved by re-signing the certificate with a subjectAltName extension passed in via -extfile extfile.cnf: echo subjectAltName = IP:worker_node_ip > extfile.cnf openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \ -out server-cert.pem -extfile extfile.cnf You can specify any number of IP addresses, such as IP:127.0.0.1,IP:127.0.1.1 (non-localhost addresses as well).
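To double-check that the regenerated certificate actually carries the IP SAN, something like this should print the extension:
openssl x509 -in server-cert.pem -noout -text | grep -A1 "Subject Alternative Name"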
###Instruction: I'm trying to use kubectl exec to enter one of my containers, but I'm getting stuck on this error. $ kubectl exec -it ubuntu -- bash error: Unable to upgrade connection: { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "x509: cannot validate certificate for <worker_node_ip> because it doesn't contain any IP SANs", "code": 500 } I have configured kubectl with my CA certificate and admin keys, etc according to this guide https://coreos.com/kubernetes/docs/1.0.6/configure-kubectl.html Update I also found the same error in the API server's logs E1125 17:33:16.308389 1 errors.go:62] apiserver received an error that is not an unversioned.Status: x509: cannot validate certificate for <worker_node_ip> because it doesn't contain any IP SANs Does this mean I have configured the certs incorrectly on my worker/master nodes or on kubectl on my local machine? ### Response: If you used this command to create your certificate: openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem \ -CAcreateserial -out server-cert.pem Then your issue can be resolved by doing the following as the 'client' cert uses an -extfile extfile.cnf: echo subjectAltName = IP:worker_node_ip > extfile.cnf openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \ -out server-cert.pem -extfile extfile.cnf You can specify any number of IP addresses, such as IP:127.0.0.1,IP:127.0.1.1 (non localhost as well).
I'm trying to use kubectl exec to enter one of my containers, but I'm getting stuck on this error. $ kubectl exec -it ubuntu -- bash error: Unable to upgrade connection: { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "x509: cannot validate certificate for <worker_node_ip> because it doesn't contain any IP SANs", "code": 500 } I have configured kubectl with my CA certificate and admin keys, etc according to this guide https://coreos.com/kubernetes/docs/1.0.6/configure-kubectl.html Update I also found the same error in the API server's logs E1125 17:33:16.308389 1 errors.go:62] apiserver received an error that is not an unversioned.Status: x509: cannot validate certificate for <worker_node_ip> because it doesn't contain any IP SANs Does this mean I have configured the certs incorrectly on my worker/master nodes or on kubectl on my local machine?
This is one of the large problems that Kubernetes is trying to solve with Cross-Cluster Service Discovery as part of the Cluster Federation plans. You can also check out/contribute to the Federation SIG. If you've used one of the hacky solutions described here, you might be able to hack up your /etc/resolv.conf to also search the nameserver from the other cluster. Be careful, because this may run you into truncation issues. You might also be able to modify the skydns RC for your clusters to include an extra kube2sky pod that points at the other cluster's kubernetes service (I haven't tried this, or thought through all of the implications). Neither of the two hacks I've described above would prevent name collisions, so you'd have to prevent those manually.
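Purely as an illustration of the resolv.conf hack, using the kube-dns IPs already shown in the question, the file on a cluster A node or pod might end up looking like the sketch below. Keep in mind this is fragile: glibc only falls through to later nameservers on timeouts (not on NXDOMAIN), it historically honours at most three nameservers, and long search lists get truncated.
# cluster A kube-dns
nameserver 10.67.240.10
# cluster C kube-dns
nameserver 10.91.240.10
nameserver 169.254.169.254
search default.svc.cluster.local svc.cluster.local cluster.local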
###Instruction: Relates to How to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in same project. Asking again since Kubernetes has been changes a lot since July. Context: I'm working on an infrastructure with multiple clusters serving different purposes, e.g.: Cluster A runs services/apps creating data for consumption Cluster B runs services/apps consuming data created by apps in cluster A Cluster C runs data services like Redis, Memcache, etc. All clusters are in the default namespace. Problem: In Kubernetes, each cluster gets its own kubernetes (in the default namespace) and kube-dns (in the kube-system namespace) service with a different IP. What happens with this setup is that, services in cluster A and B above can't discover (in service discovery terminology), let's say, Redis in cluster C. So a nslookup redis.default.svc.cluster.local from one of the services in cluster A/B comes back with ** server can't find redis.default.svc.cluster.local: NXDOMAIN. Note: This works from within cluster C. I've read as many documents as I found about kube-dns, and pretty much all assume one cluster setup. Clusters info: Here are /etc/resolv.conf from two different clusters showing DNS nameservers with no common kube-dns ancestor: Cluster A: nameserver 10.67.240.10 nameserver 169.254.169.254 nameserver 10.240.0.1 search default.svc.cluster.local svc.cluster.local cluster.local c.project-name.internal. 1025230764914.google.internal. google.internal. Cluster C: nameserver 10.91.240.10 nameserver 169.254.169.254 nameserver 10.240.0.1 search default.svc.cluster.local svc.cluster.local cluster.local c.project-name.internal. google.internal. options ndots:5 Both clusters have these services running with their respective IPs for their cluster in the kube-system namespace: NAME LABELS SELECTOR kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui monitoring-heapster kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster k8s-app=heapster What is the ideal fix/update to this setup that can get the shared services discovered across all Kubernetes clusters in a GCE environment? ### Response: This is one of the large problems that Kubernetes is trying to solve with Cross-Cluster Service Discovery as a part of the Cluster Federation plans. You can also check out/contribute to the Federation SIG. If you've used one of the hacks solutions described here, you might be able to hack up your /etc/resolve.conf to also search the nameserver from the other cluster. Be careful, because this may run you into truncation issues. You might also be able to modify the sky-dns RC for your clusters to include an extra kube2sky pod that points at the other cluster's kubernetes service (I haven't tried this, or thought through all of the implications). Neither of the two hacks I've described above would prevent name collision, so you'd have to manually prevent that.
Relates to How to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in same project. Asking again since Kubernetes has been changes a lot since July. Context: I'm working on an infrastructure with multiple clusters serving different purposes, e.g.: Cluster A runs services/apps creating data for consumption Cluster B runs services/apps consuming data created by apps in cluster A Cluster C runs data services like Redis, Memcache, etc. All clusters are in the default namespace. Problem: In Kubernetes, each cluster gets its own kubernetes (in the default namespace) and kube-dns (in the kube-system namespace) service with a different IP. What happens with this setup is that, services in cluster A and B above can't discover (in service discovery terminology), let's say, Redis in cluster C. So a nslookup redis.default.svc.cluster.local from one of the services in cluster A/B comes back with ** server can't find redis.default.svc.cluster.local: NXDOMAIN. Note: This works from within cluster C. I've read as many documents as I found about kube-dns, and pretty much all assume one cluster setup. Clusters info: Here are /etc/resolv.conf from two different clusters showing DNS nameservers with no common kube-dns ancestor: Cluster A: nameserver 10.67.240.10 nameserver 169.254.169.254 nameserver 10.240.0.1 search default.svc.cluster.local svc.cluster.local cluster.local c.project-name.internal. 1025230764914.google.internal. google.internal. Cluster C: nameserver 10.91.240.10 nameserver 169.254.169.254 nameserver 10.240.0.1 search default.svc.cluster.local svc.cluster.local cluster.local c.project-name.internal. google.internal. options ndots:5 Both clusters have these services running with their respective IPs for their cluster in the kube-system namespace: NAME LABELS SELECTOR kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns kube-ui k8s-app=kube-ui,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeUI k8s-app=kube-ui monitoring-heapster kubernetes.io/cluster-service=true,kubernetes.io/name=Heapster k8s-app=heapster What is the ideal fix/update to this setup that can get the shared services discovered across all Kubernetes clusters in a GCE environment?
There is no command today that makes it easy to add a container to the pod template of an RC or deployment. You can use oc new-app to quickly generate deployment configs that have multiple containers, e.g. oc new-app php+apache+somethingelse, but this won't let you deeply customize those containers. I agree this would be nice to have - as a mode of "run", perhaps.
###Instruction: I can add a container to a pod by editing the pod template, but I'm looking for something simpler. Is there any way to add a container to a deployed OpenShift pod without editing the pod template? CLI preferable. ### Response: There is no command today that makes it easy to add a container to the pod template for an RC or deployment. You can use oc new-app to quickly generate deployment configs that have multiple containers with oc new-app php+apache+somethingelse But this won't let you deeply customize those containers. Agree this would be nice to have - as a mode to "run", perhaps.
I can add a container to a pod by editing the pod template, but I'm looking for something simpler. Is there any way to add a container to a deployed OpenShift pod without editing the pod template? CLI preferable.
You can't clone only one file. The gitRepo volume executes git clone, which only lets you clone the entire repository, and volumeMounts doesn't support executing a command in it.
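If all you really need is that single file, one workaround sketch (not using a gitRepo volume at all) is to fetch it over HTTP into an emptyDir when the container starts. This assumes the file is reachable over plain HTTP(S), that the image ships a fetch tool such as wget, and that the URL below is made up; if the repository is only reachable over ssh, this approach does not apply.
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  volumes:
  - name: config
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: config
      mountPath: /mypath
    command:
    - /bin/sh
    - -c
    - wget -O /mypath/configuration.cfg https://gitserver/raw/some/long/path/configuration.cfg && exec nginx -g 'daemon off;'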
###Instruction: I know how to mount git repo when I start pod. See: apiVersion: v1 kind: Pod metadata: name: server spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /mypath name: git-volume volumes: - name: git-volume gitRepo: repository: "git@somewhere:me/my-git-repository.git" revision: "22f1d8406d464b0c0874075539c1f2e96c253775" That's perfect, but it means that I need to clone whole repository. What I need is to obtain "clone" only one file. - name: git-volume gitRepo: repository: "git@somewhere:me/my-git-repository.git/some/long/path/to/specific/file/configuration.cfg" Is it possible? Or can I mount some volume and execute some command in it? Something like: ... containers: - image: nginx name: nginx volumeMounts: - mountPath: /mypath name: git-volume command: wget htttp://gitrepo/path/to/file/config.conf Thanks. ### Response: You can't clone only one file. gitRepo executes git clone which only allows you to clone the entire repository. volumeMounts doesn't support executing command in it.
I know how to mount git repo when I start pod. See: apiVersion: v1 kind: Pod metadata: name: server spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /mypath name: git-volume volumes: - name: git-volume gitRepo: repository: "git@somewhere:me/my-git-repository.git" revision: "22f1d8406d464b0c0874075539c1f2e96c253775" That's perfect, but it means that I need to clone whole repository. What I need is to obtain "clone" only one file. - name: git-volume gitRepo: repository: "git@somewhere:me/my-git-repository.git/some/long/path/to/specific/file/configuration.cfg" Is it possible? Or can I mount some volume and execute some command in it? Something like: ... containers: - image: nginx name: nginx volumeMounts: - mountPath: /mypath name: git-volume command: wget htttp://gitrepo/path/to/file/config.conf Thanks.
A few potential problems here: 1) Ceph needs its nodes to be up and running to be accessible: did you say the nodes were mounting disks from a different Ceph cluster, or is the Ceph cluster running on the same nodes? If it is the same nodes, then it makes sense that the drive not being accessible paralyzes K8s. 2) There is a bug (it was there at least in 1.0.6, and I'm not sure whether it has been resolved) where pods cannot start when they try to mount a disk that was already mounted on a different node, because it was never unmounted. This is a K8s issue to follow (sorry, I can't seem to find the link right now). 3) etcd may also get stuck waiting for node 3 if it only has 2 nodes, as it needs a majority to elect a master.
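A couple of quick checks can narrow this down; the pod name is the one from the question, and this assumes etcdctl is available on the master:
# does etcd still have quorum?
etcdctl cluster-health
# where was the pod scheduled, and what events does it report?
kubectl describe pod java-site-vboaq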
###Instruction: I am setting up a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I followed the Kubernetes install tutorial for Ubuntu. Each bare metal server also has 2T of disk space exported using Ceph 0.94.5. Everything was working fine, but when one node failed to start (it wasn't able to mount a partition) the only service the cluster was providing also stopped working. I run some commands: $ kubectl get nodes NAME LABELS STATUS 10.70.2.1 kubernetes.io/hostname=10.70.2.1 Ready,SchedulingDisabled 10.70.2.2 kubernetes.io/hostname=10.70.2.2 Ready 10.70.2.3 kubernetes.io/hostname=10.70.2.3 NotReady 10.70.2.4 kubernetes.io/hostname=10.70.2.4 Ready It just showed that I had a node down. $ kubectl get pods NAME READY STATUS RESTARTS AGE java-mysql-5v7iu 1/1 Running 1 5d java-site-vboaq 1/1 Running 0 4d $ kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) java-mysql name=java-mysql name=java-mysql ***.***.3.12 3306/TCP java-site name=java-site name=java-site ***.***.3.11 80/TCP kubernetes component=apiserver,provider=kubernetes <none> ***.***.3.1 443/TCP It showed all pods and services working fine. However, I could not connect to one of the pods (java-site-vboaq): $ kubectl exec java-site-vboaq -i -t -- bash error: Error executing remote command: Error executing command in container: container not found ("java-site") But, the pods weren't even running on the downed node: $ kubectl describe pod java-mysql-5v7iu Image(s): mysql:5 Node: 10.70.2.2/10.70.2.2 Status: Running $ kubectl describe pod java-site-vboaq Image(s): javasite-img Node: 10.70.2.2/10.70.2.2 Status: Running After the downed node (10.70.2.3) was back, everything went back to normal. How do I fix this problem? If a node is out, I want Kubernetes to migrate pods accordingly and keep the services working. Does it have to do with the fact that the downed node was stuck in the boot (waiting for a partition to mount) and not 100% down? ### Response: A few potential problems here: 1) Ceph needs its nodes to be up and running to be accessible: did you say the nodes were mounting disk from a different Ceph cluster, or is the Ceph cluster running on the same nodes? If the same nodes, then it makes sense that the drive not being accessible paralyzes K8s. 2) There is a bug (at least it was there on 1.0.6 and not sure if it was resolved) about pods not being able to start when trying to mount a disk that was already mounted on a different node, because it was never unmounted. This is a K8s issue to follow (sorry can't seem to find the link right now) 3) etcd may also get stuck waiting for node 3 if it only has 2 nodes, as it needs a majority to elect a master.
I am setting up a small Kubernetes cluster using a VM (master) and 3 bare metal servers (all running Ubuntu 14.04). I followed the Kubernetes install tutorial for Ubuntu. Each bare metal server also has 2T of disk space exported using Ceph 0.94.5. Everything was working fine, but when one node failed to start (it wasn't able to mount a partition) the only service the cluster was providing also stopped working. I run some commands: $ kubectl get nodes NAME LABELS STATUS 10.70.2.1 kubernetes.io/hostname=10.70.2.1 Ready,SchedulingDisabled 10.70.2.2 kubernetes.io/hostname=10.70.2.2 Ready 10.70.2.3 kubernetes.io/hostname=10.70.2.3 NotReady 10.70.2.4 kubernetes.io/hostname=10.70.2.4 Ready It just showed that I had a node down. $ kubectl get pods NAME READY STATUS RESTARTS AGE java-mysql-5v7iu 1/1 Running 1 5d java-site-vboaq 1/1 Running 0 4d $ kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) java-mysql name=java-mysql name=java-mysql ***.***.3.12 3306/TCP java-site name=java-site name=java-site ***.***.3.11 80/TCP kubernetes component=apiserver,provider=kubernetes <none> ***.***.3.1 443/TCP It showed all pods and services working fine. However, I could not connect to one of the pods (java-site-vboaq): $ kubectl exec java-site-vboaq -i -t -- bash error: Error executing remote command: Error executing command in container: container not found ("java-site") But, the pods weren't even running on the downed node: $ kubectl describe pod java-mysql-5v7iu Image(s): mysql:5 Node: 10.70.2.2/10.70.2.2 Status: Running $ kubectl describe pod java-site-vboaq Image(s): javasite-img Node: 10.70.2.2/10.70.2.2 Status: Running After the downed node (10.70.2.3) was back, everything went back to normal. How do I fix this problem? If a node is out, I want Kubernetes to migrate pods accordingly and keep the services working. Does it have to do with the fact that the downed node was stuck in the boot (waiting for a partition to mount) and not 100% down?
You can't really do this right now. The issue to follow if you're interested in templating is https://github.com/kubernetes/kubernetes/issues/11492 Also, this is currently a topic of discussion in the configuration SIG https://groups.google.com/forum/#!forum/kubernetes-sig-config I'd suggest you register your interest in the feature in one of those places.
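In the meantime, most people do the templating outside the cluster and pipe the result into kubectl. A minimal sketch, assuming a template.yaml that uses a $BAR placeholder and that envsubst (from gettext) is installed; both are choices on your side, not anything Kubernetes prescribes:
export BAR=$(etcdctl get /path/to/bar)
envsubst < template.yaml | kubectl create -f -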
###Instruction: I have two separate Kubernetes clusters that will be used for a staging and a production environment, respectively. I want to have YAML manifests for the Kubernetes API objects I will be submitting to each cluster, but some of the objects will have slightly different configurations between the two environments. As a made up but illustrative example, imagine running an internal Docker registry on each cluster, one using S3 as the storage back end and one using the GCS back end. The registry container can accept these configuration values as environment variables or read from a file, both of which Kubernetes manifests support, but how should I populate these values for each environment? Essentially what I want is a way to have a manifest that looks something like this, where the $() syntax is variable interpolation that would happen on the server when the manifest is submitted: --- apiVersion: v1 kind: Pod metadata: name: foo spec: containers: - name: foo image: foo env: - name: bar value: $(etcdctl get /path/to/bar) I could write templates that use placeholders for the values and then process the template, pulling in real values from some external source, to produce the final manifest that is actually submitted to the cluster. However, I thought I'd ask first in case a tool that does this already exists, or there is some Kubernetes-blessed way of doing this that I'm not aware of. It certainly seems like something that many people will need to do in order to use Kubernetes. Other ideas I've had include using some combination of etcd, confd, and scripts on the host nodes, but this starts to get into the realm of host configuration management which I want to avoid at pretty much all costs. I'm using CoreOS and the hosts are provisioned entirely through coreos-cloudinit. In other words, nothing is manipulated on the host system that is not defined at the time the node is created, so traditional configuration management tools like Ansible are out. I'm aware of Kubernetes's secrets system, but some of these variable values could be quite large, and most of them are not secrets. ### Response: You can't really do this right now. The issue to follow if you're interested in templating is https://github.com/kubernetes/kubernetes/issues/11492 Also, this is currently a topic of discussion in the configuration SIG https://groups.google.com/forum/#!forum/kubernetes-sig-config I'd suggest you register your interest in the feature in one of those places.
I have two separate Kubernetes clusters that will be used for a staging and a production environment, respectively. I want to have YAML manifests for the Kubernetes API objects I will be submitting to each cluster, but some of the objects will have slightly different configurations between the two environments. As a made up but illustrative example, imagine running an internal Docker registry on each cluster, one using S3 as the storage back end and one using the GCS back end. The registry container can accept these configuration values as environment variables or read from a file, both of which Kubernetes manifests support, but how should I populate these values for each environment? Essentially what I want is a way to have a manifest that looks something like this, where the $() syntax is variable interpolation that would happen on the server when the manifest is submitted: --- apiVersion: v1 kind: Pod metadata: name: foo spec: containers: - name: foo image: foo env: - name: bar value: $(etcdctl get /path/to/bar) I could write templates that use placeholders for the values and then process the template, pulling in real values from some external source, to produce the final manifest that is actually submitted to the cluster. However, I thought I'd ask first in case a tool that does this already exists, or there is some Kubernetes-blessed way of doing this that I'm not aware of. It certainly seems like something that many people will need to do in order to use Kubernetes. Other ideas I've had include using some combination of etcd, confd, and scripts on the host nodes, but this starts to get into the realm of host configuration management which I want to avoid at pretty much all costs. I'm using CoreOS and the hosts are provisioned entirely through coreos-cloudinit. In other words, nothing is manipulated on the host system that is not defined at the time the node is created, so traditional configuration management tools like Ansible are out. I'm aware of Kubernetes's secrets system, but some of these variable values could be quite large, and most of them are not secrets.
The containers in a pod continue to run, yes. If the master components are not available, no new pods or services can be launched, but existing ones continue to operate. Networking for existing services also keeps working, because the proxy rules that kube-proxy has already programmed on each node stay in place; they just won't receive updates until the master is reachable again. Note that this behaviour is also one of the good practices and lessons learned from Borg.
###Instruction: If kube-master or etcd service down in my kubernetes cluster, can my Pod/Service still work? Also, if the network is still work? ### Response: The containers in a pod continue to run, yes. If the master components are not available this means no new pods/services can be launched, but existing ones continue to operate. Note that this behaviour is also one of the good practices and lessons learned from Borg.
If kube-master or etcd service down in my kubernetes cluster, can my Pod/Service still work? Also, if the network is still work?
Two options: Talk to the Kubernetes API to get the endpoints for the service. (either with kubectl get endpoints SVCNAME or by GETing the /api/v1/namespaces/{namespace}/endpoints/{svcname} path on the apiserver) Less likely to be of use, but if you create a service without a cluster IP, the DNS for that service will return a list of the IP addresses of the backing pods rather than a virtual IP address. The IPs returned in either case are the IP addresses of all the pods backing the service.
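For the second option, a "service without a cluster IP" (a headless service) is declared by setting clusterIP to None; a sketch with made-up names for the proxy setup in the question:
apiVersion: v1
kind: Service
metadata:
  name: proxy-pods
spec:
  clusterIP: None
  selector:
    app: my-proxy
  ports:
  - port: 80
A DNS lookup of proxy-pods from inside the cluster then returns the pod IPs directly, and kubectl get endpoints proxy-pods shows the same list.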
###Instruction: I have a proxy service that wraps 3 pods (say pod A, pod B, pod C). Some container inside pod A needs to get virtual IPs of other two pods. How can I do this? ### Response: Two options: Talk to the Kubernetes API to get the endpoints for the service. (either with kubectl get endpoints SVCNAME or by GETing the /api/v1/namespaces/{namespace}/endpoints/{svcname} path on the apiserver) Less likely to be of use, but if you create a service without a cluster IP, the DNS for that service will return a list of the IP addresses of the backing pods rather than a virtual IP address. The IPs returned in either case are the IP addresses of all the pods backing the service.
I have a proxy service that wraps 3 pods (say pod A, pod B, pod C). Some container inside pod A needs to get virtual IPs of other two pods. How can I do this?
docker compose is just a way to declare the containers you have to start: it has no notion of node or cluster (unless it launches swarm masters and swarm nodes, but that is docker swarm). Update July 2016, 7 months later: docker 1.12 blurs the lines and includes a "swarm mode". It is vastly different from kubernetes, a Google tool to manage thousands of containers grouped as Pods, over tens or hundreds of machines. A Kubernetes Pod would be closer to a docker swarm: Imagine individual Docker containers as packing boxes. The boxes that need to stay together because they need to go to the same location or have an affinity to each other are loaded into shipping containers. In this analogy, the packing boxes are Docker containers, and the shipping containers are Kubernetes pods. As commented below by ealeon: I think pod is equivalent to compose except that kubernetes can orchestrate pods, whereas there is nothing orchestrating compose unless it is used with swarm like you've mentioned. You can launch kubernetes commands with docker-compose, by the way. In terms of how Kubernetes differs from other container management systems out there, such as Swarm, Kubernetes is the third iteration of cluster managers that Google has developed. You can hear more about kubernetes in episode #3 of the Google Cloud Platform Podcast. While it is true both can create a multi-container application, a Pod also serves as a unit of deployment and horizontal scaling/replication, which docker compose does not provide. Plus, you don't create a pod directly, but use controllers (like replication controllers). A Pod lives within a larger platform which offers co-location (co-scheduling), fate sharing, coordinated replication, resource sharing, and dependency management. Docker-compose lives... on its own, with its docker-compose.yml file.
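Purely to make the structural difference concrete (and not as a recommended way to group these particular images), compare a minimal compose file with a pod manifest: compose declares containers to start on one Docker host, while the pod describes a co-scheduled group that controllers can then replicate across a cluster.
# docker-compose.yml
web:
  image: nginx
  ports:
    - "80:80"
cache:
  image: redis

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: cache
    image: redis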
###Instruction: Both Kubernetes Pods and the results of Docker Compose scripts (henceforth: "Compositions") appear to result in clusters of virtual computers. The computers in the clusters can all be configured to talk to each other so you can write a single script that mirrors your entire end-to-end production config. A single script allows you to deploy that cluster on any container-host. Given the similarities between the two systems, I'm struggling to understand what the differences are between the two. Why would I choose one over the other? Are they mutually exclusive systems or can I run compositions in kubernetes. Are there any critical considerations that need to be accounted for when designing for a container system? If I am designing the architecture for a site today and would like to try and build a container-based system. What are the highest priority things I should design for? (as compared to building on a single machine system) ### Response: docker compose is just a way to declare the container you have to start: it has no notion of node or cluster, unless it launches swarm master and swarm nodes, but that is docker swarm) Update July 2016, 7 months later: docker 1.12 blurs the lines and includes a "swarm mode". It is vastly different from kubernetes, a google tool to manage thousands of containers groups as Pod, over tens or hundreds of machines. A Kubernetes Pod would be closer from a docker swarm: Imagine individual Docker containers as packing boxes. The boxes that need to stay together because they need to go to the same location or have an affinity to each other are loaded into shipping containers. In this analogy, the packing boxes are Docker containers, and the shipping containers are Kubernetes pods. As commented below by ealeon: I think pod is equivalent to compose except that kubernetes can orchestrated pods, whereas there is nothing orchestrating compose unless it is used with swarm like you've mentioned. You can launch kubernetes commands with docker-compose by the way. In terms of how Kubernetes differs from other container management systems out there, such as Swarm, Kubernetes is the third iteration of cluster managers that Google has developed. You can hear more about kubernetes in the episode #3 of Google Cloud Platform Podcast. While it is true both can create a multi-container application, a Pod also serves as a unit of deployment and horizontal scaling/replication, which docker compose does not provide. Plus, you don't create a pod directly, but use controllers (like replication controllers). POD lives within a larger platform which offers Co-location (co-scheduling), fate sharing, coordinated replication, resource sharing, and dependency management. Docker-compose lives... on its own, with its docker-compose.yml file
Both Kubernetes Pods and the results of Docker Compose scripts (henceforth: "Compositions") appear to result in clusters of virtual computers. The computers in the clusters can all be configured to talk to each other so you can write a single script that mirrors your entire end-to-end production config. A single script allows you to deploy that cluster on any container-host. Given the similarities between the two systems, I'm struggling to understand what the differences are between the two. Why would I choose one over the other? Are they mutually exclusive systems or can I run compositions in kubernetes. Are there any critical considerations that need to be accounted for when designing for a container system? If I am designing the architecture for a site today and would like to try and build a container-based system. What are the highest priority things I should design for? (as compared to building on a single machine system)
After a lot of experimentation I found that adding the script to the Dockerfile: ADD prod_start.sh /backend/prod_start.sh and then calling the command like this in the YAML controller file: command: ['/bin/sh', './prod_start.sh'] fixed it.
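An alternative sketch is to make the script executable at build time so it can be the command on its own; this assumes prod_start.sh starts with a proper shebang line:
# Dockerfile
ADD prod_start.sh /backend/prod_start.sh
RUN chmod +x /backend/prod_start.sh

# web-controller.yaml
command: ['/backend/prod_start.sh']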
###Instruction: I am trying to run a shell script at the start of a docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run prod_start.sh script at the start of the container (I don't want to put it as part of the Dockerfile though). The current setup fails to start the container with Command not found file ./prod_start.sh does not exist. Any idea how to fix this? app/ ... Dockerfile prod_start.sh web-controller.yaml Gemfile ... Dockerfile FROM ruby RUN mkdir /backend WORKDIR /backend ADD Gemfile /backend/Gemfile ADD Gemfile.lock /backend/Gemfile.lock RUN bundle install web-controller.yaml apiVersion: v1 kind: ReplicationController metadata: name: backend labels: app: myapp tier: backend spec: replicas: 1 selector: app: myapp tier: backend template: metadata: labels: app: myapp tier: backend spec: volumes: - name: secrets secret: secretName: secrets containers: - name: my-backend command: ['./prod_start.sh'] image: gcr.io/myapp-id/myapp-backend:v1 volumeMounts: - name: secrets mountPath: /etc/secrets readOnly: true resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 name: http-server ### Response: After a lot of experimentations I believe adding the script to the Dockerfile: ADD prod_start.sh /backend/prod_start.sh And then calling the command like this in the yaml controller file: command: ['/bin/sh', './prod_start.sh'] Fixed it.
I am trying to run a shell script at the start of a docker container running on Google Cloud Containers using Kubernetes. The structure of my app directory is something like this. I'd like to run prod_start.sh script at the start of the container (I don't want to put it as part of the Dockerfile though). The current setup fails to start the container with Command not found file ./prod_start.sh does not exist. Any idea how to fix this? app/ ... Dockerfile prod_start.sh web-controller.yaml Gemfile ... Dockerfile FROM ruby RUN mkdir /backend WORKDIR /backend ADD Gemfile /backend/Gemfile ADD Gemfile.lock /backend/Gemfile.lock RUN bundle install web-controller.yaml apiVersion: v1 kind: ReplicationController metadata: name: backend labels: app: myapp tier: backend spec: replicas: 1 selector: app: myapp tier: backend template: metadata: labels: app: myapp tier: backend spec: volumes: - name: secrets secret: secretName: secrets containers: - name: my-backend command: ['./prod_start.sh'] image: gcr.io/myapp-id/myapp-backend:v1 volumeMounts: - name: secrets mountPath: /etc/secrets readOnly: true resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 name: http-server
There can only be a single entrypoint in a container... if you want to run multiple commands like that, make bash the entrypoint and pass all the other commands as a single argument for bash to run: command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]
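The same thing can also be written with the shell as command and the script as args; Kubernetes concatenates them into the same final argument list:
command: ["/bin/bash", "-c"]
args: ["command1 arg1 arg2 && command2 arg3 && command3 arg4"]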
###Instruction: I want send multiple entrypoint commands to a Docker container in the command tag of kubernetes config file. apiVersion: v1 kind: Pod metadata: name: hello-world spec: # specification of the pod’s contents restartPolicy: Never containers: - name: hello image: "ubuntu:14.04" command: ["command1 arg1 arg2 && command2 arg3 && command3 arg 4"] But it seems like it does not work. What is the correct format of sending multiple commands in the command tag? ### Response: There can only be a single entrypoint in a container... if you want to run multiple commands like that, make bash be the entry point, and make all the other commands be an argument for bash to run: command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]
I want send multiple entrypoint commands to a Docker container in the command tag of kubernetes config file. apiVersion: v1 kind: Pod metadata: name: hello-world spec: # specification of the pod’s contents restartPolicy: Never containers: - name: hello image: "ubuntu:14.04" command: ["command1 arg1 arg2 && command2 arg3 && command3 arg 4"] But it seems like it does not work. What is the correct format of sending multiple commands in the command tag?
The doc you link to describes the methodology used (specifically the master VM size). The cluster is created in Google Compute Engine using the default cluster/kube-up.sh script from the repository, with all the default settings implied by that. How large is the master that you're using? If it's really small, it's possible that it could struggle with a lot of nodes and pods.
###Instruction: From the performance test report we can find that the kubernetes can support 100 nodes. To do the same test, I have set up a 100 nodes kubernetes cluster, but the kube-apiserver became slow when the cluster is set up. That meant when I typed kubectl get nodes, it suspended and can not get any respond. To find the reason, I chech the connections of kube-apiserver and found there were about 660+ ESTABLISHED connections on port 8080 (I used the insecure port of apiserver), and when I stop some(about 20) slaves, the apiserver recovered common. So I thought the reason for kube-apiserver becoming slow is the too large concurrency. So I am wonder how the Google set up a 100 nodes cluster? Is there something wrong in my work? PS: The --max-requests-inflight of kube-apiserver has been set to 0. ### Response: The doc you link to describes the methodology used (specifically the master VM size). The cluster is created in Google Compute Engine using the default cluster/kube-up.sh script from the repository, with all the default settings implied by that. How large is the master that you're using? If it's really small, it's possible that it could struggle with a lot of nodes and pods.
From the performance test report we can find that the kubernetes can support 100 nodes. To do the same test, I have set up a 100 nodes kubernetes cluster, but the kube-apiserver became slow when the cluster is set up. That meant when I typed kubectl get nodes, it suspended and can not get any respond. To find the reason, I chech the connections of kube-apiserver and found there were about 660+ ESTABLISHED connections on port 8080 (I used the insecure port of apiserver), and when I stop some(about 20) slaves, the apiserver recovered common. So I thought the reason for kube-apiserver becoming slow is the too large concurrency. So I am wonder how the Google set up a 100 nodes cluster? Is there something wrong in my work? PS: The --max-requests-inflight of kube-apiserver has been set to 0.
For future reference, the output of iptables-save is much easier to read (to me, anyway). I don't see anything missing here. KUBE-SERVICES traps 10.116.0.2 port 53/UDP and passes it to KUBE-SVC-TCOU7JCQXEZGVUNU; KUBE-SVC-TCOU7JCQXEZGVUNU has just one endpoint, so it jumps to KUBE-SEP-GSM3BZTEXEBWDXPN; KUBE-SEP-GSM3BZTEXEBWDXPN DNATs to 10.116.160.7 port 53/UDP. If you assert that 10.116.160.7 works while 10.116.0.2 does not, that is strange indeed. It suggests that the iptables rules are not triggering at all. Are you testing from the node itself or from a container? What networking are you using? L3 (underlay?) Flannel? OVS? Something else? What cloud provider (if any)? First step to debug, run: for c in PREROUTING OUTPUT; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done That will log any packet iptables sees that is headed to your service IP. Now look at dmesg.
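Once you are done, the LOG rules inserted above can be removed with the matching -D form, and iptables-save gives the compact view mentioned at the top:
for c in PREROUTING OUTPUT; do iptables -t nat -D $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done
iptables-save -t nat | grep 10.116.0.2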
###Instruction: I have Kubernetes: v.1.1.1 iptables v1.4.21 kernel: 4.2.0-18-generic which come with Ubuntu wily Networking is done via L2 VLAN terminated on switch no cloud provider what I do I'm experimenting with iptables mode for kube-proxy. I have enabled it with --proxy_mode=iptables argument. It seems some rule is missing: iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 8 packets, 459 bytes) pkts bytes target prot opt in out source destination 2116 120K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */ Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 2 packets, 120 bytes) pkts bytes target prot opt in out source destination 718 45203 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */ Chain POSTROUTING (policy ACCEPT 5 packets, 339 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351 Chain KUBE-NODEPORTS (1 references) pkts bytes target prot opt in out source destination 0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195 MARK set 0x4d415351 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195 0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873 MARK set 0x4d415351 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873 Chain KUBE-SEP-5IXMK7UWPGVTWOJ7 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.8 0.0.0.0/0 /* mngbox/jumpbox:ssh */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp to:10.116.160.8:22 Chain KUBE-SEP-BNPLX5HQYOZINWEQ (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ tcp to:10.116.161.6:8086 Chain KUBE-SEP-CJMHKLXPTJLTE3OP (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.254.2 0.0.0.0/0 /* default/kubernetes: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */ tcp to:10.116.254.2:6443 Chain KUBE-SEP-GSM3BZTEXEBWDXPN (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns */ MARK set 0x4d415351 0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.116.160.7:53 Chain KUBE-SEP-OAYOAJINXRPUQDA3 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.116.160.7:53 Chain KUBE-SEP-PJJZDQNXDGWM7MU6 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.5 0.0.0.0/0 /* default/docker-registry-fe:tcp */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp to:10.116.160.5:443 Chain KUBE-SEP-RWODGLKOVWXGOHUR (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 
0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ tcp to:10.116.161.6:8083 Chain KUBE-SEP-WE3Z7KMHA6KPJWKK (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-grafana: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */ tcp to:10.116.161.6:8080 Chain KUBE-SEP-YBQVM4LA4YMMZIWH (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.3 0.0.0.0/0 /* kube-system/monitoring-heapster: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */ tcp to:10.116.161.3:8082 Chain KUBE-SEP-YMZS7BLP4Y6MWTX5 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.9 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ tcp to:10.116.160.9:5000 Chain KUBE-SEP-ZDOOYAKDERKR43R3 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.10 0.0.0.0/0 /* default/kibana-logging: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */ tcp to:10.116.160.10:5601 Chain KUBE-SERVICES (2 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SVC-JRXTEHDDTAFMSEAS tcp -- * * 0.0.0.0/0 10.116.0.48 /* kube-system/monitoring-grafana: cluster IP */ tcp dpt:80 0 0 KUBE-SVC-CK6HVV5A27TDFNIA tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:api cluster IP */ tcp dpt:8086 0 0 KUBE-SVC-DKEW3YDJFV3YJLS2 tcp -- * * 0.0.0.0/0 10.116.0.6 /* infra/docker-registry-backend:docker-registry-backend cluster IP */ tcp dpt:5000 0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53 0 0 KUBE-SVC-WEHLQ23XZWSA5ZX3 tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:http cluster IP */ tcp dpt:8083 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.1.142 /* default/docker-registry-fe:tcp cluster IP */ tcp dpt:443 0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 MARK set 0x4d415351 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 ADDRTYPE match dst-type LOCAL 0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53 0 0 KUBE-SVC-7IHGTXJ4CF2KVXJZ tcp -- * * 0.0.0.0/0 10.116.1.126 /* kube-system/monitoring-heapster: cluster IP */ tcp dpt:80 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.1.175 /* mngbox/jumpbox:ssh cluster IP */ tcp dpt:2345 0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 MARK set 0x4d415351 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 PHYSDEV match ! 
--physdev-is-in ADDRTYPE match src-type !LOCAL 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 ADDRTYPE match dst-type LOCAL 0 0 KUBE-SVC-6N4SJQIF3IX3FORG tcp -- * * 0.0.0.0/0 10.116.0.1 /* default/kubernetes: cluster IP */ tcp dpt:443 0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.0.233 /* default/kibana-logging: cluster IP */ tcp dpt:8888 0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 MARK set 0x4d415351 0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL 0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 ADDRTYPE match dst-type LOCAL 0 0 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-CJMHKLXPTJLTE3OP all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */ Chain KUBE-SVC-7IHGTXJ4CF2KVXJZ (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-YBQVM4LA4YMMZIWH all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */ Chain KUBE-SVC-B6ZEWWY2BII6JG2L (3 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-ZDOOYAKDERKR43R3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */ Chain KUBE-SVC-CK6HVV5A27TDFNIA (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-BNPLX5HQYOZINWEQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ Chain KUBE-SVC-DKEW3YDJFV3YJLS2 (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-YMZS7BLP4Y6MWTX5 all -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-OAYOAJINXRPUQDA3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ Chain KUBE-SVC-GLKZVFIDXOFHLJLC (4 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-5IXMK7UWPGVTWOJ7 all -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ Chain KUBE-SVC-JRXTEHDDTAFMSEAS (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-WE3Z7KMHA6KPJWKK all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */ Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-GSM3BZTEXEBWDXPN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ Chain KUBE-SVC-WEHLQ23XZWSA5ZX3 (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-RWODGLKOVWXGOHUR all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ Chain KUBE-SVC-XZFGDLM7GMJHZHOY (4 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-PJJZDQNXDGWM7MU6 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ When I do request to the service ip, in my case it's 10.116.0.2 I got an error ;; connection timed out; no servers could be reached while when I do request to the 10.116.160.7 server it's working fine. I can see that traffic is not directed to kube-proxy rules at all, so there is something missing probably. 
I will highly appreciate any hint about missing rule EDIT Ive updated my initial request with missing information requested by thokin, he pointed to the really good way to debug the iptables rules for kube-proxy, and I could identify my problem with: for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.160.7 -j LOG --log-prefix "DBG@$c: "; done for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done Then I've executed following commands: # nslookup kubernetes.default.svc.psc01.cluster 10.116.160.7 Server: 10.116.160.7 Address: 10.116.160.7#53 Name: kubernetes.default.svc.psc01.cluster Address: 10.116.0.1 # nslookup kubernetes.default.svc.psc01.cluster 10.116.0.2 ;; connection timed out; no servers could be reached As a result I've got different "source" address and outgoing interface: [701768.263847] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=12436 PROTO=UDP SPT=54501 DPT=53 LEN=62 [702620.454211] DBG@OUTPUT: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62 [702620.454224] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62 [702626.318258] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62 [702626.318263] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62 [702626.318266] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62 [702626.318270] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62 [702626.318284] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.250.252 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62 So, by adding the route ip route add 10.116.0.0/23 dev docker0 Now it's working fine! ### Response: For future, the results of iptables-save are much easier to read (to me anyway). I don't see anything missing here. KUBE-SERVICES traps 10.116.0.2 port 53/UDP and passes it to KUBE-SVC-TCOU7JCQXEZGVUNU KUBE-SVC-TCOU7JCQXEZGVUNU has just one endpoint so jumps to KUBE-SEP-GSM3BZTEXEBWDXPN KUBE-SEP-GSM3BZTEXEBWDXPN DNATs to 10.116.160.7 port 53/UDP If you assert that 10.116.160.7 works while 10.116.0.2 does not, that is strange indeed. It suggests that the iptables rules are not triggering at all. Are you testing from the node itself or from a container? What networking are you using? L3 (underlay?) Flannel? OVS? Something else? What cloud provider (if any)? First step to debug: run: for c in PREROUTING OUTPUT; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done That will log any packets that iptables sees to your service IP. Now look at dmesg.
I have Kubernetes: v.1.1.1 iptables v1.4.21 kernel: 4.2.0-18-generic which come with Ubuntu wily Networking is done via L2 VLAN terminated on switch no cloud provider what I do I'm experimenting with iptables mode for kube-proxy. I have enabled it with --proxy_mode=iptables argument. It seems some rule is missing: iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 8 packets, 459 bytes) pkts bytes target prot opt in out source destination 2116 120K KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */ Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 2 packets, 120 bytes) pkts bytes target prot opt in out source destination 718 45203 KUBE-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */ Chain POSTROUTING (policy ACCEPT 5 packets, 339 bytes) pkts bytes target prot opt in out source destination 0 0 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351 Chain KUBE-NODEPORTS (1 references) pkts bytes target prot opt in out source destination 0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195 MARK set 0x4d415351 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp dpt:31195 0 0 MARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873 MARK set 0x4d415351 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp dpt:30873 Chain KUBE-SEP-5IXMK7UWPGVTWOJ7 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.8 0.0.0.0/0 /* mngbox/jumpbox:ssh */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ tcp to:10.116.160.8:22 Chain KUBE-SEP-BNPLX5HQYOZINWEQ (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ tcp to:10.116.161.6:8086 Chain KUBE-SEP-CJMHKLXPTJLTE3OP (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.254.2 0.0.0.0/0 /* default/kubernetes: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */ tcp to:10.116.254.2:6443 Chain KUBE-SEP-GSM3BZTEXEBWDXPN (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns */ MARK set 0x4d415351 0 0 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ udp to:10.116.160.7:53 Chain KUBE-SEP-OAYOAJINXRPUQDA3 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.7 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ tcp to:10.116.160.7:53 Chain KUBE-SEP-PJJZDQNXDGWM7MU6 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.5 0.0.0.0/0 /* default/docker-registry-fe:tcp */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ tcp to:10.116.160.5:443 Chain KUBE-SEP-RWODGLKOVWXGOHUR (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 
0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ tcp to:10.116.161.6:8083 Chain KUBE-SEP-WE3Z7KMHA6KPJWKK (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.6 0.0.0.0/0 /* kube-system/monitoring-grafana: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */ tcp to:10.116.161.6:8080 Chain KUBE-SEP-YBQVM4LA4YMMZIWH (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.161.3 0.0.0.0/0 /* kube-system/monitoring-heapster: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */ tcp to:10.116.161.3:8082 Chain KUBE-SEP-YMZS7BLP4Y6MWTX5 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.9 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ tcp to:10.116.160.9:5000 Chain KUBE-SEP-ZDOOYAKDERKR43R3 (1 references) pkts bytes target prot opt in out source destination 0 0 MARK all -- * * 10.116.160.10 0.0.0.0/0 /* default/kibana-logging: */ MARK set 0x4d415351 0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */ tcp to:10.116.160.10:5601 Chain KUBE-SERVICES (2 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SVC-JRXTEHDDTAFMSEAS tcp -- * * 0.0.0.0/0 10.116.0.48 /* kube-system/monitoring-grafana: cluster IP */ tcp dpt:80 0 0 KUBE-SVC-CK6HVV5A27TDFNIA tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:api cluster IP */ tcp dpt:8086 0 0 KUBE-SVC-DKEW3YDJFV3YJLS2 tcp -- * * 0.0.0.0/0 10.116.0.6 /* infra/docker-registry-backend:docker-registry-backend cluster IP */ tcp dpt:5000 0 0 KUBE-SVC-TCOU7JCQXEZGVUNU udp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns cluster IP */ udp dpt:53 0 0 KUBE-SVC-WEHLQ23XZWSA5ZX3 tcp -- * * 0.0.0.0/0 10.116.0.188 /* kube-system/monitoring-influxdb:http cluster IP */ tcp dpt:8083 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.1.142 /* default/docker-registry-fe:tcp cluster IP */ tcp dpt:443 0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 MARK set 0x4d415351 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL 0 0 KUBE-SVC-XZFGDLM7GMJHZHOY tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/docker-registry-fe:tcp external IP */ tcp dpt:443 ADDRTYPE match dst-type LOCAL 0 0 KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- * * 0.0.0.0/0 10.116.0.2 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53 0 0 KUBE-SVC-7IHGTXJ4CF2KVXJZ tcp -- * * 0.0.0.0/0 10.116.1.126 /* kube-system/monitoring-heapster: cluster IP */ tcp dpt:80 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.1.175 /* mngbox/jumpbox:ssh cluster IP */ tcp dpt:2345 0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 MARK set 0x4d415351 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 PHYSDEV match ! 
--physdev-is-in ADDRTYPE match src-type !LOCAL 0 0 KUBE-SVC-GLKZVFIDXOFHLJLC tcp -- * * 0.0.0.0/0 10.116.254.3 /* mngbox/jumpbox:ssh external IP */ tcp dpt:2345 ADDRTYPE match dst-type LOCAL 0 0 KUBE-SVC-6N4SJQIF3IX3FORG tcp -- * * 0.0.0.0/0 10.116.0.1 /* default/kubernetes: cluster IP */ tcp dpt:443 0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.0.233 /* default/kibana-logging: cluster IP */ tcp dpt:8888 0 0 MARK tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 MARK set 0x4d415351 0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type !LOCAL 0 0 KUBE-SVC-B6ZEWWY2BII6JG2L tcp -- * * 0.0.0.0/0 10.116.254.3 /* default/kibana-logging: external IP */ tcp dpt:8888 ADDRTYPE match dst-type LOCAL 0 0 KUBE-NODEPORTS all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-CJMHKLXPTJLTE3OP all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes: */ Chain KUBE-SVC-7IHGTXJ4CF2KVXJZ (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-YBQVM4LA4YMMZIWH all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-heapster: */ Chain KUBE-SVC-B6ZEWWY2BII6JG2L (3 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-ZDOOYAKDERKR43R3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/kibana-logging: */ Chain KUBE-SVC-CK6HVV5A27TDFNIA (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-BNPLX5HQYOZINWEQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:api */ Chain KUBE-SVC-DKEW3YDJFV3YJLS2 (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-YMZS7BLP4Y6MWTX5 all -- * * 0.0.0.0/0 0.0.0.0/0 /* infra/docker-registry-backend:docker-registry-backend */ Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-OAYOAJINXRPUQDA3 all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns-tcp */ Chain KUBE-SVC-GLKZVFIDXOFHLJLC (4 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-5IXMK7UWPGVTWOJ7 all -- * * 0.0.0.0/0 0.0.0.0/0 /* mngbox/jumpbox:ssh */ Chain KUBE-SVC-JRXTEHDDTAFMSEAS (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-WE3Z7KMHA6KPJWKK all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-grafana: */ Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-GSM3BZTEXEBWDXPN all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/kube-dns:dns */ Chain KUBE-SVC-WEHLQ23XZWSA5ZX3 (1 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-RWODGLKOVWXGOHUR all -- * * 0.0.0.0/0 0.0.0.0/0 /* kube-system/monitoring-influxdb:http */ Chain KUBE-SVC-XZFGDLM7GMJHZHOY (4 references) pkts bytes target prot opt in out source destination 0 0 KUBE-SEP-PJJZDQNXDGWM7MU6 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/docker-registry-fe:tcp */ When I do request to the service ip, in my case it's 10.116.0.2 I got an error ;; connection timed out; no servers could be reached while when I do request to the 10.116.160.7 server it's working fine. I can see that traffic is not directed to kube-proxy rules at all, so there is something missing probably. 
I will highly appreciate any hint about the missing rule.

EDIT

I've updated my initial request with the missing information requested by thokin. He pointed to a really good way to debug the iptables rules for kube-proxy, and I could identify my problem with:

for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.160.7 -j LOG --log-prefix "DBG@$c: "; done
for c in PREROUTING OUTPUT POSTROUTING; do iptables -t nat -I $c -d 10.116.0.2 -j LOG --log-prefix "DBG@$c: "; done

Then I've executed the following commands:

# nslookup kubernetes.default.svc.psc01.cluster 10.116.160.7
Server:     10.116.160.7
Address:    10.116.160.7#53

Name:   kubernetes.default.svc.psc01.cluster
Address: 10.116.0.1

# nslookup kubernetes.default.svc.psc01.cluster 10.116.0.2
;; connection timed out; no servers could be reached

As a result I've got a different "source" address and outgoing interface:

[701768.263847] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=12436 PROTO=UDP SPT=54501 DPT=53 LEN=62
[702620.454211] DBG@OUTPUT: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62
[702620.454224] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.176.1 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=22733 PROTO=UDP SPT=28704 DPT=53 LEN=62
[702626.318258] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318263] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318266] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318270] DBG@OUTPUT: IN= OUT=bond1.300 SRC=10.116.250.252 DST=10.116.0.2 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62
[702626.318284] DBG@POSTROUTING: IN= OUT=docker0 SRC=10.116.250.252 DST=10.116.160.7 LEN=82 TOS=0x00 PREC=0x00 TTL=64 ID=30608 PROTO=UDP SPT=39443 DPT=53 LEN=62

So the fix was adding the route:

ip route add 10.116.0.0/23 dev docker0

Now it's working fine!
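A side note on the debugging trick above (my addition, assuming the LOG rules were added exactly as shown): iptables -I inserts the rules at the top of each chain and they stay there until removed, so once you are done it is worth deleting them again by repeating the same rule specification with -D.

for ip in 10.116.160.7 10.116.0.2; do
  for c in PREROUTING OUTPUT POSTROUTING; do
    iptables -t nat -D $c -d $ip -j LOG --log-prefix "DBG@$c: "
  done
done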
If someone reaches this question: external load balancer support is available in the latest Kubernetes version. Link to the documentation
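To make that concrete, here is a sketch that is not part of the original answer: with load balancer support you ask Kubernetes to provision the balancer in the service spec itself via "type": "LoadBalancer", instead of listing node IPs under publicIPs. Reusing the service from this question (with the v1 API instead of v1beta3), it would look roughly like this; on AWS this should create an ELB for the service, but check the linked documentation for the behaviour of your exact version.

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "api"
  },
  "spec": {
    "type": "LoadBalancer",
    "ports": [
      {
        "name": "default",
        "port": 80,
        "targetPort": 80
      }
    ],
    "selector": {
      "app": "api"
    }
  }
}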
###Instruction: I have a kubernetes (0.15) cluster running on CoreOS instances on Amazon EC2. When I create a service that I want to be publicly accessible, I currently add some private IP addresses of the EC2 instances to the service description like so:

{
    "kind": "Service",
    "apiVersion": "v1beta3",
    "metadata": {
        "name": "api"
    },
    "spec": {
        "ports": [
            {
                "name": "default",
                "port": 80,
                "targetPort": 80
            }
        ],
        "publicIPs": ["172.1.1.15", "172.1.1.16"],
        "selector": {
            "app": "api"
        }
    }
}

Then I can add these IPs to an ELB load balancer and route traffic to those machines. But for this to work I need to maintain the list of all the machines in my cluster in all the services that I am running, which feels wrong. What's the currently recommended way to solve this? If I know the PortalIP of a service, is there a way to make it routable in the AWS VPC infrastructure? Is it possible to assign external static (Elastic) IPs to Services and have those routed? (I know of createExternalLoadBalancer, but that does not seem to support AWS yet)
### Response: If someone reaches this question: external load balancer support is available in the latest Kubernetes version. Link to the documentation
I have a kubernetes (0.15) cluster running on CoreOS instances on Amazon EC2. When I create a service that I want to be publicly accessible, I currently add some private IP addresses of the EC2 instances to the service description like so:

{
    "kind": "Service",
    "apiVersion": "v1beta3",
    "metadata": {
        "name": "api"
    },
    "spec": {
        "ports": [
            {
                "name": "default",
                "port": 80,
                "targetPort": 80
            }
        ],
        "publicIPs": ["172.1.1.15", "172.1.1.16"],
        "selector": {
            "app": "api"
        }
    }
}

Then I can add these IPs to an ELB load balancer and route traffic to those machines. But for this to work I need to maintain the list of all the machines in my cluster in all the services that I am running, which feels wrong. What's the currently recommended way to solve this? If I know the PortalIP of a service, is there a way to make it routable in the AWS VPC infrastructure? Is it possible to assign external static (Elastic) IPs to Services and have those routed? (I know of createExternalLoadBalancer, but that does not seem to support AWS yet)
I just stumbled across the same thing and found out after some digging that they actually changed the volume mount syntax. Based on that knowledge I created a PR to update the documentation. See https://github.com/kubernetes/kubernetes/pull/17958 for tracking that and more info; follow the link to the bug and the original change, which doesn't include the doc update. (SO prevents me from posting more than two links, apparently.) If that still doesn't do the trick for you (as is the case for me), it's probably because of https://stackoverflow.com/a/32960312/3212182, which will be fixed in one of the next releases, I guess. At least I can't see it in the latest release notes.
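As an additional sanity check (my suggestion, not part of the original answer): once the pod is scheduled, you can verify from outside the container whether the EBS volume was actually attached and mounted, using the pod name and mountPath from the question (nginx, /test-ebs).

# any attach/mount errors show up in the pod events
kubectl describe pod nginx

# is something mounted at the declared mountPath inside the container?
kubectl exec nginx -- df -h /test-ebs
kubectl exec nginx -- mount | grep test-ebs

If the attach failed, the events from kubectl describe usually tell you why (wrong availability zone, wrong volume ID format, and so on).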
###Instruction: I created a volume using the following command.

aws ec2 create-volume --size 10 --region us-east-1 --availability-zone us-east-1c --volume-type gp2

Then I used the file below to create a pod that uses the volume. But when I log in to the pod, I don't see the volume. Is there something that I might be doing wrong? Did I miss a step somewhere? Thanks for any insights.

---
kind: "Pod"
apiVersion: "v1"
metadata:
  name: "nginx"
  labels:
    name: "nginx"
spec:
  containers:
    - name: "nginx"
      image: "nginx"
      volumeMounts:
        - mountPath: /test-ebs
          name: test-volume
  volumes:
    - name: test-volume
      # This AWS EBS volume must already exist.
      awsElasticBlockStore:
        volumeID: aws://us-east-1c/vol-8499707e
        fsType: ext4
### Response: I just stumbled across the same thing and found out after some digging that they actually changed the volume mount syntax. Based on that knowledge I created a PR to update the documentation. See https://github.com/kubernetes/kubernetes/pull/17958 for tracking that and more info; follow the link to the bug and the original change, which doesn't include the doc update. (SO prevents me from posting more than two links, apparently.) If that still doesn't do the trick for you (as is the case for me), it's probably because of https://stackoverflow.com/a/32960312/3212182, which will be fixed in one of the next releases, I guess. At least I can't see it in the latest release notes.
I created a volume using the following command.

aws ec2 create-volume --size 10 --region us-east-1 --availability-zone us-east-1c --volume-type gp2

Then I used the file below to create a pod that uses the volume. But when I log in to the pod, I don't see the volume. Is there something that I might be doing wrong? Did I miss a step somewhere? Thanks for any insights.

---
kind: "Pod"
apiVersion: "v1"
metadata:
  name: "nginx"
  labels:
    name: "nginx"
spec:
  containers:
    - name: "nginx"
      image: "nginx"
      volumeMounts:
        - mountPath: /test-ebs
          name: test-volume
  volumes:
    - name: test-volume
      # This AWS EBS volume must already exist.
      awsElasticBlockStore:
        volumeID: aws://us-east-1c/vol-8499707e
        fsType: ext4
Kubernetes is a set of daemons/binaries:

kube-apiserver (AKA the master)
kubelet (start/stop containers, sync conf.)
kube-scheduler (resources manager)
kube-controller-manager (monitor RC, and maintain the desired state)
kube-proxy (expose services on each node)
kubectl (CLI)

The hyperkube binary is an all-in-one binary (in a way similar to busybox), combining all the previously separate binaries. The following command:

hyperkube kubelet \
  --api-servers=http://localhost:8080 \
  --v=2 \
  --address=0.0.0.0 \
  --enable-server \
  --hostname-override=127.0.0.1 \
  --config=/etc/kubernetes/manifests-multi \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local

runs the daemon kubelet.
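To make the all-in-one point concrete, here is an illustrative sketch that is not from the guide (subcommand and flag names differ between Kubernetes versions, so check hyperkube --help for yours): the same binary starts whichever component you name as its first argument.

# API server
hyperkube apiserver --etcd-servers=http://localhost:4001 --service-cluster-ip-range=10.0.0.0/16

# controller manager and scheduler
hyperkube controller-manager --master=http://localhost:8080
hyperkube scheduler --master=http://localhost:8080

# service proxy
hyperkube proxy --master=http://localhost:8080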
###Instruction: I am trying to set up Kubernetes in AWS, following the guide at https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode I couldn't understand what is meant by hyperkube. Can someone please explain to me what it is and how it works? Another question I have is: while running the command

sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/dev:/dev \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    --pid=host \
    -d \
    gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
    /hyperkube kubelet \
    --api-servers=http://localhost:8080 \
    --v=2 --address=0.0.0.0 --enable-server \
    --hostname-override=127.0.0.1 \
    --config=/etc/kubernetes/manifests-multi \
    --cluster-dns=10.0.0.10 \
    --cluster-domain=cluster.local

it starts one pod by default. From the command documentation, it looks like it is getting the pod manifest from the --config=/etc/kubernetes/manifests-multi attribute. But this directory is not present on my host. Can somebody please tell me where it is getting this pod manifest from?
### Response: Kubernetes is a set of daemons/binaries:

kube-apiserver (AKA the master)
kubelet (start/stop containers, sync conf.)
kube-scheduler (resources manager)
kube-controller-manager (monitor RC, and maintain the desired state)
kube-proxy (expose services on each node)
kubectl (CLI)

The hyperkube binary is an all-in-one binary (in a way similar to busybox), combining all the previously separate binaries. The following command:

hyperkube kubelet \
  --api-servers=http://localhost:8080 \
  --v=2 \
  --address=0.0.0.0 \
  --enable-server \
  --hostname-override=127.0.0.1 \
  --config=/etc/kubernetes/manifests-multi \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local

runs the daemon kubelet.
I am trying to set up Kubernetes in AWS, following the guide at https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode I couldn't understand what is meant by hyperkube. Can someone please explain to me what it is and how it works? Another question I have is: while running the command

sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/dev:/dev \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    --pid=host \
    -d \
    gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
    /hyperkube kubelet \
    --api-servers=http://localhost:8080 \
    --v=2 --address=0.0.0.0 --enable-server \
    --hostname-override=127.0.0.1 \
    --config=/etc/kubernetes/manifests-multi \
    --cluster-dns=10.0.0.10 \
    --cluster-domain=cluster.local

it starts one pod by default. From the command documentation, it looks like it is getting the pod manifest from the --config=/etc/kubernetes/manifests-multi attribute. But this directory is not present on my host. Can somebody please tell me where it is getting this pod manifest from?
In order to use this UI, go to the saturnism/gcp-live-k8s-visualizer GitHub repo and follow the steps there.
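For orientation only, the setup in that repo roughly boils down to serving the visualizer through kubectl proxy; treat the commands below as a sketch and defer to the repo's README for the authoritative, current steps.

git clone https://github.com/saturnism/gcp-live-k8s-visualizer.git
kubectl proxy --www=gcp-live-k8s-visualizer
# then open http://localhost:8001/static/ in a browser

The visualizer also expects your pods, services and replication controllers to carry matching labels so it can draw the relationships between them; the README describes the exact labelling convention.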
###Instruction: I was going through a Kubernetes tutorial on YouTube and found the following UI, which demonstrates the pod and service arrangement of a Kubernetes cluster. How can I install this UI in my Kubernetes setup?
### Response: In order to use this UI, go to the saturnism/gcp-live-k8s-visualizer GitHub repo and follow the steps there.
I was going through a Kubernetes tutorial on YouTube and found the following UI, which demonstrates the pod and service arrangement of a Kubernetes cluster. How can I install this UI in my Kubernetes setup?
You can pass the --cluster= or --context= flags to kubectl to set the target for a single run. For example, if I have two clusters, "foo" and "bar", in my ~/.kube/config:

$ kubectl --cluster=foo get pods
NAME        READY     STATUS    RESTARTS   AGE
foo-ht1qh   1/1       Running   0          3h
foo-wf8f4   1/1       Running   0          3h
foo-yvgpd   1/1       Running   0          3h

vs

$ kubectl --cluster=bar get pods
NAME        READY     STATUS    RESTARTS   AGE
bar-de4h7   1/1       Running   0          9h
bar-c4g03   1/1       Running   0          9h
bar-2sprd   1/1       Running   0          9h
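Applied to the Jenkins scripts in the question, a sketch (the context, cluster, user and image names below are made up; use whatever matches your kubeconfig): instead of flipping the globally active gcloud configuration, define a kubectl context per environment once and reference it explicitly on every command, so concurrent jobs share no mutable state.

# one-time setup (names are illustrative)
kubectl config set-context test --cluster=test-cluster --user=test-admin
kubectl config set-context prod --cluster=prod-cluster --user=prod-admin

# in the Jenkins job, nothing global changes between invocations
kubectl --context=test rolling-update my-rc --image=gcr.io/my-project/my-image:v2

Since --context selects the cluster, user and namespace from the kubeconfig entry, two jobs targeting different environments can run at the same time without interfering.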
###Instruction: Background

We're using Jenkins to deploy a new version of a Kubernetes (k8s) replication controller to our test or prod cluster. The test and prod (k8s) clusters are located under different (Google Cloud Platform) projects. We have configured two profiles for our gcloud SDK on Jenkins, one for test (test-profile) and one for prod (prod-profile). We have defined a managed script in Jenkins that performs the rolling update for our replication controller.

The problem is that I cannot find a way to control which project the kubectl rolling-update command targets (you can specify which cluster but not which project, AFAICT). So right now our script that does the rolling update to our test server looks something like this:

gcloud config configurations activate test-profile && kubectl rolling-update ...

While this works, it could be extremely dangerous if two jobs run concurrently for different environments. Say that job 1 targets the test environment and job 2 targets prod. If job 2 switches the active profile to "prod-profile" before job 1 has executed its rolling-update command, job 1 will target the wrong project and, in the worst case, update the wrong replication controller (if the clusters have the same name).

Question

Is there a way to specify which project a kubectl command is targeting (for example during a rolling update) that is safe to run concurrently?
### Response: You can pass the --cluster= or --context= flags to kubectl to set the target for a single run. For example, if I have two clusters, "foo" and "bar", in my ~/.kube/config:

$ kubectl --cluster=foo get pods
NAME        READY     STATUS    RESTARTS   AGE
foo-ht1qh   1/1       Running   0          3h
foo-wf8f4   1/1       Running   0          3h
foo-yvgpd   1/1       Running   0          3h

vs

$ kubectl --cluster=bar get pods
NAME        READY     STATUS    RESTARTS   AGE
bar-de4h7   1/1       Running   0          9h
bar-c4g03   1/1       Running   0          9h
bar-2sprd   1/1       Running   0          9h
Background

We're using Jenkins to deploy a new version of a Kubernetes (k8s) replication controller to our test or prod cluster. The test and prod (k8s) clusters are located under different (Google Cloud Platform) projects. We have configured two profiles for our gcloud SDK on Jenkins, one for test (test-profile) and one for prod (prod-profile). We have defined a managed script in Jenkins that performs the rolling update for our replication controller.

The problem is that I cannot find a way to control which project the kubectl rolling-update command targets (you can specify which cluster but not which project, AFAICT). So right now our script that does the rolling update to our test server looks something like this:

gcloud config configurations activate test-profile && kubectl rolling-update ...

While this works, it could be extremely dangerous if two jobs run concurrently for different environments. Say that job 1 targets the test environment and job 2 targets prod. If job 2 switches the active profile to "prod-profile" before job 1 has executed its rolling-update command, job 1 will target the wrong project and, in the worst case, update the wrong replication controller (if the clusters have the same name).

Question

Is there a way to specify which project a kubectl command is targeting (for example during a rolling update) that is safe to run concurrently?