Dataset columns: input (string, 1 distinct value), output (string, 30 to 28.3k characters), text (string, 151 to 31.8k characters), instruction (string, 49 to 30.3k characters).
Run kubectl config view. It'll dump out the auth information used to access your cluster, including the basic auth username and password.
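For example, a quick way to pull these out (output abridged and purely illustrative; the real values come from your own kubeconfig):

  kubectl config view
  # look for a user entry along these lines:
  #   users:
  #   - name: my-gce-cluster
  #     user:
  #       username: admin
  #       password: <generated-password>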
I know it's possible to access the static views of the API, but I can't find the basic auth details that I need to log in via the browser. Where can I find these? I'm on GCE and have created a cluster.
The shortcoming mentioned in the documentation has not changed, because Kubernetes still uses the same mechanism (iptables and a userspace proxy) for proxying traffic sent to a service IP to the pods backing the service. However, I don't believe we actually know how bad it is. A team member briefly tried testing it early this year and didn't see any impact, but didn't do anything rigorous to verify. It's possible that it'll work fine at a couple thousand services. If you try it, we'd love to hear how it goes via IRC or email.
Some time ago I asked about handling thousands of services in a Kubernetes cluster: Can Kubernetes handle thousands of services? At that time Kubernetes was using env vars and my question was more oriented to that. Now that Kubernetes has DNS, it sounds like we don't have the env var problem anymore; however, the docs still say it won't perform well when handling thousands of services: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#shortcomings I wanted to know whether the documentation is outdated or whether there are still issues scaling Kubernetes to thousands of services.
It also has a more general definition (at least the way we tend to use it in the documentation). A piece of logic is "level based" if it only depends on the current state. A piece of logic is "edge-based" if it depends on history/transitions in addition to the current state. "Level based" components are more resilient because if they crash, they can come back up and just look at the current state. "Edge-based" components must store the history they rely on (or depend on some other component that stores it), so that when they come back up they can look at the current state and the history. Also, if there is some kind of temporary network partition and an edge-based component misses some of the updates, then it will compute the wrong output. However, "level based" components are usually less efficient, because they may need to scan a lot of state in order to compute an output, rather than just reading deltas. Many components are a mixture of the two. Simple example: You want to build a component that reports the number of pods in READY state. A level-based implementation would fetch all the pods from etcd (or the API server) and count. An edge-based implementation would do that once at startup, and then just watch for pods entering and exiting READY state.
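A rough shell analogy of that READY-pod example (illustrative only; the grep pattern and the --watch flag are assumptions about your kubectl version and output format):

  # level-based: recompute the count from the full current state every time you need it
  kubectl get pods | grep -c Running    # "Running" used here as a stand-in for READY

  # edge-based: count once at startup, then consume only the stream of transitions
  kubectl get pods --watch              # adjust a running counter as pods enter or leave the state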
What does "level-based" and "edge-based" mean in general? I read "In other words, the system's behavior is level-based rather than edge-based" from kubernetes documentation: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api-conventions.md with Google, I only find: http://www.keil.com/forum/9423/edge-based-vs-level-based-interrupt/ Thank you.
The --update-period flag tells Kubernetes how long to wait between updating each pod during the rolling update. With the update period set to 0, Kubernetes will update all pods at once, causing a short period of unavailability while the new pods start up. You should set --update-period to be at least as long as it takes each of your pods to initialize. The default value (1 minute) should be fine for almost all cases if you don't want to have to think about it.
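For example, a safer variant of the command from the question below (the 60s value is only an illustration; pick something at least as long as your pods take to become ready):

  kubectl rolling-update my-rc \
    --image=gcr.io/project/gcloudId:my-image-updated \
    --update-period=60s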
I'm using Google Container Engine with a cluster running Kubernetes 0.20.2. In this cluster, I have 1 replication controller (2 replicas) and 1 service with a spec type of LoadBalancer (basic setup). Everything is working fine here. Then I want to do a rolling update to a different image using the kubectl command: kubectl rolling-update my-rc \ --image=gcr.io/project/gcloudId:my-image-updated \ --update-period=0m From what I understood, running this command should take care of having zero downtime. Unfortunately, I have been doing some tests using the curl command in a loop, and I still have a downtime of a few seconds. Any ideas why this is happening?
This has been a common problem for folks who aren't running on setups that use salt to automatically configure system secrets on the master node (as GCE does). This has been fixed at head and should be fixed in the next release. In the meantime, you can manually create a secret for the DNS service that contains a kubeconfig file for kube2sky to connect to the master. You can see how this is done on GCE by looking at the create-kubeconfig-secret function in kube-addons.sh (when called with the username "system:dns"). The name of the resulting secret should be token-system-dns.
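A very rough sketch of that manual workaround (the data key, the apiVersion, and the file names are assumptions; treat the create-kubeconfig-secret function in kube-addons.sh as the authoritative version):

  # assuming you have already written a kubeconfig for the system:dns user to ./dns-kubeconfig
  base64 -w0 dns-kubeconfig > dns-kubeconfig.b64

  # create the secret that kube2sky will use to reach the master
  cat <<EOF | kubectl create -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: token-system-dns
  type: Opaque
  data:
    kubeconfig: $(cat dns-kubeconfig.b64)
  EOF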
I've just installed a Kubernetes test installation directly on my Fedora laptop using this guide. After starting kube2sky I've noticed I can't connect to the Kubernetes API since certificates are required. kubernetes-ro is deprecated and no longer available on my machine, so I get the following errors: E0627 15:58:07.145720 1 reflector.go:133] Failed to list *api.Service: Get https://10.254.0.1:443/api/v1beta3/services: x509: failed to load system roots and no roots provided E0627 15:58:07.146844 1 reflector.go:133] Failed to list *api.Endpoints: Get https://10.254.0.1:443/api/v1beta3/endpoints: x509: failed to load system roots and no roots provided How can I set up the certificates?
Why do you need to deploy all of the components together? In a micro services architecture, you would want to reduce the dependencies between each layer to a clean interface and then allow each layer to be deployed and scaled separately from the others. If you need to deploy them together (e.g. they share local disk or localhost networking) then you need to deploy them as a single pod. A single pod is an atomic scheduling unit, so it will be deployed onto a single host machine. Since it lands on a single host, this limits the scalability of your application to the size of a single host (not allowing you to scale out as your traffic increases). If your three layers are not tightly coupled, then you can run them in different pods, which allows them to be scheduled across multiple hosts (or on the same host if, for example, you are doing local development). To connect the pods together, you can define services. You should take a look at the guestbook example which illustrates how to define pods and services for a simple multi-tier web application running on Kubernetes.
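As a minimal sketch of the "separate pods tied together by a service" idea (the names, labels, and ports are placeholders and are not taken from the guestbook example):

  cat <<EOF | kubectl create -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: web
  spec:
    selector:
      tier: web        # matches the label on the web-tier pods, wherever they get scheduled
    ports:
    - port: 80
      targetPort: 8080
  EOF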
If I have a multi-tier application (say web / logic / database), where each tier has its own container, and I need to deploy all of these en bloc, do they all have to go into the same pod? And if they are in the same pod, does this have any implications in terms of the maximum size of application that can be run? Or is there some higher-level abstraction that I can use to start all three layers, but have them running on different minions?
Our target is 5s latency from creation -> Running (assuming the image is pre-pulled). The issue tracking this was https://github.com/GoogleCloudPlatform/kubernetes/issues/3954. This issue was closed a couple of weeks ago, so please update to version 0.20.2 and give it another try.
We would like to spin up pods quickly on our cluster, to handle 'one-off' tasks (the idea being that each task gets a new pod every time it runs). Currently, it takes about 10-15 seconds from a Pod creation API call -> completion. This is running on 3x m3.xlarge on AWS, with images that have already been cached (I presume, as I am using the same image twice on a single Node). We are running with restartPolicy = Never, as they are one-off tasks. I've tried fiddling with the imagePullPolicy (= Never) and resource options to no avail. It appears that the 10 second delay happens in the 'Running' phase, after Kubernetes has handed it off to a Pod. I can confirm the operation itself is very quick: running locally on Docker only takes about 0.5s total, including the operation. Is there any way to speed this up?
My previous answer was incorrect, edited with correct info The -h flag for docker run will set the hostname of the container when you create it. Test it out: docker run -h test.example.com -it ubuntu /bin/bash The docker start command does not have the same -h or --hostname argument though. It doesn't seem possible to change the hostname of an existing container, just a new one from an image. However w/r/t Kubernetes: There is an open issue on Github regarding how Kubernetes handles hostnames. It does not seem like Kubernetes exposes docker's hostname setting directly, but you might be able to influence it via your pod name
I want to set the hostname in a container running inside Kubernetes, but Kubernetes appears to be overriding whatever I set at runtime. I've tried both setting /etc/hostname in the docker image I'm having Kubernetes use, and including echo "host.example.com > /etc/hostname" in the CMD in the Dockerfile. There appears to be a docker flag -h to set the hostname. Is there a way for me to specify in my replication controller that it should start the container with a special flag? The container's Debian, if it helps.
Flannel is not working properly. Run /root/kube/reconfDocker.sh on every node; it will restart Docker and flannel. Then check with ifconfig that the docker0 and flannel0 bridge IPs are in the same network. Then load balancing will work. It works for me.
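To make that check concrete (the script path comes from the answer above; the addresses are only an illustration of what "same network" means):

  /root/kube/reconfDocker.sh         # run on every node
  ifconfig docker0 | grep 'inet '    # e.g. 10.1.64.1 on this node
  ifconfig flannel0 | grep 'inet '   # should sit in the same flannel-assigned range, e.g. 10.1.64.0/16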
We want to test kubernetes load balancing. So we create a 2 node cluster thats runs 6 replicas of our container. Container has running apache2 server and php and it will print pod name if we browse hostname.php Cluster details: 172.16.2.92 -- master and minion 172.16.2.91 -- minion RC and service details: frontend-controller.json: { "kind":"ReplicationController", "apiVersion":"v1beta3", "metadata":{ "name":"frontend", "labels":{ "name":"frontend" } }, "spec":{ "replicas":6, "selector":{ "name":"frontend" }, "template":{ "metadata":{ "labels":{ "name":"frontend" } }, "spec":{ "containers":[ { "name":"php-hostname", "image":"naresht/hostname", "ports":[ { "containerPort":80, "protocol":"TCP" } ] } ] } } } } frontend-service.json: { "kind":"Service", "apiVersion":"v1beta3", "metadata":{ "name":"frontend", "labels":{ "name":"frontend" } }, "spec":{ "createExternalLoadBalancer": true, "ports": [ { "port":3000, "targetPort":80, "protocol":"TCP" } ], "publicIPs": [ "172.16.2.92"], "selector":{ "name":"frontend" } } } Pod details: frontend-01bb8, frontend-svxfl and frontend-yki5s are running on node 172.16.2.91 frontend-65ykz , frontend-c1x0d and frontend-y925t are running on node 172.16.2.92 If we browse for 172.16.2.92:3000/hostname.php, it prints POD name. Problem: Running watch -n1 curl 172.16.2.92:3000/hostname.php on node 172.16.2.92 gives only that pods(frontend-65ykz , frontend-c1x0d and frontend-y925t ). They are not showing other node 172.16.2.91 pods. Running same command on node 172.16.2.91 gives only that pods. They are not showing other node 172.16.2.92 pods. Running same command outside of cluster showing only 172.16.2.92 pods. But we want to see all pods not specific node pods, if we run wherever. Check below details for more information and help you if anything wrong # kubectl get nodes NAME LABELS STATUS 172.16.2.91 kubernetes.io/hostname=172.16.2.91 Ready 172.16.2.92 kubernetes.io/hostname=172.16.2.92 Ready # kubectl get pods POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE frontend-01bb8 172.17.0.84 172.16.2.91/172.16.2.91 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-65ykz 10.1.64.79 172.16.2.92/172.16.2.92 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-c1x0d 10.1.64.77 172.16.2.92/172.16.2.92 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-svxfl 172.17.0.82 172.16.2.91/172.16.2.91 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-y925t 10.1.64.78 172.16.2.92/172.16.2.92 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute frontend-yki5s 172.17.0.83 172.16.2.91/172.16.2.91 name=frontend Running About a minute php-hostname naresht/hostname Running About a minute kube-dns-sbgma 10.1.64.11 172.16.2.92/172.16.2.92 k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns Running 45 hours kube2sky gcr.io/google_containers/kube2sky:1.1 Running 45 hours etcd quay.io/coreos/etcd:v2.0.3 Running 45 hours skydns gcr.io/google_containers/skydns:2015-03-11-001 Running 45 hours # kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) frontend name=frontend name=frontend 192.168.3.184 3000/TCP kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,name=kube-dns k8s-app=kube-dns 192.168.3.10 53/UDP kubernetes component=apiserver,provider=kubernetes <none> 192.168.3.2 443/TCP kubernetes-ro 
component=apiserver,provider=kubernetes <none> 192.168.3.1 80/TCP # iptables -t nat -L Chain KUBE-PORTALS-CONTAINER (1 references) target prot opt source destination REDIRECT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 redir ports 50734 REDIRECT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 redir ports 50734 REDIRECT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain redir ports 52415 REDIRECT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https redir ports 33373 REDIRECT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 60311 Chain KUBE-PORTALS-HOST (1 references) target prot opt source destination DNAT tcp -- anywhere 192.168.3.184 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734 DNAT tcp -- anywhere kube02 /* default/frontend: */ tcp dpt:3000 to:172.16.2.92:50734 DNAT udp -- anywhere 192.168.3.10 /* default/kube-dns: */ udp dpt:domain to:172.16.2.92:52415 DNAT tcp -- anywhere 192.168.3.2 /* default/kubernetes: */ tcp dpt:https to:172.16.2.92:33373 DNAT tcp -- anywhere 192.168.3.1 /* default/kubernetes-ro: */ tcp dpt:http to:172.16.2.92:60311 Thanks
The PublicIP thing is changing lately and I don't know exactly where it landed. But, services are the ip address and port that you reference in your applications. In other words, if I create a database, I create it as a pod (with or without a replication controller). I don't connect to the pod, however, from another application. I connect to a service which knows about the pod (via a label selector). This is important for a number of reasons. If the database fails and is recreated on a different host, the application accessing it still references the (stationary) service ip address, and the kubernetes proxies take care of getting the request to the correct pod. The service address is known by all Kubernetes nodes. Any node can proxy the request appropriately. I think a variation of the theme applies to your problem. You might consider creating an external load balancer which forwards traffic to all of your nodes for the specific (web) service. You still need to take the node out of the balancer's targets if the node goes down, but, I think that any node will forward the traffic for any service whether or not that service is on that node. All that said, I haven't had direct experience with external (public) ip addresses load balancing to the cluster, so there are probably better techniques. The main point I was trying to make is the node will proxy the request to the appropriate pod whether or not that node has a pod. -g
I'm trying to understand a good way to handle a Kubernetes cluster where there are several nodes and a master. I host the cluster within my company's cloud, on plain Ubuntu boxes (so no Google Cloud or AWS). Each pod contains the webapp (which is stateless) and I run any number of pods via replication controllers. I see that with Services I can declare PublicIPs; however, this is confusing because after adding the IP addresses of my minion nodes, each IP only exposes the pods running on that node and doesn't do any sort of load balancing. Due to this, if a node doesn't have any active pod running (as created pods are randomly allocated among nodes), it simply times out and I end up with some IP addresses that don't respond. Am I understanding this wrong? How can I do proper external load balancing for my web app? Should I do load balancing at the Pod level instead of using a Service? If so, pods are considered mortal and they may dynamically die and be born, so how do I keep track of them?
See the Compute Resources section of the Kubernetes user guide. It describes how to assign cpu and memory limits to your containers and how the scheduler places them in your cluster. As you scale up the number of pods you are running, the scheduler will attempt to place them in the available space. If there is no way that the pods can be scheduled, then the pods will stay in a pending state until the scheduler can find a place to run them. You may be able to relax some constraints you placed on your pods (host ports, label selectors, etc.) or you may need to increase the compute capacity of your cluster by adding additional nodes. Right now, the cluster will not automatically add new nodes when it is out of capacity. Work to add this functionality, at least for GCE, is now underway (see #11748) but does not exist in v1.0 of Kubernetes. Until that feature is implemented, you will need to manually scale your cluster. If you are running on GCE / GKE, this can be accomplished by resizing the managed instance group that contains the nodes for your cluster. On other cloud providers, you need to clone the node configuration onto a new node so that it has the proper credentials to join the cluster.
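For the resource side of the question, a minimal sketch of per-container limits (the 2Gi / 4-CPU figures come from the question; the names and image are placeholders, and only cpu and memory are shown, not disk):

  cat <<EOF | kubectl create -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: web
  spec:
    containers:
    - name: php-nginx
      image: my-registry/php-nginx    # placeholder image
      resources:
        limits:
          cpu: "4"
          memory: 2Gi
  EOF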
A few Kubernetes novice questions. If I got it right, when a Kubernetes cluster is set up its size is defined by the number of minions in the cluster; let's say I create a cluster with two minions. Say I decide to deploy 4 pods, each with PHP and nginx serving it. Is there a way I can choose the amount of resources I want each pod to have? In old deployments we deployed directly to servers/VMs, so we knew the amount of resources of each server/VM. Suppose I have a non-functional requirement of 2 GB RAM, 4 CPUs, and 160 GB HDD. How can I do that using Kubernetes? Now suppose I have those 4 pods deployed and I want to scale up, and the new pods need to fulfill the same non-functional requirements. Do I need to resize my cluster or is there a way Kubernetes does it for me? Thanks.
Run gcloud container clusters get-credentials my-cluster-name to update the kubeconfig file and point kubectl at a cluster on Google Container Engine.
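For example (the cluster name and zone are placeholders; --zone is only needed if it is not already set in your gcloud config):

  gcloud container clusters get-credentials my-cluster-name --zone us-central1-a
  kubectl get nodes    # should now talk to the GKE master instead of localhost:8080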
I have followed the installation steps: https://cloud.google.com/container-engine/docs/tutorials/guestbook#install_gcloud_and_kubectl A Google Container Engine cluster is up and running and gcloud CLI is authenticated and works. But kubectl says: "couldn't read version from server: Get http://local host:8080/api: dial tcp 127.0.0.1:8080: connection refused" I think I need to use kubectl config set-cluster to setup the connection to my cluster on GCE. Where do I find the address of the Kubernetes master of my GCE cluster? With gcloud beta container clusters list I seemingly get the master IP of my cluster. I used that with kubectl config set-cluster. Now it says: "error: couldn't read version from server: Get http:// 104.197.49.119/api: dial tcp 104.197.49.119:80: i/o timeout" Am I on the right track with this? Additional strangeness: gcloud container or gcloud preview container doesn't work for me. Only gcloud beta container MASTER_VERSION of my cluster is 0.21.4, while the version of my kubectl client is GitVersion:"v0.20.2", even though freshly installed with gcloud.
The easiest thing for you to do at the moment is to make a service of type "NodePort", and to configure your HAProxy to point at minion-1:<nodePort>. createExternalLoadBalancer is the old, less flexible, way to do this; it requires the cloud provider to do work. Type=NodePort doesn't require anything special from the cloud provider.
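A sketch of what that combination might look like (the nodePort value, port numbers, and HAProxy backend name are assumptions for illustration):

  cat <<EOF | kubectl create -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: webservice
  spec:
    type: NodePort
    selector:
      app: webservice
    ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # must fall inside the cluster's configured node port range
  EOF

  # then, in haproxy.cfg on the external proxy:
  # backend webservice
  #     server minion-1 <minion-1-ip>:30080 check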
Assume the following stack: A dedicated server The server is running Vagrant Vagrant is running 2 virtual machines master + minion-1 (Kubernetes) minion-1 is running a pod Within the pod is 2 containers: webservice and fileservice Both webservice and fileservice should be accessible from internet i.e. from outside. Either by web.mydomain.com - file.mydomain.com or www.mydomain.com/web/ - www.mydomain.com/file/ Before using Kubernetes, I was using a remote proxy (HAproxy) and simply mapped domain names to an internal ip / port. Now with Kubernetes, I can imagine there is something dedicated to this task but I honestly have no clue from where to start. I read about "createExternalLoadBalancer", kubernetes Services and kube-proxy. Should a reverse-proxy still be put somewhere (before vagrant or within a pod ?) also is using Vagrant a good option for production (staying in the scope of this question) ?
It looks like the issue is gone; I've tried one more time and the installation went flawlessly:
When I run the Kubernetes vagrant setup script: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash I get: Validating master Validating minion-1 Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api:dial tcp 10.245.1.2:443: connection refused Anyone know how I can fix this?
Update (May 2016): As of late March 2016, gcloud will now install kubectl for windows. gcloud won't currently install kubectl on windows (hence the error). You can download a recent kubectl binary directly from GCS instead (gsutil cp gs://kubernetes-release/release/v1.0.0/bin/windows/amd64/kubectl.exe kubectl.exe). Since kubectl is much less heavily tested on windows than on Mac/Linux, you may need to pass the --kubeconfig argument if it isn't finding your configuration file in the default location (gcloud container clusters get-credentials NAME should write to .kube/config in your home directory and kubectl should read from the same file).
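For example, once the binary is downloaded as above, something like this should work from a Windows command prompt (the path shown is just the default location mentioned in the text, spelled out explicitly):

  kubectl.exe --kubeconfig=%USERPROFILE%\.kube\config get pods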
I'm trying to spin up a Kubernetes (k8s) cluster on GCE. When I run gcloud components update kubectl on a Windows machine I get ERROR: (gcloud.components.update) The following components are unknown [kubectl]
The recommended way to install the Cloud SDK non-interactively is to set the environment variable CLOUDSDK_CORE_DISABLE_PROMPTS: export CLOUDSDK_CORE_DISABLE_PROMPTS=1 This will make the Cloud SDK run in non-interactive mode, in which it accepts the default value for all prompts. This applies both to installation and general operation (which will be useful for scripting). To make this permanent, run gcloud config set core/disable_prompts 1 once the Cloud SDK is installed.
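Applied to the hanging step from the question, one way to wire it in (an illustration; setting the variable once in CircleCI's environment configuration would work just as well):

  CLOUDSDK_CORE_DISABLE_PROMPTS=1 ./google-cloud-sdk/install.sh    # accepts the defaults instead of waiting on the Y/n prompt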
I am trying to use CircleCI to automate my push to the google container registry and initiate a rolling update of the kubernetes pods. I have a circle.yml file with dependencies like this: dependencies: cache_directories: - ~/kubernetes - ./google-cloud-sdk pre: - chmod a+x ./download-gcloud.sh ./install-kubernetes.sh ./deploy.sh - ./download-gcloud.sh - ./google-cloud-sdk/install.sh <--- This step hangs - ./google-cloud-sdk/bin/gcloud components update - ./google-cloud-sdk/bin/gcloud auth activate-service-account $GCLOUD_CLIENT_ID --key-file ./kubernetes/gcloud_pem.p12 - ./install-kubernetes.sh I can pull the tar file down and install it, but I am not sure how to select a prompt, so the build hangs: Welcome to the Google Cloud SDK! To help improve the quality of this product, we collect anonymized data on how the SDK is used. You may choose to opt out of this collection now (by choosing 'N' at the below prompt), or at any time in the future by running the following command: gcloud config set --scope=user disable_usage_reporting true Do you want to help improve the Google Cloud SDK (Y/n)? Is there a flag I can set when running the install script to disable prompts?
I am not sure if patch is 100% working yet, but if you are going to do this, you at least need to put apiVersion inside metadata, like so: --patch='{ "metadata": { "apiVersion":"v1" }, "spec": { "selector": { "build":"2"} } }'
I'm trying to update a service using: kubectl update service my-service \ --patch='{ "apiVersion":"v1", "spec": { "selector": { "build":"2"} } }' I receive the following error: Error from server: service "\"apiVersion\":\"v1\"," not found I have tried the following: moving the service name to the end; removing the apiVersion. Maybe kubectl update is not available for services? For now I was making my updates by simply stopping and restarting my service. But sometimes the corresponding forwarding port changes, so it seems not to be a good choice ... PS: v0.19 api_v1
By default, the master only exposes HTTPS to the public internet, not HTTP. You should be able to hit https://admin:password@master-public-ip/api/v1/pods/, where password is the generated password for the admin user. This can be found either in the .kube/config file on your machine, or in the /srv/kubernetes/known_tokens.csv file on the master. E.g. on the master VM: $ cat /srv/kubernetes/known_tokens.csv mYpASSWORD,admin,admin unused,kubelet,kubelet ... Or on your machine: $ cat ~/.kube/config ... - name: my-cluster user: client-certificate-data: ... client-key-data: ... password: mYpASSWORD username: admin ... $ curl --insecure https://admin:mYpASSWORD@master-public-ip/api/v1/pods/ ... To avoid using --insecure (i.e. actually verify the server certificate that your master is presenting), you can use the --cacert flag to specify the cluster certificate authority from your .kube/config file. $ cat ~/.kube/config ... - cluster: certificate-authority-data: bIgLoNgBaSe64eNcOdEdStRiNg server: https://master-public-ip name: my-cluster ... $ echo bIgLoNgBaSe64eNcOdEdStRiNg | base64 -d > ca.crt $ curl --cacert=ca.crt https://admin:mYpASSWORD@master-public-ip/api/v1/pods/ ...
I can get pod information using http://localhost:8001/api/v1/pods from inside my cluster. Is there any way to get pod information using http://master-public-ip:8001/api/v1/pods ?
The first two lines of the kubelet log file actually point to the underlying problem -- you aren't specifying any client credentials for the kubelet to connect to the master. The --tls-cert-file and --tls-private-key-file arguments for the kubelet are used to configure the http server on the kubelet (if not specified, the kubelet will generate a self-signed certificate for its https endpoint). This certificate / key pair are not used as the client certificate presented to the master for authentication. To specify credentials, there are two options: a kubeconfig file and a kubernetes_auth file. The later is deprecated, so I would recommend using a kubeconfig file. Inside the kubeconfig file you need to specify either a bearer token or a client certificate that the kubelet should present to the apiserver. You can also specify the CA certificate for the apiserver (if you want the connection to be secure) or tell the kubelet to skip checking the certificate presented by the apiserver. Since you have certificates for the apiserver, I'd recommend adding the CA certificate to the kubeconfig file. The kubeconfig file should look like: apiVersion: v1 kind: Config users: - name: kubelet user: client-certificate-data: <base64-encoded-cert> client-key-data: <base64-encoded-key> clusters: - name: local cluster: certificate-authority-data: <base64-encoded-ca-cert> contexts: - context: cluster: local user: kubelet name: service-account-context current-context: service-account-context To generate the base64 encoded client cert, you should be able to run something like cat /var/run/kubernetes/kubelet_36kr.pem | base64. If you don't have the CA certificate handy, you can replace the certificate-authority-data: <base64-encoded-ca-cert> with insecure-skip-tls-verify: true. If you put this file at /var/lib/kubelet/kubeconfig it should get picked up automatically. Otherwise, you can use the --kubeconfig argument to specify a custom location.
###Instruction: I deployed apiserver on master node (core01) with following conf: core01> /opt/bin/kube-apiserver \ --insecure_bind_address=127.0.0.1 \ --insecure_port=8080 \ --kubelet_port=10250 \ --etcd_servers=http://core01:2379,http://core02:2379,http://core03:2379 \ --service-cluster-ip-range=10.1.0.0/16 \ --allow_privileged=false \ --logtostderr=true \ --v=5 \ --tls-cert-file="/var/run/kubernetes/apiserver_36kr.pem" \ --tls-private-key-file="/var/run/kubernetes/apiserver_36kr.key" \ --client-ca-file="/var/run/kubernetes/cacert.pem" \ --kubelet-certificate-authority="/var/run/kubernetes/cacert.pem" \ --kubelet-client-certificate="/var/run/kubernetes/kubelet_36kr.pem" \ --kubelet-client-key="/var/run/kubernetes/kubelet_36kr.key" On minion node (core02), I can call api from HTTPS: core02> curl https://core01:6443/api/v1/nodes --cert /var/run/kubernetes/kubelet_36kr.pem --key /var/run/kubernetes/kubelet_36kr.key > GET /api/v1/nodes HTTP/1.1 > Host: core01:6443 > User-Agent: curl/7.42.1 > Accept: */* > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Sat, 27 Jun 2015 15:33:50 GMT < Content-Length: 1577 < { "kind": "NodeList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/nodes", "resourceVersion": "510078" }, .... However, I can not start kubelet on this minion. It always complain no credentials. How can I make it work? Is there any doc on master <-> minion communication authentication? Could you please give me the best practice? FYI, The command is following: core02> /opt/bin/kubelet \ --logtostderr=true \ --v=0 \ --api_servers=https://core01:6443 \ --address=127.0.0.1 \ --port=10250 \ --allow-privileged=false \ --tls-cert-file="/var/run/kubernetes/kubelet_36kr.pem" \ --tls-private-key-file="/var/run/kubernetes/kubelet_36kr.key" kubelet log is following: W0627 23:34:03.646311 3004 server.go:460] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead. W0627 23:34:03.646520 3004 server.go:422] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults. I0627 23:34:03.646710 3004 manager.go:127] cAdvisor running in container: "/system.slice/sshd.service" I0627 23:34:03.647292 3004 fs.go:93] Filesystem partitions: map[/dev/sda9:{mountpoint:/ major:0 minor:30} /dev/sda4:{mountpoint:/usr major:8 minor:4} /dev/sda6:{mountpoint:/usr/share/oem major:8 minor:6}] I0627 23:34:03.648234 3004 manager.go:156] Machine: {NumCores:1 CpuFrequency:2399996 MemoryCapacity:1046294528 MachineID:29f94a4fad8b31668bd219ca511bdeb0 SystemUUID:4F4AF929-8BAD-6631-8BD2-19CA511BDEB0 BootID:fa1bea28-675e-4989-ad86-00797721a794 Filesystems:[{Device:/dev/sda9 Capacity:18987593728} {Device:/dev/sda4 Capacity:1031946240} {Device:/dev/sda6 Capacity:113229824}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:52:54:71:f6:fc:b8 Speed:0 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:1046294528 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}]} I0627 23:34:03.649934 3004 manager.go:163] Version: {KernelVersion:4.0.5 ContainerOsVersion:CoreOS 695.2.0 DockerVersion:1.6.2 CadvisorVersion:0.15.1} I0627 23:34:03.651758 3004 plugins.go:69] No cloud provider specified. 
I0627 23:34:03.651855 3004 docker.go:289] Connecting to docker on unix:///var/run/docker.sock I0627 23:34:03.652877 3004 server.go:659] Watching apiserver E0627 23:34:03.748954 3004 reflector.go:136] Failed to list *api.Pod: the server has asked for the client to provide credentials (get pods) E0627 23:34:03.750157 3004 reflector.go:136] Failed to list *api.Node: the server has asked for the client to provide credentials (get nodes) E0627 23:34:03.751666 3004 reflector.go:136] Failed to list *api.Service: the server has asked for the client to provide credentials (get services) I0627 23:34:03.758158 3004 plugins.go:56] Registering credential provider: .dockercfg I0627 23:34:03.856215 3004 server.go:621] Started kubelet E0627 23:34:03.858346 3004 kubelet.go:662] Image garbage collection failed: unable to find data for container / I0627 23:34:03.869739 3004 kubelet.go:682] Running in container "/kubelet" I0627 23:34:03.869755 3004 server.go:63] Starting to listen on 127.0.0.1:10250 E0627 23:34:03.899877 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba23275ceda25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"starting", Message:"Starting kubelet.", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) I0627 23:34:04.021297 3004 factory.go:226] System is using systemd I0627 23:34:04.021790 3004 factory.go:234] Registering Docker factory I0627 23:34:04.022241 3004 factory.go:89] Registering Raw factory I0627 23:34:04.144065 3004 manager.go:946] Started watching for new ooms in manager I0627 23:34:04.144655 3004 oomparser.go:183] oomparser using systemd I0627 23:34:04.145379 3004 manager.go:243] Starting recovery of all containers I0627 23:34:04.293020 3004 manager.go:248] Recovery completed I0627 23:34:04.343829 3004 status_manager.go:56] Starting to sync pod status with apiserver I0627 23:34:04.343928 3004 kubelet.go:1683] Starting kubelet main sync loop. 
E0627 23:34:04.457765 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232995c8213", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) E0627 23:34:04.659874 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232a599cf8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) ### Response: The first two lines of the kubelet log file actually point to the underlying problem -- you aren't specifying any client credentials for the kubelet to connect to the master. The --tls-cert-file and --tls-private-key-file arguments for the kubelet are used to configure the http server on the kubelet (if not specified, the kubelet will generate a self-signed certificate for its https endpoint). This certificate / key pair are not used as the client certificate presented to the master for authentication. To specify credentials, there are two options: a kubeconfig file and a kubernetes_auth file. The later is deprecated, so I would recommend using a kubeconfig file. Inside the kubeconfig file you need to specify either a bearer token or a client certificate that the kubelet should present to the apiserver. You can also specify the CA certificate for the apiserver (if you want the connection to be secure) or tell the kubelet to skip checking the certificate presented by the apiserver. Since you have certificates for the apiserver, I'd recommend adding the CA certificate to the kubeconfig file. 
The kubeconfig file should look like:

apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate-data: <base64-encoded-cert>
    client-key-data: <base64-encoded-key>
clusters:
- name: local
  cluster:
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context

To generate the base64 encoded client cert, you should be able to run something like cat /var/run/kubernetes/kubelet_36kr.pem | base64. If you don't have the CA certificate handy, you can replace the certificate-authority-data: <base64-encoded-ca-cert> with insecure-skip-tls-verify: true. If you put this file at /var/lib/kubelet/kubeconfig it should get picked up automatically. Otherwise, you can use the --kubeconfig argument to specify a custom location.
I deployed apiserver on master node (core01) with following conf: core01> /opt/bin/kube-apiserver \ --insecure_bind_address=127.0.0.1 \ --insecure_port=8080 \ --kubelet_port=10250 \ --etcd_servers=http://core01:2379,http://core02:2379,http://core03:2379 \ --service-cluster-ip-range=10.1.0.0/16 \ --allow_privileged=false \ --logtostderr=true \ --v=5 \ --tls-cert-file="/var/run/kubernetes/apiserver_36kr.pem" \ --tls-private-key-file="/var/run/kubernetes/apiserver_36kr.key" \ --client-ca-file="/var/run/kubernetes/cacert.pem" \ --kubelet-certificate-authority="/var/run/kubernetes/cacert.pem" \ --kubelet-client-certificate="/var/run/kubernetes/kubelet_36kr.pem" \ --kubelet-client-key="/var/run/kubernetes/kubelet_36kr.key" On minion node (core02), I can call api from HTTPS: core02> curl https://core01:6443/api/v1/nodes --cert /var/run/kubernetes/kubelet_36kr.pem --key /var/run/kubernetes/kubelet_36kr.key > GET /api/v1/nodes HTTP/1.1 > Host: core01:6443 > User-Agent: curl/7.42.1 > Accept: */* > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Sat, 27 Jun 2015 15:33:50 GMT < Content-Length: 1577 < { "kind": "NodeList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/nodes", "resourceVersion": "510078" }, .... However, I can not start kubelet on this minion. It always complain no credentials. How can I make it work? Is there any doc on master <-> minion communication authentication? Could you please give me the best practice? FYI, The command is following: core02> /opt/bin/kubelet \ --logtostderr=true \ --v=0 \ --api_servers=https://core01:6443 \ --address=127.0.0.1 \ --port=10250 \ --allow-privileged=false \ --tls-cert-file="/var/run/kubernetes/kubelet_36kr.pem" \ --tls-private-key-file="/var/run/kubernetes/kubelet_36kr.key" kubelet log is following: W0627 23:34:03.646311 3004 server.go:460] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead. W0627 23:34:03.646520 3004 server.go:422] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults. I0627 23:34:03.646710 3004 manager.go:127] cAdvisor running in container: "/system.slice/sshd.service" I0627 23:34:03.647292 3004 fs.go:93] Filesystem partitions: map[/dev/sda9:{mountpoint:/ major:0 minor:30} /dev/sda4:{mountpoint:/usr major:8 minor:4} /dev/sda6:{mountpoint:/usr/share/oem major:8 minor:6}] I0627 23:34:03.648234 3004 manager.go:156] Machine: {NumCores:1 CpuFrequency:2399996 MemoryCapacity:1046294528 MachineID:29f94a4fad8b31668bd219ca511bdeb0 SystemUUID:4F4AF929-8BAD-6631-8BD2-19CA511BDEB0 BootID:fa1bea28-675e-4989-ad86-00797721a794 Filesystems:[{Device:/dev/sda9 Capacity:18987593728} {Device:/dev/sda4 Capacity:1031946240} {Device:/dev/sda6 Capacity:113229824}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:21474836480 Scheduler:cfq} 8:16:{Name:sdb Major:8 Minor:16 Size:1073741824 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:52:54:71:f6:fc:b8 Speed:0 Mtu:1500} {Name:flannel0 MacAddress: Speed:10 Mtu:1472}] Topology:[{Id:0 Memory:1046294528 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}]} I0627 23:34:03.649934 3004 manager.go:163] Version: {KernelVersion:4.0.5 ContainerOsVersion:CoreOS 695.2.0 DockerVersion:1.6.2 CadvisorVersion:0.15.1} I0627 23:34:03.651758 3004 plugins.go:69] No cloud provider specified. 
I0627 23:34:03.651855 3004 docker.go:289] Connecting to docker on unix:///var/run/docker.sock I0627 23:34:03.652877 3004 server.go:659] Watching apiserver E0627 23:34:03.748954 3004 reflector.go:136] Failed to list *api.Pod: the server has asked for the client to provide credentials (get pods) E0627 23:34:03.750157 3004 reflector.go:136] Failed to list *api.Node: the server has asked for the client to provide credentials (get nodes) E0627 23:34:03.751666 3004 reflector.go:136] Failed to list *api.Service: the server has asked for the client to provide credentials (get services) I0627 23:34:03.758158 3004 plugins.go:56] Registering credential provider: .dockercfg I0627 23:34:03.856215 3004 server.go:621] Started kubelet E0627 23:34:03.858346 3004 kubelet.go:662] Image garbage collection failed: unable to find data for container / I0627 23:34:03.869739 3004 kubelet.go:682] Running in container "/kubelet" I0627 23:34:03.869755 3004 server.go:63] Starting to listen on 127.0.0.1:10250 E0627 23:34:03.899877 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba23275ceda25", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"starting", Message:"Starting kubelet.", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016043, nsec:856189989, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) I0627 23:34:04.021297 3004 factory.go:226] System is using systemd I0627 23:34:04.021790 3004 factory.go:234] Registering Docker factory I0627 23:34:04.022241 3004 factory.go:89] Registering Raw factory I0627 23:34:04.144065 3004 manager.go:946] Started watching for new ooms in manager I0627 23:34:04.144655 3004 oomparser.go:183] oomparser using systemd I0627 23:34:04.145379 3004 manager.go:243] Starting recovery of all containers I0627 23:34:04.293020 3004 manager.go:248] Recovery completed I0627 23:34:04.343829 3004 status_manager.go:56] Starting to sync pod status with apiserver I0627 23:34:04.343928 3004 kubelet.go:1683] Starting kubelet main sync loop. 
E0627 23:34:04.457765 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232995c8213", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:452676115, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!) E0627 23:34:04.659874 3004 event.go:185] Server rejected event '&api.Event{TypeMeta:api.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"core02.13eba232a599cf8c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*util.Time)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Node", Namespace:"", Name:"core02", UID:"core02", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node core02 status is now: NodeReady", Source:api.EventSource{Component:"kubelet", Host:"core02"}, FirstTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, LastTimestamp:util.Time{Time:time.Time{sec:63571016044, nsec:658020236, loc:(*time.Location)(0x1ba6120)}}, Count:1}': 'the server has asked for the client to provide credentials (post events)' (will not retry!)
You can definitely use kubectl without gcloud. You can download the latest official Kubernetes release from its releases page on GitHub, then run kubectl by untarring the release and either running ./cluster/kubectl.sh or copying the kubectl binary from platforms/$OS/$ARCHITECTURE/kubectl into your shell's PATH. Also, setting up Kubernetes on Vagrant can be as simple as running these two commands, which should get everything in place for you:

export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash
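As a concrete (untested) sketch of the download route on a 64-bit Linux box -- the version number and exact asset URL are assumptions, so grab whatever is current from the releases page:

curl -L -O https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.1/kubernetes.tar.gz
tar xzf kubernetes.tar.gz
sudo cp kubernetes/platforms/linux/amd64/kubectl /usr/local/bin/kubectl
kubectl version    # the client version should print even before any cluster is configured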
###Instruction: As per this question, I was wondering if there are any plans to be able to use the kubectl tool without installing gcloud? The use case I have in mind is the same as the one that fellow had: running kubernetes on CoreOS locally via vagrant. ### Response: You can definitely use kubectl without gcloud. You can download the latest official Kubernetes release from its releases page on Github, then run kubectl by untarring it and either running ./cluster/kubectl.sh or copying the kubectl binary from platforms/$OS/$ARCHITECTURE/kubectl into your shell's path. Also, setting up Kubernetes on vagrant can be as simple as running these two commands, which should get everything in place for you: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash
As per this question, I was wondering if there are any plans to be able to use the kubectl tool without installing gcloud? The use case I have in mind is the same as the one that fellow had: running kubernetes on CoreOS locally via vagrant.
Now that v1.0 is out, better native support for batch computations is one of the team's top priorities, but it is already quite possible to run them. If you run something as a pod rather than as a replication controller, you can set the restartPolicy field on it. The OnFailure policy is probably what you'd want, meaning that Kubernetes will restart a pod that exited with a non-zero exit code, but won't restart a pod that exited zero. If you're using kubectl run to start your pods, though, I'm unfortunately not aware of a way to have it create just a pod rather than a replication controller. If you'd like something like that, it'd be great if you opened an issue requesting it as an option.
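For example, a minimal standalone pod with that policy might look like the following sketch (the pod name, image and command are placeholders for your Monte Carlo worker):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: montecarlo-job-1
spec:
  restartPolicy: OnFailure          # restart only if the container exits non-zero
  containers:
  - name: worker
    image: local/montecarlo         # placeholder image
    command: ["./run-simulation"]   # placeholder command
EOF

Once the container exits 0, the pod stays in the Succeeded state instead of being restarted.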
###Instruction: Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. But it (replication controller, I guess?) managed to restart same computation again and again due to default restart policy. Is there a way now to let pods die? Or maybe other workarounds to do pods garbage collection? ### Response: Now that v1.0 is out, better native support for getting the batch computations is one of the team's top priorities, but it is already quite possible to run them. If you run something as a pod rather than as a replication controller, you can set the restartPolicy field on it. The OnFailure policy is probably what you'd want, meaning that kubernetes will restart a pod that exited with a non-zero exit code, but won't restart a pod that exited zero. If you're using kubectl run to start your pods, though, I'm unfortunately not aware of a way to have it create just a pod rather than a replication controller. If you'd like something like that, it'd be great if you opened an issue requesting it as an option.
Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. But it (replication controller, I guess?) managed to restart same computation again and again due to default restart policy. Is there a way now to let pods die? Or maybe other workarounds to do pods garbage collection?
I'm getting the same problem as well - it's due to the db initialization script wrongly assuming that a non-empty volume directory means the database has already been initialized. Unfortunately, a new GCE persistent disk contains a lost+found directory. I've submitted a fix for the docker image here.
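You can see the offending directory for yourself by attaching and mounting the fresh disk on any GCE instance before handing it to the pod. A rough sketch -- the device name /dev/sdb is an assumption, so check how the disk actually shows up on your instance:

sudo mkdir -p /mnt/postgres-disk
sudo mount /dev/sdb /mnt/postgres-disk
ls -A /mnt/postgres-disk      # a freshly formatted ext4 disk typically contains only lost+found
sudo umount /mnt/postgres-disk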
###Instruction: I'm getting started with Kubernates and having problems installing Postgres using Kubernetes with a GCE Persistent disk. I can successfully install Mysql using both the Kubernates walkthroughs and also the following guide: http://amygdala.github.io/kubernetes/2015/01/13/k8s1.html However, when I try to achieve a similar thing with postgres, it seems to fail when attaching to the disk or using the disk. I've created a pod yaml based on the mysql one from the above post but substituting the postgres docker image: apiVersion: v1beta1 id: postgres desiredState: manifest: version: v1beta1 id: postgres containers: - name: postgres image: postgres env: - name: DB_PASS value: password cpu: 100 ports: - containerPort: 5432 volumeMounts: # name must match the volume name below - name: persistent-storage # mount path within the container mountPath: /var/lib/postgresql/data volumes: - name: persistent-storage source: persistentDisk: # This GCE PD must already exist and be formatted ext4 pdName: postgres-disk fsType: ext4 labels: name: postgres kind: Pod However when I create $ kubectl create -f postgres.yaml I get the following errors: $ kubectl logs postgres $ postgres cannot access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory I can see that the postgres-disk is attached to a minion server, so I'm wondering if it's related to the Volumes in the docker image that I'm using, or if I need separate mount paths for the postgresql.conf file. Now if I change the mount path (eg mountPath: /var/lib/postgresql) the pod will start ok but it doesn't appear to be using the persistent data. Inspecting the volumes in the docker container on the minion gives me: "Volumes": { "/dev/termination-log": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/containers/postgres/91ecf33c939588b4165865f46b646677bf964fab81ea7ec08b598568513d4644", "/var/lib/postgresql": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/volumes/kubernetes.io~gce-pd/pgdata", "/var/lib/postgresql/data": "/var/lib/docker/vfs/dir/c3ecda11de6a598d99842c06bee22097f1cb63a6e467cbe7af874d003140a4af", "/var/run/secrets/kubernetes.io/serviceaccount": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/volumes/kubernetes.io~secret/default-token-b6s28" }, I've also tried using a json file with v1beta3 with similar results: { "kind": "Pod", "apiVersion": "v1beta3", "metadata": { "name": "postgres" }, "spec": { "volumes": [{ "name": "pgdata", "gcePersistentDisk": { "pdName": "postgres-disk", "fsType": "ext4" } }], "containers": [ { "name": "postgres", "image": "postgres", "ports": [ { "name": "postgres-ports", "hostPort": 5432, "containerPort": 5432 } ], "env": [ { "name": "DB_USER", "value": "postgres" }, { "name": "DB_PASS", "value": "password" } ], "volumeMounts": [ { "name": "pgdata", "readOnly": false, "mountPath": "/var/lib/postgresql/data" } ] } ] } } It's quite possible I've just missed something in the doco, but any help would be appreciated with this! ### Response: I'm getting the same problem as well - it's due to the db initialization script wrongly assuming a non-empty volume directory means it has already been initialized. Unfortunately, a new GCE persistence disk contains a lost+found directory. I've submitted a fix for the docker image here.
I'm getting started with Kubernates and having problems installing Postgres using Kubernetes with a GCE Persistent disk. I can successfully install Mysql using both the Kubernates walkthroughs and also the following guide: http://amygdala.github.io/kubernetes/2015/01/13/k8s1.html However, when I try to achieve a similar thing with postgres, it seems to fail when attaching to the disk or using the disk. I've created a pod yaml based on the mysql one from the above post but substituting the postgres docker image: apiVersion: v1beta1 id: postgres desiredState: manifest: version: v1beta1 id: postgres containers: - name: postgres image: postgres env: - name: DB_PASS value: password cpu: 100 ports: - containerPort: 5432 volumeMounts: # name must match the volume name below - name: persistent-storage # mount path within the container mountPath: /var/lib/postgresql/data volumes: - name: persistent-storage source: persistentDisk: # This GCE PD must already exist and be formatted ext4 pdName: postgres-disk fsType: ext4 labels: name: postgres kind: Pod However when I create $ kubectl create -f postgres.yaml I get the following errors: $ kubectl logs postgres $ postgres cannot access the server configuration file "/var/lib/postgresql/data/postgresql.conf": No such file or directory I can see that the postgres-disk is attached to a minion server, so I'm wondering if it's related to the Volumes in the docker image that I'm using, or if I need separate mount paths for the postgresql.conf file. Now if I change the mount path (eg mountPath: /var/lib/postgresql) the pod will start ok but it doesn't appear to be using the persistent data. Inspecting the volumes in the docker container on the minion gives me: "Volumes": { "/dev/termination-log": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/containers/postgres/91ecf33c939588b4165865f46b646677bf964fab81ea7ec08b598568513d4644", "/var/lib/postgresql": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/volumes/kubernetes.io~gce-pd/pgdata", "/var/lib/postgresql/data": "/var/lib/docker/vfs/dir/c3ecda11de6a598d99842c06bee22097f1cb63a6e467cbe7af874d003140a4af", "/var/run/secrets/kubernetes.io/serviceaccount": "/var/lib/kubelet/pods/52177db4-149c-11e5-a64b-42010af06968/volumes/kubernetes.io~secret/default-token-b6s28" }, I've also tried using a json file with v1beta3 with similar results: { "kind": "Pod", "apiVersion": "v1beta3", "metadata": { "name": "postgres" }, "spec": { "volumes": [{ "name": "pgdata", "gcePersistentDisk": { "pdName": "postgres-disk", "fsType": "ext4" } }], "containers": [ { "name": "postgres", "image": "postgres", "ports": [ { "name": "postgres-ports", "hostPort": 5432, "containerPort": 5432 } ], "env": [ { "name": "DB_USER", "value": "postgres" }, { "name": "DB_PASS", "value": "password" } ], "volumeMounts": [ { "name": "pgdata", "readOnly": false, "mountPath": "/var/lib/postgresql/data" } ] } ] } } It's quite possible I've just missed something in the doco, but any help would be appreciated with this!
If you just want to kick the tires with Kubernetes, I'd recommend installing boot2docker and then following the Running kubernetes locally via Docker getting started guide. Once you are comfortable interacting with the Kubernetes API and want a more complex local setup, you can then work on installing Vagrant. If the Vagrant instructions aren't working, you should also feel free to file a bug in the github repository.
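Once the containers from that guide are up, a quick sketch of pointing kubectl at the local cluster (this assumes the guide's default of the apiserver listening insecurely on localhost:8080):

kubectl config set-cluster local --server=http://localhost:8080
kubectl config set-context local --cluster=local
kubectl config use-context local
kubectl get nodes     # should list the single local node once everything is running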
###Instruction: This is my first attempt to install and use Kubernetes. I am trying to install an environment on Mac for developing my own apps and deploying them for test locally with Kubernetes. I am familiar with using Vagrant, VirtualBox and Docker for the same purpose. When I saw this page https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md I assumed it would be trivial. I executed these lines: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash This created a master VM and a Minion, but Kubernetes seems to have failed to start on the master. On the master /var/log/salt/master is full of python Traceback errors, like this: 2015-07-17 22:14:42,629 [cherrypy.error ][INFO ][3252] [17/Jul/2015:22:14:42] ENGINE Started monitor thread '_TimeoutMonitor'. 2015-07-17 22:14:42,736 [cherrypy.error ][ERROR ][3252] [17/Jul/2015:22:14:42] ENGINE Error in HTTP server: shutting down Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1824, in start raise socket.error(msg) error: No socket could be created Vagrant is version 1.7.3. VirtualBox is version 4.3.30 Have I made an obvious stupid mistake? ### Response: If you just want to kick the tires with Kubernetes, I'd recommend installing boot2docker and then following the Running kubernetes locally via Docker getting started guide. Once you are comfortable interacting with the Kubernetes API and want a more complex local setup, you can then work on installing Vagrant. If the Vagrant instructions aren't working, you should also feel free to file a bug in the github repository.
This is my first attempt to install and use Kubernetes. I am trying to install an environment on Mac for developing my own apps and deploying them for test locally with Kubernetes. I am familiar with using Vagrant, VirtualBox and Docker for the same purpose. When I saw this page https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md I assumed it would be trivial. I executed these lines: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash This created a master VM and a Minion, but Kubernetes seems to have failed to start on the master. On the master /var/log/salt/master is full of python Traceback errors, like this: 2015-07-17 22:14:42,629 [cherrypy.error ][INFO ][3252] [17/Jul/2015:22:14:42] ENGINE Started monitor thread '_TimeoutMonitor'. 2015-07-17 22:14:42,736 [cherrypy.error ][ERROR ][3252] [17/Jul/2015:22:14:42] ENGINE Error in HTTP server: shutting down Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1824, in start raise socket.error(msg) error: No socket could be created Vagrant is version 1.7.3. VirtualBox is version 4.3.30 Have I made an obvious stupid mistake?
In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing. I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address. What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go! If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes. As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!
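To make that concrete, here is a rough sketch (names, labels and ports are placeholders for your Meteor setup): expose the pods through a NodePort service, find the port Kubernetes picked, and open it in the firewall before wiring it into the HTTP load balancer.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: meteor-app
spec:
  type: NodePort
  selector:
    name: meteor-app          # must match the label on your Meteor pods
  ports:
  - port: 80
    targetPort: 3000          # placeholder Meteor container port
EOF
kubectl describe service meteor-app | grep -i nodeport          # note the auto-chosen port, e.g. 30xxx
gcloud compute firewall-rules create meteor-nodeport --allow tcp:30000-32767   # or restrict it to just the port shown above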
###Instruction: I am trying to set up a HTTP load balancer for my Meteor app on google cloud. I have the application set up correctly, and I know this because I can visit the IP given in the Network Load Balancer. However, when I try and set up a HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200, and pointing the health check towards that route. Here is exactly what I did, step by step: Create new instance template/group for the app. Upload image to google cloud. Create replication controller and service for the app. The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs. Then I try and create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error". ### Response: In order to use HTTP load balancing on Google Cloud with Kubernetes, you have to take a slightly different approach than for network load balancing, due to the current lack of built-in support for HTTP balancing. I suspect you created your service in step 3 with type: LoadBalancer. This won't work properly because of how the LoadBalancer type is implemented, which causes the service to be available only on the network forwarding rule's IP address, rather than on each host's IP address. What will work, however, is using type: NodePort, which will cause the service to be reachable on the automatically-chosen node port on each host's external IP address. This plays more nicely with the HTTP load balancer. You can then pass this node port to the HTTP load balancer that you create. Once you open up a firewall on the node port, you should be good to go! If you want more concrete steps, a walkthrough of how to use HTTP load balancers with Container Engine was actually recently added to GKE's documentation. The same steps should work with normal Kubernetes. As a final note, now that version 1.0 is out the door, the team is getting back to adding some missing features, including native support for L7 load balancing. We hope to make it much easier for you soon!
I am trying to set up a HTTP load balancer for my Meteor app on google cloud. I have the application set up correctly, and I know this because I can visit the IP given in the Network Load Balancer. However, when I try and set up a HTTP load balancer, the health checks always say that the instances are unhealthy (even though I know they are not). I tried including a route in my application that returns a status 200, and pointing the health check towards that route. Here is exactly what I did, step by step: Create new instance template/group for the app. Upload image to google cloud. Create replication controller and service for the app. The network load balancer was created automatically. Additionally, there were two firewall rules allowing HTTP/HTTPS traffic on all IPs. Then I try and create the HTTP load balancer. I create a backend service in the load balancer with all the VMs corresponding to the meteor app. Then I create a new global forwarding rule. No matter what, the instances are labelled "unhealthy" and the IP from the global forwarding rule returns a "Server Error".
Your pod spec file looks invalid. According to http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers, a valid multi-container pod spec should look like this:

apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}

Latest doc at http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers
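For example, if you save that spec as www-pod.yaml (the filename is just a placeholder), creating and inspecting it looks roughly like:

kubectl create -f www-pod.yaml
kubectl get pod www                 # both containers run inside this single pod
kubectl logs www nginx              # logs for one container of the pod
kubectl logs www git-monitor        # logs for the other container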
###Instruction: I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together. Currently I have tried the following configuration: { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } However when I execute kubecfg -c app.json create /pods I get the following error: F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp": null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec :0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\ ", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil ), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500} How can I modify the configuration accordingly? Running kubernetes on a vagrant vm (yungsang/coreos). ### Response: Your pod spec file looks like invalid. According to http://kubernetes.io/v1.0/docs/user-guide/walkthrough/README.html#multiple-containers, a valid multiple containers pod spec should like this apiVersion: v1 kind: Pod metadata: name: www spec: containers: - name: nginx image: nginx volumeMounts: - mountPath: /srv/www name: www-data readOnly: true - name: git-monitor image: kubernetes/git-monitor env: - name: GIT_REPO value: http://github.com/some/repo.git volumeMounts: - mountPath: /data name: www-data volumes: - name: www-data emptyDir: {} Latest doc at http://kubernetes.io/docs/user-guide/walkthrough/#multiple-containers
I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together. Currently I have tried the following configuration: { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } However when I execute kubecfg -c app.json create /pods I get the following error: F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp": null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec :0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\ ", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil ), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500} How can I modify the configuration accordingly? Running kubernetes on a vagrant vm (yungsang/coreos).
I don't yet know the fix but I know what is going wrong since it happens to me as well: OS X 10.10.3 Vagrant 1.7.4 VirtualBox 4.3.30 Kubernetes 1.0.1 When I run the default configuration of this (which creates one "master" and one "minion" VM) I see that the static IP address is not being assigned to the "eth1" interface, and I also see that the Salt API server is sitting in what appears to be an infinite retry loop because it is trying to listen on that IP address. Also, the following message happened during boot: [vagrant@kubernetes-master ~]$ dmesg | grep eth1 [ 9.321496] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready So basically, the static IP address didn't get assigned because eth1 wasn't ready when the system first booted, and Salt is waiting for it to get assigned. I could fix this after boot by sshing to the box using "vagrant ssh" and running the command: sudo /etc/init.d/network restart on each host. This "fixes" eth1 by assigning the static IP address, and after that Salt begins to do its thing, installs Docker, boots various containers, and so on. What I don't know is how to make this work every time without manual intervention. It appears to be some sort of a race condition between Vagrant and VirtualBox.
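Until the race is fixed, the workaround can be scripted from the host after each vagrant up; something like the following sketch (the VM names master and minion-1 are what the Kubernetes Vagrantfile used for me -- adjust if yours differ):

vagrant ssh master -c 'sudo /etc/init.d/network restart'
vagrant ssh minion-1 -c 'sudo /etc/init.d/network restart'
# give Salt a few minutes afterwards to install Docker and start the Kubernetes containers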
###Instruction: This is my first attempt to install and use Kubernetes. I am trying to install an environment on Mac for developing my own apps and deploying them for test locally with Kubernetes. I am familiar with using Vagrant, VirtualBox and Docker for the same purpose. When I saw this page https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md I assumed it would be trivial. I executed these lines: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash This created a master VM and a Minion, but Kubernetes seems to have failed to start on the master. On the master /var/log/salt/master is full of python Traceback errors, like this: 2015-07-17 22:14:42,629 [cherrypy.error ][INFO ][3252] [17/Jul/2015:22:14:42] ENGINE Started monitor thread '_TimeoutMonitor'. 2015-07-17 22:14:42,736 [cherrypy.error ][ERROR ][3252] [17/Jul/2015:22:14:42] ENGINE Error in HTTP server: shutting down Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1824, in start raise socket.error(msg) error: No socket could be created Vagrant is version 1.7.3. VirtualBox is version 4.3.30 Have I made an obvious stupid mistake? ### Response: I don't yet know the fix but I know what is going wrong since it happens to me as well: OS X 10.10.3 Vagrant 1.7.4 VirtualBox 4.3.30 Kubernetes 1.0.1 When I run the default configuration of this (which creates one "master" and one "minion" VM) I see that the static IP address is not being assigned to the "eth1" interface, and I also see that the Salt API server is sitting in what appears to be an infinite retry loop because it is trying to listen on that IP address. Also, the following message happened during boot: [vagrant@kubernetes-master ~]$ dmesg | grep eth1 [ 9.321496] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready So basically, the static IP address didn't get assigned because eth1 wasn't ready when the system first booted, and Salt is waiting for it to get assigned. I could fix this after boot by sshing to the box using "vagrant ssh" and running the command: sudo /etc/init.d/network restart on each host. This "fixes" eth1 by assigning the static IP address, and after that Salt begins to do its thing, installs Docker, boots various containers, and so on. What I don't know is how to make this work every time without manual intervention. It appears to be some sort of a race condition between Vagrant and VirtualBox.
This is my first attempt to install and use Kubernetes. I am trying to install an environment on Mac for developing my own apps and deploying them for test locally with Kubernetes. I am familiar with using Vagrant, VirtualBox and Docker for the same purpose. When I saw this page https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md I assumed it would be trivial. I executed these lines: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash This created a master VM and a Minion, but Kubernetes seems to have failed to start on the master. On the master /var/log/salt/master is full of python Traceback errors, like this: 2015-07-17 22:14:42,629 [cherrypy.error ][INFO ][3252] [17/Jul/2015:22:14:42] ENGINE Started monitor thread '_TimeoutMonitor'. 2015-07-17 22:14:42,736 [cherrypy.error ][ERROR ][3252] [17/Jul/2015:22:14:42] ENGINE Error in HTTP server: shutting down Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/cherrypy/process/servers.py", line 187, in _start_http_thread self.httpserver.start() File "/usr/lib/python2.7/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1824, in start raise socket.error(msg) error: No socket could be created Vagrant is version 1.7.3. VirtualBox is version 4.3.30 Have I made an obvious stupid mistake?
It's worth noting that you can now both autoscale and resize a running Google Container Engine cluster by using instance groups - https://cloud.google.com/compute/docs/instance-groups/
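For example, resizing boils down to resizing the managed instance group that backs the cluster's nodes; a rough sketch (the group name and zone are placeholders -- list the groups first to find the one GKE created for your cluster):

gcloud compute instance-groups managed list
gcloud compute instance-groups managed resize gke-mycluster-group --size 5 --zone europe-west1-d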
###Instruction: I'm very interesting in the new Google Cloud Service: Google Container Engine, namely in be a able to write systems that can scale using containers' properties. I saw the StackOverflow questions: Autoscaling in Google Container Engine Increasing the cluster size in Google Container Engine And I understood that the auto-scale (and other features) are planned, however, I didn't see any release dates. When are the referred auto-scale features/integrations be released/available? When will the Google Container Engine reach Beta (leave Alpha)? Does Google Container Engine have a roadmap with release dates that can be consulted? ### Response: It's worth noting that you can now do both Autoscaling and Resize a running Google Container Engine Cluster by using Instance Groups - https://cloud.google.com/compute/docs/instance-groups/
I'm very interesting in the new Google Cloud Service: Google Container Engine, namely in be a able to write systems that can scale using containers' properties. I saw the StackOverflow questions: Autoscaling in Google Container Engine Increasing the cluster size in Google Container Engine And I understood that the auto-scale (and other features) are planned, however, I didn't see any release dates. When are the referred auto-scale features/integrations be released/available? When will the Google Container Engine reach Beta (leave Alpha)? Does Google Container Engine have a roadmap with release dates that can be consulted?
Just for the record, after discussion with Alex, trial and error, and a binary search for a good number, what worked for me was setting the CPU resource limit in the Pod JSON to:

"resources": {
  "limits": {
    "cpu": "490m"
  }
}

I have no idea how and why this particular value influences the Kubernetes scheduler, but it keeps nodes churning through the jobs, with exactly one pod per node running at any given moment.
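In context, that fragment sits inside each job pod's container spec; a minimal sketch of such a pod (the pod name and image are hypothetical placeholders) would look something like:

cat <<EOF | kubectl create -f -
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "montecarlo-job-1" },
  "spec": {
    "restartPolicy": "Never",
    "containers": [
      {
        "name": "worker",
        "image": "local/montecarlo",
        "resources": { "limits": { "cpu": "490m" } }
      }
    ]
  }
}
EOF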
###Instruction: Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. No Replication Controllers, just multiple pods with NoRestart policy delivering computed payloads to our server. So far so good. Problem is, I have cluster with N nodes/minions, and have M jobs to compute, where M > N. So I would like to fire M pods at once and tell Kubernetes to schedule it in such a way so that only N are running at a given time, and everything else is kept in Pending state. As soon as one pod is done, next is scheduled to run moving from Pending to Running and so on and so forth till all M pods are done. Is it possible to do so? ### Response: Just for the record, after discussion with Alex, trial and error and a binary search for a good number, what worked for me was setting the CPU resource limit in the Pod JSON to: "resources": { "limits": { "cpu": "490m" } } I have no idea how and why this particular value influences the Kubernetes scheduler, but it keeps nodes churning through the jobs, with exactly one pod per node running at any given moment.
Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. No Replication Controllers, just multiple pods with NoRestart policy delivering computed payloads to our server. So far so good. Problem is, I have cluster with N nodes/minions, and have M jobs to compute, where M > N. So I would like to fire M pods at once and tell Kubernetes to schedule it in such a way so that only N are running at a given time, and everything else is kept in Pending state. As soon as one pod is done, next is scheduled to run moving from Pending to Running and so on and so forth till all M pods are done. Is it possible to do so?
tl;dr: gcloud container get-credentials --cluster=CLUSTER_ID --zone=YOURZONE

A little background: the kubectl tool is developed by Google but isn't actually integrated into Google Cloud directly; gcloud just helps you fetch a compatible version when you tell it to install the component. If you're getting Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused, it is likely because kubectl is not configured at all, or is misconfigured. Without configuration it assumes you have Kubernetes running locally, which you don't in this case since everything is on Google Cloud (hence the cryptic error). You can check whether kubectl is misconfigured by running kubectl config view. If it's configured correctly you should see a few entries under clusters, with IP addresses, and under users you should see a user for each project, etc. If you see nothing of the sort (i.e. empty clusters and empty users) then it is misconfigured; you will also hit cryptic issues if you don't see entries for the specific cluster you are trying to work on. Annoyingly, a lot of gcloud commands will silently auto-configure it for you, so if you follow something like a "hello WordPress" tutorial it will look like you don't have to do this and that kubectl somehow communicates with gcloud, but nothing of the sort happens. It's also very easy to lose that configuration. To tell gcloud to write the kubectl config, run:

gcloud container get-credentials --cluster=CLUSTER_ID --zone=YOURZONE

To find the cluster ID, run gcloud container clusters list. The zone is "europe-west1-d" or whatever you've chosen.
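Once that has run, a quick way to confirm kubectl is pointed at the right place (any read-only command will do):

kubectl config view     # should now show your GKE cluster, a user and a current context
kubectl get pods        # should query the cluster's apiserver instead of localhost:8080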
###Instruction: I am getting the following error when trying to run kubectl locally. error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused I know this relates to the kubectl config but I'm at a loss in how to resolve it. 2 days ago, I was experimenting with GKE and did set the config to point to GCE. I tried deleting this config file and then getting Vagrant with CoreOS locally. This vagrant up throws a similar error complaining about not being able to connect. What is the appropriate way to instrument kubectl so it can connect to the API and return information? ### Response: tl;dr gcloud container get-credentials --cluster=CLUSTER_ID --zone=YOURZONE So a little background: the kubectl tool is developed by google but isn't actually integrated into google cloud directly, the google cloud just helps you get a compatible version with it when you tell it to install the component. If you're getting the Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused it is likely due to the kubectl tool not being configured at all or misconfigured. What I believe it's trying to do is assuming you have kubernetes somehow setup locally only, which you don't in this case since it's all on the google cloud (hence the cryptic error). You can verify your kubectl is misconfigured by running kubectl config view. If it's correctly configured you should see things like a few entries in cluster, with ip addresses, and in users you should see a user for each project, etc. If you see nothing of the sort (ie. empty clusters, and empty users) then you are misconfigured; you will also encounter cryptic issues if you dont see entries for the specific cluster you are trying to work on. Annoyingly a lot of gcloud commands will silently auto-configure it for you, so if you follow something like a hello wordpress tutorial it will look like you dont have to do this and that somehow kubectl communicates with gcloud, but nothing of the sort happens. It's also very easy to lose that configuration. To tell gcloud to give you the kubectl config run the following: gcloud container get-credentials --cluster=CLUSTER_ID --zone=YOURZONE For cluster id run gcloud container clusters list Zone is "europe-west1-d" or whatever you've chosen.
I am getting the following error when trying to run kubectl locally. error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused I know this relates to the kubectl config but I'm at a loss in how to resolve it. 2 days ago, I was experimenting with GKE and did set the config to point to GCE. I tried deleting this config file and then getting Vagrant with CoreOS locally. This vagrant up throws a similar error complaining about not being able to connect. What is the appropriate way to instrument kubectl so it can connect to the API and return information?
Containers in the same pod share the same network namespace (localhost), so you don't need to link them -- just have one container connect to the other at localhost:containerPort.
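For example, here is a sketch of a two-container pod where an app container reaches a Redis container over localhost (the app image and its REDIS_ADDR environment variable are placeholders):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-redis
spec:
  containers:
  - name: redis
    image: redis
    ports:
    - containerPort: 6379
  - name: app
    image: local/myapp              # placeholder image
    env:
    - name: REDIS_ADDR
      value: "localhost:6379"       # same pod, same network namespace -- no links required
EOF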
###Instruction: I am trying to see if there are any example to create a Kubernetes POD which starts 2-3 containers and these containers are linked with each other but couldn't find any. Does anybody tried linking containers using Kubernetes config. ### Response: Containers in the same pod share the same network namespace (localhost), so you don't need to link them -- just have one container connect to the other at localhost:containerPort.
I am trying to see if there are any example to create a Kubernetes POD which starts 2-3 containers and these containers are linked with each other but couldn't find any. Does anybody tried linking containers using Kubernetes config.
Yes, you can have them all ask for a resource of which there's only one on each node; the scheduler then won't be able to schedule more than N at a time. The most common way to do this is to have each pod ask for a hostPort in the ports section of its containers spec. However, I can't say I'm completely sure why you would want to limit the system to one such pod per node. If there are enough resources available to run multiple at a time on each node, it should speed up your job to let them run.
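A sketch of what that looks like in a pod spec (the pod name, image and port number are placeholders -- any fixed hostPort works, as long as every job pod asks for the same one):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: montecarlo-job-1
spec:
  restartPolicy: Never
  containers:
  - name: worker
    image: local/montecarlo
    ports:
    - containerPort: 8080
      hostPort: 8080            # only one pod per node can bind this host port, so at most N run at once
EOF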
###Instruction: Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. No Replication Controllers, just multiple pods with NoRestart policy delivering computed payloads to our server. So far so good. Problem is, I have cluster with N nodes/minions, and have M jobs to compute, where M > N. So I would like to fire M pods at once and tell Kubernetes to schedule it in such a way so that only N are running at a given time, and everything else is kept in Pending state. As soon as one pod is done, next is scheduled to run moving from Pending to Running and so on and so forth till all M pods are done. Is it possible to do so? ### Response: Yes, you can have them all ask for a resource of which there's only one on each node, then the scheduler won't be able to schedule more than N at a time. The most common way to do this is to have each pod ask for a hostPort in the ports section of its containers spec. However, I can't say I'm completely sure why you would want to limit the system to one such pod per node. If there are enough resources available to run multiple at a time on each node, it should speed up your job to let them run.
Batch computations, Monte Carlo, using Docker image, multiple jobs running on Google cloud and managed by Kubernetes. No Replication Controllers, just multiple pods with NoRestart policy delivering computed payloads to our server. So far so good. Problem is, I have cluster with N nodes/minions, and have M jobs to compute, where M > N. So I would like to fire M pods at once and tell Kubernetes to schedule it in such a way so that only N are running at a given time, and everything else is kept in Pending state. As soon as one pod is done, next is scheduled to run moving from Pending to Running and so on and so forth till all M pods are done. Is it possible to do so?
You should use a readiness check (http://kubernetes.io/v1.0/docs/user-guide/production-pods.html#liveness-and-readiness-probes-aka-health-checks) that transitions the Pod to "not ready" after you receive a SIGTERM. Once that happens, the service will remove the Pod from serving prior to the delete. (Without a readiness check, the Service simply doesn't know that the pod doesn't exist until it is actually deleted.) You may also want to use a PreStop hook that sets readiness to false and then drains all existing requests. PreStop hooks are called synchronously prior to a Pod being deleted, and they are described here: https://kubernetes-v1-4.github.io/docs/user-guide/production-pods/#lifecycle-hooks-and-termination-notice
###Instruction: As part of scaling pods in kubernetes I want to ensure I gracefully serve my http connections before shutting down. To that extent I have implemented this code in go: package main import ( "fmt" "io" "net/http" "os" "os/signal" "syscall" "github.com/braintree/manners" ) func main() { shutdown := make(chan int) //create a notification channel to shutdown sigChan := make(chan os.Signal, 1) //start the http server http.HandleFunc("/", hello) server := manners.NewWithServer(&http.Server{Addr: ":80", Handler: nil}) go func() { server.ListenAndServe() shutdown <- 1 }() //register for interupt (Ctrl+C) and SIGTERM (docker) signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM) go func() { <-sigChan fmt.Println("Shutting down...") server.Close() }() <-shutdown } func hello(w http.ResponseWriter, r *http.Request) { // time.Sleep(3000 * time.Millisecond) io.WriteString(w, "Hello world!") } This looks out for the docker SIGTERM and gracefully shuts down after existing requests have been served. When I run this container in kubernetes with 10 instances I can scale up and down without incident, as long as I don't scale down to a single instance. When I scale to a single instance I see a short set of http errors, then all looks fine again. I find it strange as in scaling I would assume the proxy is updated first, then containers are shut down and the code above would allow requests to be served out. In my current setup I am running 2 nodes, maybe the issue is when scaling drops below the number of nodes and there is some sort of timing issue with etcd updates? Any insight into what is going on here would be really useful ### Response: You should use a readiness check (http://kubernetes.io/v1.0/docs/user-guide/production-pods.html#liveness-and-readiness-probes-aka-health-checks) that transitions the Pod to "not ready" after you receive a SIGTERM Once that happens, the service will remove the Pod from serving, prior to the delete. (without a readiness check the Service simply doesn't know that the pod doesn't exist, until it is actually deleted) You may also want to use a PreStop hook that sets readiness to false, and then drains all existing requests. PreStop hooks are called synchronously prior to a Pod being deleted and they are described here: https://kubernetes-v1-4.github.io/docs/user-guide/production-pods/#lifecycle-hooks-and-termination-notice
As part of scaling pods in kubernetes I want to ensure I gracefully serve my http connections before shutting down. To that extent I have implemented this code in go: package main import ( "fmt" "io" "net/http" "os" "os/signal" "syscall" "github.com/braintree/manners" ) func main() { shutdown := make(chan int) //create a notification channel to shutdown sigChan := make(chan os.Signal, 1) //start the http server http.HandleFunc("/", hello) server := manners.NewWithServer(&http.Server{Addr: ":80", Handler: nil}) go func() { server.ListenAndServe() shutdown <- 1 }() //register for interupt (Ctrl+C) and SIGTERM (docker) signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM) go func() { <-sigChan fmt.Println("Shutting down...") server.Close() }() <-shutdown } func hello(w http.ResponseWriter, r *http.Request) { // time.Sleep(3000 * time.Millisecond) io.WriteString(w, "Hello world!") } This looks out for the docker SIGTERM and gracefully shuts down after existing requests have been served. When I run this container in kubernetes with 10 instances I can scale up and down without incident, as long as I don't scale down to a single instance. When I scale to a single instance I see a short set of http errors, then all looks fine again. I find it strange as in scaling I would assume the proxy is updated first, then containers are shut down and the code above would allow requests to be served out. In my current setup I am running 2 nodes, maybe the issue is when scaling drops below the number of nodes and there is some sort of timing issue with etcd updates? Any insight into what is going on here would be really useful
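A hedged sketch of how the readiness probe and PreStop hook from the answer might be wired into the pod spec for the Go server above — the /healthz path, the image name, and the drain command are assumptions for illustration, not part of the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-web
spec:
  containers:
  - name: web
    image: example/graceful-go-server   # hypothetical image built from the Go program above
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz                  # assumed endpoint that starts failing once SIGTERM is received
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 1
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 5"]   # illustrative pause to let in-flight requests drain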
Packer is a tool that was initially created to build AWS AMIs, or base VM images, in AWS. It has been extended to work with containers, a lot of different virtualization software such as VMware and KVM/QEMU, and other cloud/IaaS providers like DigitalOcean. It was developed by Hashicorp but it's open source. Dockramp is an alternative to using docker build: it uses the same Dockerfile that docker build would use, but with some additional enhancements. For example, it can accept heredocs in the RUN command for multi-line bash commands. Docker/LXC is fairly fast, but the main advantage of building images (and this applies to virtualization images too) is that you can have a fully installed application or application stack from the get-go. This tends to work better in autoscaling environments because it takes less time for your application to start servicing traffic, whether in a container or a VM. When you create your docker image on the fly with your Dockerfile, it needs to run a series of steps before becoming 'ready', hence it might take longer to start servicing traffic. Hope it helps.
###Instruction: Can someone please explain the advantages/disadvantages of using the following when building container images, rather than using a Dockerfile? Packer - tool for creating machine and container images for multiple platforms from a single source configuration Dockramp - A Client-driven Docker Container Image Builder ### Response: Packer is a tool that was initially created to build AWS AMIs, or base VM images, in AWS. It has been extended to work with containers, a lot of different virtualization software such as VMware and KVM/QEMU, and other cloud/IaaS providers like DigitalOcean. It was developed by Hashicorp but it's open source. Dockramp is an alternative to using docker build: it uses the same Dockerfile that docker build would use, but with some additional enhancements. For example, it can accept heredocs in the RUN command for multi-line bash commands. Docker/LXC is fairly fast, but the main advantage of building images (and this applies to virtualization images too) is that you can have a fully installed application or application stack from the get-go. This tends to work better in autoscaling environments because it takes less time for your application to start servicing traffic, whether in a container or a VM. When you create your docker image on the fly with your Dockerfile, it needs to run a series of steps before becoming 'ready', hence it might take longer to start servicing traffic. Hope it helps.
Can someone please explain the advantages/disadvantages of using the following when building container images, rather than using a Dockerfile? Packer - tool for creating machine and container images for multiple platforms from a single source configuration Dockramp - A Client-driven Docker Container Image Builder
I recently stumbled upon the same issue. Have a look at BLOCK_DEVICE_MAPPINGS in kubernetes/cluster/aws/util.sh. You can modify it to have something more appropriate for a EBS-only minion. For example: [{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}] AWS docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
###Instruction: I successfully deployed Kubernetes on AWS using "getting started on AWS ec2 guide" (http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html), but the disk size of all the minions (kubernetes hosts) is 8gb. I would like to increase the disk size, but I haven't found a way to do it. I can change the VM size by setting MINION_SIZE (e.g. export MINION_SIZE=m3.medium) prior to installing, but the disk size is still 8gb. From the Kubernetes install instructions for other cloud providers there's an option to set MINION_DISK_SIZE to set the disk size. I tried that with AWS ec2 installation, and the variable is ignored. I also poked around the config files, but I didn't see anything obvious. Any suggestions on how to set the disk size for minions when installing Kubernetes on AWS ec2? ### Response: I recently stumbled upon the same issue. Have a look at BLOCK_DEVICE_MAPPINGS in kubernetes/cluster/aws/util.sh. You can modify it to have something more appropriate for a EBS-only minion. For example: [{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":80}}] AWS docs: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html
I successfully deployed Kubernetes on AWS using "getting started on AWS ec2 guide" (http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html), but the disk size of all the minions (kubernetes hosts) is 8gb. I would like to increase the disk size, but I haven't found a way to do it. I can change the VM size by setting MINION_SIZE (e.g. export MINION_SIZE=m3.medium) prior to installing, but the disk size is still 8gb. From the Kubernetes install instructions for other cloud providers there's an option to set MINION_DISK_SIZE to set the disk size. I tried that with AWS ec2 installation, and the variable is ignored. I also poked around the config files, but I didn't see anything obvious. Any suggestions on how to set the disk size for minions when installing Kubernetes on AWS ec2?
The UI that runs by default in Kubernetes doesn't currently support modifying anything running in the cluster, and there aren't any other official UIs that do. It's something that we'd definitely like to improve in the future. However, the fabric8 folks have put together a console that does allow you to change what's running in a Kubernetes cluster in addition to viewing it. I haven't tried it myself so I can't vouch for it, but it may be worth checking out. There's a video demo here, with documentation here.
###Instruction: Is there any known approach with which we can manage the Kubernetes cluster from a dashboard kind of UI? Kubernetes by default comes up with a UI which is good enough to view the details of running pods, services etc. But what is the approach if we need to modify some properties of Kubernetes cluster (like increasing the replicas on RC, change auto-scaling policies etc.) from UI rather than issuing kubectl commands? I had heard that Kismatic is working towards achieving this same goal, but i am not quite sure how to configure from their github projects. I am using Google cloud for my projects. ### Response: The UI that runs by default in Kubernetes doesn't currently support modifying anything running in the cluster, and there aren't any other official UIs that do. It's something that we'd definitely like to improve in the future. However, the fabric8 folks have put together a console that does allow you to change what's running in a Kubernetes cluster in addition to viewing it. I haven't tried it myself so I can't vouch for it, but it may be worth checking out. There's a video demo here, with documentation here.
Is there any known approach with which we can manage the Kubernetes cluster from a dashboard kind of UI? Kubernetes by default comes up with a UI which is good enough to view the details of running pods, services etc. But what is the approach if we need to modify some properties of Kubernetes cluster (like increasing the replicas on RC, change auto-scaling policies etc.) from UI rather than issuing kubectl commands? I had heard that Kismatic is working towards achieving this same goal, but i am not quite sure how to configure from their github projects. I am using Google cloud for my projects.
If there isn't a Heapster pod running in your cluster, then your cluster was created before we started enabling cluster monitoring by default. We are working on adding a way for users to retroactively turn on monitoring, but if you want metrics pushed into stack driver today you will need to create a new cluster (launching Heapster yourself isn't sufficient for the metrics to get collected).
###Instruction: I am running a Kubernetes cluster on Google container engine. My metrics are not getting pushed to Stackdriver by default. Do I need to start Heapster service explicitly or is it automatically managed by container engine itself? ### Response: If there isn't a Heapster pod running in your cluster, then your cluster was created before we started enabling cluster monitoring by default. We are working on adding a way for users to retroactively turn on monitoring, but if you want metrics pushed into stack driver today you will need to create a new cluster (launching Heapster yourself isn't sufficient for the metrics to get collected).
I am running a Kubernetes cluster on Google container engine. My metrics are not getting pushed to Stackdriver by default. Do I need to start Heapster service explicitly or is it automatically managed by container engine itself?
There is a patch out for review at the moment (#12384) that makes a configuration option to mark the nodes in the instance group as preemptible. If you are willing to build from head, this should be available as a configuration option in the next couple of days. In the meantime, you can see from the patch how easy it is to modify the GCE startup scripts to make your VMs preemptible.
###Instruction: I have a Kubernetes computation cluster running on GCE and am reasonably happy so far. I know that if I create a Kubernetes cluster, I'll see the nodes as VM instances and the cluster as an instance group. I would like to do it the other way around - create the instances/group and make a Kubernetes cluster out of them so they can be managed by Kubernetes. The reason I want to do so is to try to make the nodes preemptible, which might better fit my workload. So the question is - how to get a Kubernetes cluster with preemptible nodes? I can do either one or the other right now, but not both together. ### Response: There is a patch out for review at the moment (#12384) that makes a configuration option to mark the nodes in the instance group as preemptible. If you are willing to build from head, this should be available as a configuration option in the next couple of days. In the meantime, you can see from the patch how easy it is to modify the GCE startup scripts to make your VMs preemptible.
I have a Kubernetes computation cluster running on GCE and am reasonably happy so far. I know that if I create a Kubernetes cluster, I'll see the nodes as VM instances and the cluster as an instance group. I would like to do it the other way around - create the instances/group and make a Kubernetes cluster out of them so they can be managed by Kubernetes. The reason I want to do so is to try to make the nodes preemptible, which might better fit my workload. So the question is - how to get a Kubernetes cluster with preemptible nodes? I can do either one or the other right now, but not both together.
In your service definition, set its type field to LoadBalancer, and kubernetes will automatically create an AWS Elastic Load Balancer for you if you're running on AWS. This feature should work on GCE/GKE, AWS, and OpenStack. For an example, check out the guestbook-go example.
###Instruction: Kubernetes creates a load balancer for each service automatically in GCE. How can I manage something similar on AWS? A Kubernetes service basically uses kube-proxy to handle the internal traffic. But that kube-proxy IP does not have access to the external network. Is there a way to accomplish this? ### Response: In your service definition, set its type field to LoadBalancer, and kubernetes will automatically create an AWS Elastic Load Balancer for you if you're running on AWS. This feature should work on GCE/GKE, AWS, and OpenStack. For an example, check out the guestbook-go example.
Kubernetes creates a load balancer for each service automatically in GCE. How can I manage something similar on AWS? A Kubernetes service basically uses kube-proxy to handle the internal traffic. But that kube-proxy IP does not have access to the external network. Is there a way to accomplish this?
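A minimal sketch of the service definition described in the answer — the selector label and ports are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # tells kubernetes to provision an ELB when running on AWS
  selector:
    app: my-app             # assumed label on the backing pods
  ports:
  - port: 80                # port exposed by the load balancer
    targetPort: 8080        # assumed container port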
The probe frequency is controlled by the sync-frequency command line flag on the Kubelet, which defaults to syncing pod statuses once every 10 seconds. I'm not aware of any way to customize the number of failed probes needed before a pod is considered not-ready to serve traffic. If either of these features is important to you, feel free to open an issue explaining what your use case is or send us a PR! :)
###Instruction: Is the probe frequency customizable for liveness/readiness probes? Also, how many times does the readiness probe have to fail before the pod is removed from the service load balancer? Is that customizable? ### Response: The probe frequency is controlled by the sync-frequency command line flag on the Kubelet, which defaults to syncing pod statuses once every 10 seconds. I'm not aware of any way to customize the number of failed probes needed before a pod is considered not-ready to serve traffic. If either of these features is important to you, feel free to open an issue explaining what your use case is or send us a PR! :)
Is the probe frequency customizable for liveness/readiness probes? Also, how many times does the readiness probe have to fail before the pod is removed from the service load balancer? Is that customizable?
What Heapster does with the metrics depends on its configuration. When running as part of GKE, the metrics aren't exported as "custom" metrics, but rather as official GKE service metrics. The feature is still in an experimental, soft-launch state, but you should be able to access them at app.google.stackdriver.com/gke
###Instruction: A newly created Kubernetes cluster on GKE is not pushing its metrics to Stackdriver. Output of kubectl cluster-info is: Kubernetes master is running at https://XXX.XXX.XXX.XXX KubeDNS is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-dns KubeUI is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-ui Heapster is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster When I try to create a dashboard on Stackdriver with 'Custom Metrics', it says 'No Match Found'. Metrics were supposed to be present at this location with 'kubernetes.io' prefix according to Heapster documentation. I have also enabled Cloud Monitoring API with Read Write permission while creating cluster. Is it required for pushing cluster metrics? ### Response: What Heapster does with the metrics depends on its configuration. When running as part of GKE, the metrics aren't exported as "custom" metrics, but rather as official GKE service metrics. The feature is still in an experimental, soft-launch state, but you should be able to access them at app.google.stackdriver.com/gke
A newly created Kubernetes cluster on GKE is not pushing its metrics to Stackdriver. Output of kubectl cluster-info is: Kubernetes master is running at https://XXX.XXX.XXX.XXX KubeDNS is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-dns KubeUI is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/kube-ui Heapster is running at https://XXX.XXX.XXX.XXX/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster When I try to create a dashboard on Stackdriver with 'Custom Metrics', it says 'No Match Found'. Metrics were supposed to be present at this location with 'kubernetes.io' prefix according to Heapster documentation. I have also enabled Cloud Monitoring API with Read Write permission while creating cluster. Is it required for pushing cluster metrics?
Only one secret is allowed per secret volume
###Instruction: I'm trying to mount several secrets in the same secret-volume for a kubernetes pod blueprint, but it's not accepted. Tried and can't be created : "spec": { "volumes": [ { "name": "secret-volume", "secret": [ { "secretName": "secret1" }, { "secretName": "secret2" }, { "secretName": "secret3" } ] } Like this it will just override them: "volumes": [ { "name": "secret-volume", "secret": { "secretName": "secret1" }, "secret": { "secretName": "secret2" }, "secret": { "secretName": "secret3" } } ] Is there any way to mount a list of secrets in the same volume ? Or only 1 secret / volume at the moment? Thanks ### Response: Only one secret is allowed per secret volume
I'm trying to mount several secrets in the same secret-volume for a kubernetes pod blueprint, but it's not accepted. Tried and can't be created : "spec": { "volumes": [ { "name": "secret-volume", "secret": [ { "secretName": "secret1" }, { "secretName": "secret2" }, { "secretName": "secret3" } ] } Like this it will just override them: "volumes": [ { "name": "secret-volume", "secret": { "secretName": "secret1" }, "secret": { "secretName": "secret2" }, "secret": { "secretName": "secret3" } } ] Is there any way to mount a list of secrets in the same volume ? Or only 1 secret / volume at the moment? Thanks
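Since only one secret fits per secret volume, a workaround sketch is to mount each secret as its own volume; the mount paths and container image below are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-secrets
spec:
  containers:
  - name: app
    image: example/app            # hypothetical image
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secrets/one
    - name: secret-volume-2
      mountPath: /etc/secrets/two
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret1
  - name: secret-volume-2
    secret:
      secretName: secret2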
I can think of a couple of ways to access services across multiple clusters connected to the same GCP private network: Bastion route into k2 for all of k2's services: Find the SERVICE_CLUSTER_IP_RANGE for the k2 cluster. On GKE, it will be the servicesIpv4Cidr field in the output of cluster describe: $ gcloud beta container clusters describe k2 ... servicesIpv4Cidr: 10.143.240.0/20 ... Add an advanced routing rule to take traffic destined for that range and route it to a node in k2: $ gcloud compute routes create --destination-range 10.143.240.0/20 --next-hop-instance k2-node-0 This will cause k2-node-0 to proxy requests from the private network for any of k2's services. This has the obvious downside of giving k2-node-0 extra work, but it is simple. Install k2's kube-proxy on all nodes in k1. Take a look at the currently running kube-proxy on any node in k2: $ ps aux | grep kube-proxy ... /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2 Copy k2's kubeconfig file to each node in k1 (say /var/lib/kube-proxy/kubeconfig-v2) and start a second kube-proxy on each node: $ /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247 Now, each node in k1 handles proxying to k2 locally. A little tougher to set up, but has better scaling properties. As you can see, neither solution is all that elegant. Discussions are happening about how this type of setup should ideally work in Kubernetes. You can take a look at the Cluster Federation proposal doc (specifically the Cross Cluster Service Discovery section), and join the discussion by opening up issues/sending PRs.
###Instruction: I have two service, S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements. Service S1 needs to talk to S2. I don't want to expose Public IP for S2 due to security reasons. Using NodePorts on K2 cluster's compute instances with network load-balancing takes the flexibility out as I would have to add/remove K2's compute instances in target pool each time a node is added/removed in K2. Is there something like "service-selector" for automatically updating target-pool? If not, is there any other better approach for this use-case? ### Response: I can think of a couple of ways to access services across multiple clusters connected to the same GCP private network: Bastion route into k2 for all of k2's services: Find the SERVICE_CLUSTER_IP_RANGE for the k2 cluster. On GKE, it will be the servicesIpv4Cidr field in the output of cluster describe: $ gcloud beta container clusters describe k2 ... servicesIpv4Cidr: 10.143.240.0/20 ... Add an advanced routing rule to take traffic destined for that range and route it to a node in k2: $ gcloud compute routes create --destination-range 10.143.240.0/20 --next-hop-instance k2-node-0 This will cause k2-node-0 to proxy requests from the private network for any of k2's services. This has the obvious downside of giving k2-node-0 extra work, but it is simple. Install k2's kube-proxy on all nodes in k1. Take a look at the currently running kube-proxy on any node in k2: $ ps aux | grep kube-proxy ... /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2 Copy k2's kubeconfig file to each node in k1 (say /var/lib/kube-proxy/kubeconfig-v2) and start a second kube-proxy on each node: $ /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247 Now, each node in k1 handles proxying to k2 locally. A little tougher to set up, but has better scaling properties. As you can see, neither solution is all that elegant. Discussions are happening about how this type of setup should ideally work in Kubernetes. You can take a look at the Cluster Federation proposal doc (specifically the Cross Cluster Service Discovery section), and join the discussion by opening up issues/sending PRs.
I have two service, S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements. Service S1 needs to talk to S2. I don't want to expose Public IP for S2 due to security reasons. Using NodePorts on K2 cluster's compute instances with network load-balancing takes the flexibility out as I would have to add/remove K2's compute instances in target pool each time a node is added/removed in K2. Is there something like "service-selector" for automatically updating target-pool? If not, is there any other better approach for this use-case?
The ServiceAccount admission controller prevents pods from being created until their service account in their namespace is initialized. If the controller-manager is started with the appropriate arguments, it will automatically populate namespaces with a default service account, and auto-create the API token for that service account. It looks like that guide needs to be updated with the information from this comment: https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691
###Instruction: I followed the official Kubernetes installation guide to install Kubernetes on Fedora 22 servers. Everything worked out for me during the installation. After the installation, I could see all my nodes were up and running and connected to the master. However, it kept failing when I tried to create a simple pod according to the 101 guide. $ create -f pod-nginx.yaml Error from server: error when creating "pod-nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account Do I need to create an API token? If yes, how? I googled the issue, but without any helpful results. It looks like I am the only one hitting this issue on the planet. Does anyone have ideas on this? ### Response: The ServiceAccount admission controller prevents pods from being created until their service account in their namespace is initialized. If the controller-manager is started with the appropriate arguments, it will automatically populate namespaces with a default service account, and auto-create the API token for that service account. It looks like that guide needs to be updated with the information from this comment: https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691
I followed the official Kubernetes installation guide to install Kubernetes on Fedora 22 servers. Everything worked out for me during the installation. After the installation, I could see all my nodes were up and running and connected to the master. However, it kept failing when I tried to create a simple pod according to the 101 guide. $ create -f pod-nginx.yaml Error from server: error when creating "pod-nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account Do I need to create an API token? If yes, how? I googled the issue, but without any helpful results. It looks like I am the only one hitting this issue on the planet. Does anyone have ideas on this?
After manually removing the forwarding rules under "Networking->Load Balancing->Network Load Balancing" (or you can use gcloud compute forwarding-rules delete), I was able to get public IPs again. It seems that somehow the forwarding rules weren't deleted and the limit was reached. It is strange, as when I ran kubectl stop service it removed the forwarding rule for me.
###Instruction: I ran into some trouble getting an external ip address after posting the following json object (variables excluded): $json= '{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "" }, "spec": { "ports": [{ "port": 80, "targetPort": 80 }], "selector": { "app": "" }, "type": "LoadBalancer" } }'; The service is created but no external ip is ever given. Unable to determine where the issue lay, I proceeded to install a clean copy of kubernetes (and the cluster it is defined to install) using the following command provided in the documentation (V1 kubernetes/examples/simple-nginx.md): curl -sS https://get.k8s.io | bash This of course set things up automatically. I then ran the following commands to test if the LoadBalancer function was working: kubectl run my-nginx --image=nginx --replicas=2 --port=80 After running kubectl get pods to confirm that they were ready, I exposed the service: kubectl expose rc my-nginx --port=80 --type=LoadBalancer I then ran kubectl get service for the past few minutes, and no public ip is being provided.. That cant be right? EDIT kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP my-nginx run=my-nginx run=my-nginx 10.0.136.163 80/TCP kubectl get service my-nginx -o yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2015-08-11T11:44:02Z labels: run: my-nginx name: my-nginx namespace: default resourceVersion: "1795" selfLink: /api/v1/namespaces/default/services/my-nginx uid: 434751be-401e-11e5-a219-42010af0da43 spec: clusterIP: 10.x.xxx.xxx ports: - nodePort: 31146 port: 80 protocol: TCP targetPort: 80 selector: run: my-nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: {} After running (Thanks GameScripting): kubectl describe service my-nginx I saw the following error: FirstSeen LastSeen Count From SubobjectPath Reason Message Tue, 11 Aug 2015 14:00:00 +0200 Tue, 11 Aug 2015 14:02:41 +0200 9 {service-controller } creating loadbalancer failed failed to create external load balancer for service default/my-nginx: googleapi: Error 403: Quota 'FORWARDING_RULES' exceeded. Limit: 15.0 ### Response: After manually removing the Forwarding Rules Under "Networking->Load Balancing->Network Load Balancing" (Or you can use gcloud compute forwarding-rules delete) I was able to get public Ip's again. It seems somehow the forwarding rules werent deleted and reached the limit. It is strange as when I ran Kubectl stop service it removed the forwarding rule for me.
I ran into some trouble getting an external ip address after posting the following json object (variables excluded): $json= '{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "" }, "spec": { "ports": [{ "port": 80, "targetPort": 80 }], "selector": { "app": "" }, "type": "LoadBalancer" } }'; The service is created but no external ip is ever given. Unable to determine where the issue lay, I proceeded to install a clean copy of kubernetes (and the cluster it is defined to install) using the following command provided in the documentation (V1 kubernetes/examples/simple-nginx.md): curl -sS https://get.k8s.io | bash This of course set things up automatically. I then ran the following commands to test if the LoadBalancer function was working: kubectl run my-nginx --image=nginx --replicas=2 --port=80 After running kubectl get pods to confirm that they were ready, I exposed the service: kubectl expose rc my-nginx --port=80 --type=LoadBalancer I then ran kubectl get service for the past few minutes, and no public ip is being provided.. That cant be right? EDIT kubectl get services NAME LABELS SELECTOR IP(S) PORT(S) kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP my-nginx run=my-nginx run=my-nginx 10.0.136.163 80/TCP kubectl get service my-nginx -o yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2015-08-11T11:44:02Z labels: run: my-nginx name: my-nginx namespace: default resourceVersion: "1795" selfLink: /api/v1/namespaces/default/services/my-nginx uid: 434751be-401e-11e5-a219-42010af0da43 spec: clusterIP: 10.x.xxx.xxx ports: - nodePort: 31146 port: 80 protocol: TCP targetPort: 80 selector: run: my-nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: {} After running (Thanks GameScripting): kubectl describe service my-nginx I saw the following error: FirstSeen LastSeen Count From SubobjectPath Reason Message Tue, 11 Aug 2015 14:00:00 +0200 Tue, 11 Aug 2015 14:02:41 +0200 9 {service-controller } creating loadbalancer failed failed to create external load balancer for service default/my-nginx: googleapi: Error 403: Quota 'FORWARDING_RULES' exceeded. Limit: 15.0
Yeah, we wish that they could share the PID namespace, but as you say, it is not currently supported by Docker. Once we have support in Docker, we will rapidly add it to Kubernetes. This means that you can't use signal to signal other processes in the Pod. You can, however, use IPC mechanisms like pipes and shared memory.
###Instruction: On this page in the Kubernetes docs Pods, it states The context of the pod can be defined as the conjunction of several Linux namespaces: PID namespace (applications within the pod can see each other's processes) network namespace (applications within the pod have access to the same IP and port space) IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate) UTS namespace (applications within the pod share a hostname) However, it then says that In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. PID namespace sharing is not yet implemented with Docker. So does this mean that pods cannot see processes in other containers or perform any kind of IPC between containers running in the same pod? How would I send a signal to a process running in another pod? ### Response: Yeah, we wish that they could share the PID namespace, but as you say, it is not currently supported by Docker. Once we have support in Docker, we will rapidly add it to Kubernetes. This means that you can't use signal to signal other processes in the Pod. You can, however, use IPC mechanisms like pipes and shared memory.
On this page in the Kubernetes docs Pods, it states The context of the pod can be defined as the conjunction of several Linux namespaces: PID namespace (applications within the pod can see each other's processes) network namespace (applications within the pod have access to the same IP and port space) IPC namespace (applications within the pod can use SystemV IPC or POSIX message queues to communicate) UTS namespace (applications within the pod share a hostname) However, it then says that In terms of Docker constructs, a pod consists of a colocated group of Docker containers with shared volumes. PID namespace sharing is not yet implemented with Docker. So does this mean that pods cannot see processes in other containers or perform any kind of IPC between containers running in the same pod? How would I send a signal to a process running in another pod?
A common misconception is that a Pod should manage a vertical, multi-tier stack: for example a web tier + DB tier together. It's interesting to read the Kubernetes design intent of Pods: they're for collecting 'helper' processes rather than composing a vertical stack. To answer your questions, I'd recommend: Define a Pod template for the web tier only. This can be scaled to any size required, using a replication controller (questions #1 and #3). Define another Pod for MySQL. Use the Service abstraction to locate these components.
###Instruction: I Have the following setup in mind: Kubernetes on Mesos (based on the kubernetes-mesos project) within a /16 network. Each pod will have its own IP and I believe this will avail 64 000 pods. The idea is to provide isolation for each app i.e. Each app gets its own mysql within the same pod - the app accesses mysql on localhost(within the pod). If an additional service were needed, I'd use kubernetes rolling updates to add the service's container to the pod, the app will be able to access this new service on localhost as well. Each application needs as much isolation as possible. Are there any defects to such an implementation? Do I have to use weave? There's an option to specify the service-ip-range while running the kubernetes-mesos install. One hole is how do I scale a service, is this really viable? Is there a better way to do this? i.e. Offering isolated services Thanks. PS//I'm obviously a noobie at this and I'm trying to get the best possible setup running. ### Response: A common misconception is that a Pod should manage a vertical, multi-tier stack: for example a web tier + DB tier together. It's interesting to read the Kubernetes design intent of Pods: they're for collecting 'helper' processes rather than composing a vertical stack. To answer your questions, I'd recommend: Define a Pod template for the web tier only. This can be scaled to any size required, using a replication controller (questions #1 and #3). Define another Pod for MySQL. Use the Service abstraction to locate these components.
I Have the following setup in mind: Kubernetes on Mesos (based on the kubernetes-mesos project) within a /16 network. Each pod will have its own IP and I believe this will avail 64 000 pods. The idea is to provide isolation for each app i.e. Each app gets its own mysql within the same pod - the app accesses mysql on localhost(within the pod). If an additional service were needed, I'd use kubernetes rolling updates to add the service's container to the pod, the app will be able to access this new service on localhost as well. Each application needs as much isolation as possible. Are there any defects to such an implementation? Do I have to use weave? There's an option to specify the service-ip-range while running the kubernetes-mesos install. One hole is how do I scale a service, is this really viable? Is there a better way to do this? i.e. Offering isolated services Thanks. PS//I'm obviously a noobie at this and I'm trying to get the best possible setup running.
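A rough sketch of the split the answer above recommends — a replication controller for the web tier, scaled independently, and a service in front of a separate MySQL pod; image names and labels are hypothetical, as shown below:

apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3                     # scale the web tier on its own
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web-app    # hypothetical application image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: mysql                     # web pods locate MySQL through this service
spec:
  selector:
    app: mysql                    # assumed label on the separate MySQL pod
  ports:
  - port: 3306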
The issue was on the volume definition, the hostPath.path property should hold the absolute address of the directory to mount. Example: hostPath: path: /home/bitgandtter/Documents/development/php/k8s_devel_env
###Instruction: I want to use kubernetes as my default development environment. For that I set up the cluster locally with docker as explained in the official doc. I pushed my example to a github repository. My setup steps after having a kubernetes cluster running were: * cd cluster_config/app && docker build --tag=k8s_php_dev . && cd ../.. * kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.rc.yml * kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.services.yml My issue comes from wanting to map a local directory as a volume inside my app pod so I can dynamically share the files in there between my local host and the pod, so I can develop and change the files and have the service update dynamically. I use a volume with a hostPath. The pod, replication controller and service are created successfully, but the pod does not share the directory and doesn't even have the files at the expected mountPath. What am I doing wrong? Thanks ### Response: The issue was on the volume definition, the hostPath.path property should hold the absolute address of the directory to mount. Example: hostPath: path: /home/bitgandtter/Documents/development/php/k8s_devel_env
I want to use kubernetes as my default development environment. For that I set up the cluster locally with docker as explained in the official doc. I pushed my example to a github repository. My setup steps after having a kubernetes cluster running were: * cd cluster_config/app && docker build --tag=k8s_php_dev . && cd ../.. * kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.rc.yml * kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.services.yml My issue comes from wanting to map a local directory as a volume inside my app pod so I can dynamically share the files in there between my local host and the pod, so I can develop and change the files and have the service update dynamically. I use a volume with a hostPath. The pod, replication controller and service are created successfully, but the pod does not share the directory and doesn't even have the files at the expected mountPath. What am I doing wrong? Thanks
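A short sketch of how that hostPath volume might look in the pod spec — the container name and the mount path inside the container are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: php-dev
spec:
  containers:
  - name: app
    image: k8s_php_dev                  # image built in the question's setup steps
    volumeMounts:
    - name: src
      mountPath: /var/www/html          # assumed document root inside the container
  volumes:
  - name: src
    hostPath:
      path: /home/bitgandtter/Documents/development/php/k8s_devel_env   # absolute path, as the answer notes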
So this regular expression: [a-z0-9]+(?:[._-][a-z0-9]+)* doesn't include any upper case letters. So you should change your image name to devopsclient
###Instruction: I am trying to build my image using this plugin: https://github.com/spotify/docker-maven-plugin#use-a-dockerfile When I run mvn clean package docker:build I get this error: [ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (defa ult-cli) on project demo: Exception caught: Request error: POST https://192.168. 99.100:2376/v1.12/build?t=DevOpsClient: 500: HTTP 500 Internal Server Error -> [ Help 1] When I check the docker daemon logs, I see this: Handler for POST /build returned error: repository name component must match \"[a-z0-9]+(?:[._-][a-z0-9]+)*\"" statusCode=500 Here is the doc for the naming convention: https://docs.docker.com/registry/spec/api/ Apparently you cannot have any upper case letters. I am trying to build using Spring boot my following this guide: https://spring.io/guides/gs/spring-boot-docker/ I am using a SNAPSHOT release of spring boot and I have a directory named demo-0.1.1-SNAPSHOT. I believe this may be causing the problem. Also I am working on windows and my project directory path is like: C:\Users\myname\UserRegistrationClient\git\..... etc Would this also affect the repository naming convention? And how would I change it? ### Response: So this regular expression: [a-z0-9]+(?:[._-][a-z0-9]+)* doesn't include any upper case letters. So you should change your image name to devopsclient
I am trying to build my image using this plugin: https://github.com/spotify/docker-maven-plugin#use-a-dockerfile When I run mvn clean package docker:build I get this error: [ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (defa ult-cli) on project demo: Exception caught: Request error: POST https://192.168. 99.100:2376/v1.12/build?t=DevOpsClient: 500: HTTP 500 Internal Server Error -> [ Help 1] When I check the docker daemon logs, I see this: Handler for POST /build returned error: repository name component must match \"[a-z0-9]+(?:[._-][a-z0-9]+)*\"" statusCode=500 Here is the doc for the naming convention: https://docs.docker.com/registry/spec/api/ Apparently you cannot have any upper case letters. I am trying to build using Spring boot my following this guide: https://spring.io/guides/gs/spring-boot-docker/ I am using a SNAPSHOT release of spring boot and I have a directory named demo-0.1.1-SNAPSHOT. I believe this may be causing the problem. Also I am working on windows and my project directory path is like: C:\Users\myname\UserRegistrationClient\git\..... etc Would this also affect the repository naming convention? And how would I change it?
I'm not positive that this won't mess with httpd's logging at all (e.g. if it tries to seek within the file), but you can set up symlinks from the log paths to /dev/stdout and /dev/stderr, like so: ln -sf /dev/stdout /path/to/access.log ln -sf /dev/stderr /path/to/error.log The entry command to the vanilla httpd container from Docker Hub could be made to be something like ln -sf /dev/stdout /path/to/access.log && ln -sf /dev/stderr /path/to/error.log && /path/to/httpd
###Instruction: I'm running Apache2 in a docker container and want to write nothing to the disk, writing logs to stdout and stderr. I've seen a few different ways to do this (Supervisord and stdout/stderr, Apache access log to stdout) but these seem like hacks. Is there no way to do this by default? To be clear, I do not want to tail the log, since that will result in things being written to the disk in the container. The "official" version checked into Docker Hub (https://hub.docker.com/_/httpd/) still write to disk. Also, what do I need to do to stop Apache from failing when it tries to roll the logs? One other thing - ideally, I'd really like to do this without another add-on. nginx can do this trivially. ### Response: I'm not positive that this won't mess with httpd's logging at all (e.g. if it tries to seek within the file), but you can set up symlinks from the log paths to /dev/stdout and /dev/stderr, like so: ln -sf /dev/stdout /path/to/access.log ln -sf /dev/stderr /path/to/error.log The entry command to the vanilla httpd container from Docker Hub could be made to be something like ln -sf /dev/stdout /path/to/access.log && ln -sf /dev/stderr /path/to/error.log && /path/to/httpd
I'm running Apache2 in a docker container and want to write nothing to the disk, writing logs to stdout and stderr. I've seen a few different ways to do this (Supervisord and stdout/stderr, Apache access log to stdout) but these seem like hacks. Is there no way to do this by default? To be clear, I do not want to tail the log, since that will result in things being written to the disk in the container. The "official" version checked into Docker Hub (https://hub.docker.com/_/httpd/) still write to disk. Also, what do I need to do to stop Apache from failing when it tries to roll the logs? One other thing - ideally, I'd really like to do this without another add-on. nginx can do this trivially.
"the deployed VMs are not assigned to Availability Sets" It is true indeed; as an author and maintainer of the guide, I will welcome a pull request to enable this, which should be quite easy and probably similar to how affinity groups are currently handled. "Not able to specify an existing Virtual Network, Resources, location etc." This is a very good point; however, it's probably best to refactor the current ad-hoc JavaScript wrapping into something more streamlined with Azure Resource Manager, which wasn't generally available at the time I implemented that integration.
###Instruction: I am planning a production deployment with 'Kubernetes + CoreOS' on Microsoft Azure, and planning to run a couple of microservices in the cluster. My plan is to have 5 nodes; I will have 5/6 pods to run, each with 3-5 instances. I was following the official documentation of Kubernetes and found https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/coreos/azure/README.md really helpful; the script works great. But I don't think it's production ready for my use case, as: the deployed VMs are not assigned to Availability Sets; Not able to specify an existing Virtual Network, Resources, location etc. I am a newbie in this field. Can someone help me out and let me know what steps need to be taken to make this a real production environment? ### Response: "the deployed VMs are not assigned to Availability Sets" It is true indeed; as an author and maintainer of the guide, I will welcome a pull request to enable this, which should be quite easy and probably similar to how affinity groups are currently handled. "Not able to specify an existing Virtual Network, Resources, location etc." This is a very good point; however, it's probably best to refactor the current ad-hoc JavaScript wrapping into something more streamlined with Azure Resource Manager, which wasn't generally available at the time I implemented that integration.
I am planning a production deployment with 'Kubernetes + CoreOS' on Microsoft Azure, and planning to run a couple of microservices in the cluster. My plan is to have 5 nodes; I will have 5/6 pods to run, each with 3-5 instances. I was following the official documentation of Kubernetes and found https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/coreos/azure/README.md really helpful; the script works great. But I don't think it's production ready for my use case, as: the deployed VMs are not assigned to Availability Sets; Not able to specify an existing Virtual Network, Resources, location etc. I am a newbie in this field. Can someone help me out and let me know what steps need to be taken to make this a real production environment?
It's somewhat unclear what you're asking, but a good place to start would be reading about your options for volumes in Kubernetes. The options include using local EC2 disk with a lifetime tied to the lifetime of your pod (emptyDir), local EC2 disk with lifetime tied to the lifetime of the node VM (hostDir), and an Elastic Block Store volume (awsElasticBlockStore).
###Instruction: I started to use Docker and I'm trying out Google's Kubernetes project for my container orchestration. It looks really good! The only thing I'm curious about is how I would handle the volume storage. I'm using EC2 instances, and the containers mount volumes from the EC2 filesystem. The only thing left is the way I have to deploy my application code onto all those EC2 instances, right? How can I handle this? ### Response: It's somewhat unclear what you're asking, but a good place to start would be reading about your options for volumes in Kubernetes. The options include using local EC2 disk with a lifetime tied to the lifetime of your pod (emptyDir), local EC2 disk with lifetime tied to the lifetime of the node VM (hostDir), and an Elastic Block Store volume (awsElasticBlockStore).
I started to use Docker and I'm trying out Google's Kubernetes project for my container orchestration. It looks really good! The only thing I'm curious about is how I would handle the volume storage. I'm using EC2 instances, and the containers mount volumes from the EC2 filesystem. The only thing left is the way I have to deploy my application code onto all those EC2 instances, right? How can I handle this?
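For the awsElasticBlockStore option mentioned in the answer, a pod spec might look roughly like the following; the volume ID, mount path, and image are placeholders, and the EBS volume has to already exist in the same availability zone as the node:

apiVersion: v1
kind: Pod
metadata:
  name: ebs-backed-app
spec:
  containers:
  - name: app
    image: nginx                          # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data                    # assumed mount point
  volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef      # hypothetical EBS volume ID
      fsType: ext4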
The LoadBalancer should be getting created automatically. There might be IAM policy issues preventing the load balancer from being created (see Issue #10692). If that isn't the problem, looking for errors in /var/log/kube-controller-manager.log on the master VM may give you an idea of what is going wrong.
###Instruction: I am trying to use AWS to set up Kubernetes (version 1.0.1) and deploy a few services and pods there, but I have got stuck with a LoadBalancer service. According to the docs, I just need to set the correct type of service and open the ports in the firewall, but the service doesn't receive an external IP (ingress is empty). Do I need to create the LoadBalancer manually in the AWS console, or take some other actions? Thanks, ### Response: The LoadBalancer should be getting created automatically. There might be IAM policy issues preventing the load balancer from being created (see Issue #10692). If that isn't the problem, looking for errors in /var/log/kube-controller-manager.log on the master VM may give you an idea of what is going wrong.
I am trying to use AWS to set up Kubernetes (version 1.0.1) and deploy a few services and pods there, but I have got stuck with a LoadBalancer service. According to the docs, I just need to set the correct type of service and open the ports in the firewall, but the service doesn't receive an external IP (ingress is empty). Do I need to create the LoadBalancer manually in the AWS console, or take some other actions? Thanks,
The assumption of the kubernetes development team is that cross-cluster federation will be the best way to handle cross-zone workloads. The tooling for this is easy to imagine, but has not emerged yet. You can (on your own) set up regional or global load-balancers and direct traffic to different clusters based on things like GeoIP.
###Instruction: What is the best practice to get a geo-distributed cluster with asynchronous network channels? I suspect I would need some "load balancer" which should redirect connections "within" its own DC; do you know of anything like this already in place? Second question: should we use one HA cluster or create a dedicated cluster for each DC? ### Response: The assumption of the kubernetes development team is that cross-cluster federation will be the best way to handle cross-zone workloads. The tooling for this is easy to imagine, but has not emerged yet. You can (on your own) set up regional or global load-balancers and direct traffic to different clusters based on things like GeoIP.
What is the best practice to get a geo-distributed cluster with asynchronous network channels? I suspect I would need some "load balancer" which should redirect connections "within" its own DC; do you know of anything like this already in place? Second question: should we use one HA cluster or create a dedicated cluster for each DC?
With the help of Kelsey Hightower, I solved the problem. It turns out it was a Docker routing issue. I've written up the details in a blog post, but the bottom line is to alter the minions' routing table like so: $ sudo iptables -t nat -I POSTROUTING -d <RDS-IP-ADDRESS>/32 -o eth0 -j MASQUERADE
###Instruction: I have a Kubernetes cluster running in Amazon EC2 inside its own VPC, and I'm trying to get Dockerized services to connect to an RDS database (which is in a different VPC). I've figured out the peering and routing table entries so I can do this from the minion machines: ubuntu@minion1:~$ psql -h <rds-instance-name> Password: So that's all working. The problem is that when I try to make that connection from inside a Kubernetes-managed container, I get a timeout: ubuntu@pod-1234:~$ psql -h <rds-instance-name> … To get the minion to connect, I configured a peering connection, set up the routing tables from the Kubernetes VPC so that 10.0.0.0/16 (the CIDR for the RDS VPC) maps to the peering connection, and updated the RDS instance's security group to allow traffic to port 5432 from the address range 172.20.0.0/16 (the CIDR for the Kubernetes VPC). ### Response: With the help of Kelsey Hightower, I solved the problem. It turns out it was a Docker routing issue. I've written up the details in a blog post, but the bottom line is to alter the minions' routing table like so: $ sudo iptables -t nat -I POSTROUTING -d <RDS-IP-ADDRESS>/32 -o eth0 -j MASQUERADE
I have a Kubernetes cluster running in Amazon EC2 inside its own VPC, and I'm trying to get Dockerized services to connect to an RDS database (which is in a different VPC). I've figured out the peering and routing table entries so I can do this from the minion machines: ubuntu@minion1:~$ psql -h <rds-instance-name> Password: So that's all working. The problem is that when I try to make that connection from inside a Kubernetes-managed container, I get a timeout: ubuntu@pod-1234:~$ psql -h <rds-instance-name> … To get the minion to connect, I configured a peering connection, set up the routing tables from the Kubernetes VPC so that 10.0.0.0/16 (the CIDR for the RDS VPC) maps to the peering connection, and updated the RDS instance's security group to allow traffic to port 5432 from the address range 172.20.0.0/16 (the CIDR for the Kubernetes VPC).
The reason is a bug in kubernetes. It seems to be a TLS error. If you manually download kubernetes 1.0.1 you will get closer. However, there's been a bugfix that you'll need to patch into 1.0.1 to make it work properly with vagrant. Otherwise, network provisioning will not work and you'll run into this issue. So, as suggested there, apply these changes to the provision scripts of v1.0.1 and you'll be good to go. Simple, right?
###Instruction: I have been going through this to get started with Kubernetes, running on Mac OS v 10.9.5, Vagrant v 1.7.4, VirtualBox v 4.3.28, and Kubernetes v 1.0.3.: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/vagrant.md Full install log can be found here: http://mcdonaldland.info/files/kubernetes/install-log.txt I've tried both the download and sh install versions of this. I've run this multiple times and every time it gets to the "waiting for each minion to be registered with cloud provider" step and loops forever. Eventually, I "CTRL+C" and exit the script. In looking at the source it obviously has something to do with the nodes not being found, but I can't figure out why it cannot find them. After killing the script, running 'kubectl get nodes' returns nothing. The same happens if I used the SH script. If I am in the ~/kubernetes/cluster director and run 'vagrant ssh master' or 'vagrant ssh minion-1' I can connect to them. If I run some scripts to add pods I can get them to register. Same with Replication Controllers. When I check status the pods will never start. When I dig into the logs it appears that the nodes cannot be connected to (aka found) and the minions are erroring on startup. I have been struggling for 5 days now to figure out why my nodes are not showing up / registering properly. I figure I'm missing something simple but am at a loss now. Any help is appreciated. Thanks in advance. ### Response: The reason is a bug in kubernetes. It seems to be a TLS error. If you manually download kubernetes 1.0.1 you will get closer. However, there's been a bugfix that you'll need to patch into 1.0.1 to make it work properly with vagrant. Otherwise, network provisioning will not work and you'll run into this issue. So, as suggested there, apply these changes to the provision scripts of v1.0.1 and you'll be good to go. Simple, right?
I have been going through this to get started with Kubernetes, running on Mac OS v 10.9.5, Vagrant v 1.7.4, VirtualBox v 4.3.28, and Kubernetes v 1.0.3.: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/vagrant.md Full install log can be found here: http://mcdonaldland.info/files/kubernetes/install-log.txt I've tried both the download and sh install versions of this. I've run this multiple times and every time it gets to the "waiting for each minion to be registered with cloud provider" step and loops forever. Eventually, I "CTRL+C" and exit the script. In looking at the source it obviously has something to do with the nodes not being found, but I can't figure out why it cannot find them. After killing the script, running 'kubectl get nodes' returns nothing. The same happens if I used the SH script. If I am in the ~/kubernetes/cluster director and run 'vagrant ssh master' or 'vagrant ssh minion-1' I can connect to them. If I run some scripts to add pods I can get them to register. Same with Replication Controllers. When I check status the pods will never start. When I dig into the logs it appears that the nodes cannot be connected to (aka found) and the minions are erroring on startup. I have been struggling for 5 days now to figure out why my nodes are not showing up / registering properly. I figure I'm missing something simple but am at a loss now. Any help is appreciated. Thanks in advance.
I think so. Make sure to get service discovery, logging and virtual networking right. For the former you can check out skydns. Docker now has a few logging plugins you can use for log management. For virtual networking you can look at Flannel and Weave. You want service discovery because Kubernetes will schedule the containers the way it sees fit and you need some way of telling what IP/port your microservice will be at. Virtual networking makes it so each container has its own subnet, thus preventing port clashes in case you have two containers with the same ports exposed on the same host (Kubernetes won't let them clash; it will schedule containers to run as long as you have hosts with ports available, and if you try to create more they just won't run). Also, you can try the built-in cluster tools in Docker itself, like the docker service and docker network commands and Docker Swarm. Docker-machine helps in case you already have a VM infrastructure in place.
###Instruction: We are currently moving towards microservices with Docker from a monolith application running in JBoss. I want to know the platform/tools/frameworks to be used to test these Docker containers in a developer environment. Also, what tools should be used to deploy these containers to this developer test environment? Is it a good option to use something like Kubernetes with chef/puppet/vagrant? ### Response: I think so. Make sure to get service discovery, logging and virtual networking right. For the former you can check out skydns. Docker now has a few logging plugins you can use for log management. For virtual networking you can look at Flannel and Weave. You want service discovery because Kubernetes will schedule the containers the way it sees fit and you need some way of telling what IP/port your microservice will be at. Virtual networking makes it so each container has its own subnet, thus preventing port clashes in case you have two containers with the same ports exposed on the same host (Kubernetes won't let them clash; it will schedule containers to run as long as you have hosts with ports available, and if you try to create more they just won't run). Also, you can try the built-in cluster tools in Docker itself, like the docker service and docker network commands and Docker Swarm. Docker-machine helps in case you already have a VM infrastructure in place.
We are currently moving towards microservices with Docker from a monolith application running in JBoss. I want to know the platform/tools/frameworks to be used to test these Docker containers in a developer environment. Also, what tools should be used to deploy these containers to this developer test environment? Is it a good option to use something like Kubernetes with chef/puppet/vagrant?
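As a small example of the Docker-native networking mentioned in the response above (a sketch only; the network and image names are invented), a user-defined network lets containers on the same host find each other by name, which helps when wiring up microservices in a dev environment:
# Create a user-defined network and run two containers attached to it.
docker network create devnet
docker run -d --name redis --net devnet redis
docker run -d --name webapp --net devnet myorg/webapp:dev   # assumption: your app image
# From 'webapp', the name 'redis' resolves to the redis container:
docker exec webapp ping -c 1 redis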
The best way to do this at the moment is using kubectl describe rc <RC_NAME> There's a proposal for implementing a deployment resource to cope with this use case: https://github.com/kubernetes/kubernetes/blob/55b7500b33553a77f8dceb5404d6af1767399386/docs/proposals/deployment.md
###Instruction: Is there a way to get the status of a deployment? Is the concept of a deployment modeled somehow? I can't find "global" info on failures for pods relevant to a particular deployment, and I don't know if looking at all the pods makes sense... By "failure" I mean, for example, misspelling the docker URL of the image... Of course, I could just query one pod at random... but I'm not sure that would be the best way to spot problems that are "common" to the whole deployment. What do you think? What is your proposed approach in this scenario? Thanks in advance. ### Response: The best way to do this at the moment is using kubectl describe rc <RC_NAME> There's a proposal for implementing a deployment resource to cope with this use case: https://github.com/kubernetes/kubernetes/blob/55b7500b33553a77f8dceb5404d6af1767399386/docs/proposals/deployment.md
Is there a way to get the status of a deployment? Is the concept of a deployment modeled somehow? I can't find "global" info on failures for pods relevant to a particular deployment, and I don't know if looking at all the pods makes sense... By "failure" I mean, for example, misspelling the docker URL of the image... Of course, I could just query one pod at random... but I'm not sure that would be the best way to spot problems that are "common" to the whole deployment. What do you think? What is your proposed approach in this scenario? Thanks in advance.
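For reference, a brief sketch of that workflow; the controller name and label are placeholders:
# Aggregated events for the whole replication controller, e.g. repeated
# image pull failures caused by a misspelled image URL:
kubectl describe rc my-frontend
# Drill into individual replicas if needed:
kubectl get pods -l name=my-frontend
kubectl describe pod my-frontend-abc12   # assumption: one of the generated pod names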
How did you set up Kubernetes? I'm not aware of any installation scripts that put pod IPs into a 172 subnet. But in general, assuming Kubernetes has been set up properly (ideally using one of the provided scripts), using a service object to load balance across your 1 or more redis pods would be the standard approach.
###Instruction: I have several pods, for example a python web app and a redis (shared by other apps), so I need to place redis in a separate pod. But they all use the same subnet from docker (172.17.0.0/16) or even the same IP address. How can the app pods talk to the redis pod? Maybe what I want to ask is: what's the best way to set up multi-host container networking? 7 weeks later, I am more familiar with kubernetes. I know kubernetes has a network model assuming that pods can access each other. So if your app service pod needs to access a shared service (redis) pod, you need to expose the shared service as a kubernetes service; then you can get the shared service endpoint from the app pods' environment variables or hostname. ### Response: How did you set up Kubernetes? I'm not aware of any installation scripts that put pod IPs into a 172 subnet. But in general, assuming Kubernetes has been set up properly (ideally using one of the provided scripts), using a service object to load balance across your 1 or more redis pods would be the standard approach.
I have several pods, for example a python web app and a redis (shared by other apps), so I need to place redis in a separate pod. But they all use the same subnet from docker (172.17.0.0/16) or even the same IP address. How can the app pods talk to the redis pod? Maybe what I want to ask is: what's the best way to set up multi-host container networking? 7 weeks later, I am more familiar with kubernetes. I know kubernetes has a network model assuming that pods can access each other. So if your app service pod needs to access a shared service (redis) pod, you need to expose the shared service as a kubernetes service; then you can get the shared service endpoint from the app pods' environment variables or hostname.
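A minimal sketch of that standard approach, assuming the redis pod carries a label like name=redis; names and ports are illustrative:
# Expose redis behind a stable service IP; app pods can then use the
# REDIS_SERVICE_HOST / REDIS_SERVICE_PORT env vars or the 'redis' DNS name.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    name: redis
  ports:
  - port: 6379
    targetPort: 6379
EOF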
You should be able to do so setting the MINION_ROOT_DISK_SIZE environment variable before creating the cluster. However this option was just merged yesterday, so it may not be available yet unless you use the cluster/kube-up.sh script from HEAD of the repository.
###Instruction: We are running out of disk space for containers running on our nodes. We are running k8s 1.0.1 in aws. We are also trying to do all our configuration in software instead of manually configuring things. How do we increase the disk size of the nodes? Right now they have 8gb each as created by https://get.k8s.io | bash. It's fine if we have to create a new cluster and move our services/pods to it. ### Response: You should be able to do so setting the MINION_ROOT_DISK_SIZE environment variable before creating the cluster. However this option was just merged yesterday, so it may not be available yet unless you use the cluster/kube-up.sh script from HEAD of the repository.
We are running out of disk space for containers running on our nodes. We are running k8s 1.0.1 in aws. We are also trying to do all our configuration in software instead of manually configuring things. How do we increase the disk size of the nodes? Right now they have 8gb each as created by https://get.k8s.io | bash. It's fine if we have to create a new cluster and move our services/pods to it.
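A hedged sketch of what that would look like when bringing a fresh cluster up from a checkout that includes the option; the size is just an example value:
# Assumption: cluster/kube-up.sh at HEAD honours MINION_ROOT_DISK_SIZE for AWS.
export KUBERNETES_PROVIDER=aws
export MINION_ROOT_DISK_SIZE=32   # GB per minion, example value
cluster/kube-up.sh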
By default, the kube-apiserver process tries to open a secure (https) server port on port 6443 using credentials from the directory /var/run/kubernetes. If you want to disable the secure port, you can pass --secure-port=0 which should make your error go away. Alternatively, you can manually create certificates for your cluster so that the process is able to successfully open the secure port.
###Instruction: I am trying to setup Kubernetes for the first time. I am following the Fedora Manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html I checked the logs of my API server and am getting this error: server.go:464] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again. I assume it needs some sort of cert but the installation guide doesnt mention anything about this. Here is what my apiserver config file looks like # The address on the local server to listen to. KUBE_API_ADDRESS="--address=0.0.0.0" # The port on the local server to listen on. KUBE_API_PORT="--port=8080" # Port node listen on KUBELET_PORT="--kubelet_port=10250" # Location of the etcd cluster #KUBE_ETCD_SERVERS="--etcd_servers=http://vagrant-master:4001" KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001" # Address range to use for services KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" # default admission control policies KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota" # Add your own! KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/certs/serviceaccount.key" Here is my service status kube-apiserver.service - Kubernetes API Server Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled) Active: active (running) since Mon 2015-08-24 15:03:07 UTC; 5min ago Docs: https://github.com/GoogleCloudPlatform/kubernetes Main PID: 13663 (kube-apiserver) CGroup: /system.slice/kube-apiserver.service └─13663 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4001 --address=0.0.0.0 --port=8080 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service_account_key_file=/etc/kubernetes/certs/serviceaccount.key How can i fix this error? ### Response: By default, the kube-apiserver process tries to open a secure (https) server port on port 6443 using credentials from the directory /var/run/kubernetes. If you want to disable the secure port, you can pass --secure-port=0 which should make your error go away. Alternatively, you can manually create certificates for your cluster so that the process is able to successfully open the secure port.
I am trying to setup Kubernetes for the first time. I am following the Fedora Manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html I checked the logs of my API server and am getting this error: server.go:464] Unable to listen for secure (open /var/run/kubernetes/apiserver.crt: no such file or directory); will try again. I assume it needs some sort of cert but the installation guide doesnt mention anything about this. Here is what my apiserver config file looks like # The address on the local server to listen to. KUBE_API_ADDRESS="--address=0.0.0.0" # The port on the local server to listen on. KUBE_API_PORT="--port=8080" # Port node listen on KUBELET_PORT="--kubelet_port=10250" # Location of the etcd cluster #KUBE_ETCD_SERVERS="--etcd_servers=http://vagrant-master:4001" KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001" # Address range to use for services KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" # default admission control policies KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota" # Add your own! KUBE_API_ARGS="--service_account_key_file=/etc/kubernetes/certs/serviceaccount.key" Here is my service status kube-apiserver.service - Kubernetes API Server Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled) Active: active (running) since Mon 2015-08-24 15:03:07 UTC; 5min ago Docs: https://github.com/GoogleCloudPlatform/kubernetes Main PID: 13663 (kube-apiserver) CGroup: /system.slice/kube-apiserver.service └─13663 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4001 --address=0.0.0.0 --port=8080 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service_account_key_file=/etc/kubernetes/certs/serviceaccount.key How can i fix this error?
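Two hedged sketches of how that could look on this Fedora setup; the flag name follows the answer above, while the certificate step is only illustrative and the exact TLS flags and paths may differ by version:
# Option 1: disable the secure port (append to KUBE_API_ARGS in /etc/kubernetes/apiserver):
KUBE_API_ARGS="--secure-port=0 --service_account_key_file=/etc/kubernetes/certs/serviceaccount.key"
# Option 2 (sketch): provide a self-signed serving cert where the error says it is expected.
sudo mkdir -p /var/run/kubernetes
sudo openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=kube-apiserver" \
  -keyout /var/run/kubernetes/apiserver.key \
  -out /var/run/kubernetes/apiserver.crt
sudo systemctl restart kube-apiserver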
According to the GCE persistent disk documentation: "if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode." The Kubernetes documentation for GCE PD volumes also explains this limitation: "A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous readers allowed."
###Instruction: As previously reported here, two pods can't mount the same disk even though one of them tries to do it as read-only mode. This is supposed to be allowed from Kubernetes documentation. Mounting scheme is: UniqueCluster/PodA has successfully mounted gdeDisk1 as read-write UniqueCluster/PodB fails to start when mounting gdeDisk1 as read-only Node description: Name: gke-zupcat-cluster-8fd35d81-node-1zr4 Labels: kubernetes.io/hostname=gke-zupcat-cluster-8fd35d81-node-1zr4 CreationTimestamp: Wed, 22 Jul 2015 14:47:56 -0300 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message Ready True Thu, 23 Jul 2015 12:06:18 -0300 Wed, 22 Jul 2015 22:53:34 -0300 kubelet is posting ready status Addresses: 10.240.17.72,146.148.79.174 Capacity: cpu: 2 memory: 7679608Ki pods: 40 Version: Kernel Version: 3.16.0-0.bpo.4-amd64 OS Image: Debian GNU/Linux 7 (wheezy) Container Runtime Version: docker://Unknown Kubelet Version: v1.0.1 Kube-Proxy Version: v1.0.1 PodCIDR: 10.108.0.0/24 ExternalID: 11953122931827361742 Pods: (5 in total) Namespace Name default fastrwdiskpod-yu517 kube-system fluentd-cloud-logging-gke-zupcat-cluster-8fd35d81- node-1zr4 kube-system kube-dns-v8-i3h20 kube-system kube-ui-v1-8zdrq kube-system monitoring-heapster-v5-e1zmi No events. Products versions: Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.0", GitCommit:"cd821444dcf3e1e237b5f3579721440624c9c4fa", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.1", GitCommit:"6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74", GitTreeState:"clean"} docker version Docker version 1.7.1, build 786b29d ### Response: According to the GCE persistent disk documentation: "if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode." The Kubernetes documentation for GCE PD volumes also explains this limitation: "A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous readers allowed."
As previously reported here, two pods can't mount the same disk even though one of them tries to do it as read-only mode. This is supposed to be allowed from Kubernetes documentation. Mounting scheme is: UniqueCluster/PodA has successfully mounted gdeDisk1 as read-write UniqueCluster/PodB fails to start when mounting gdeDisk1 as read-only Node description: Name: gke-zupcat-cluster-8fd35d81-node-1zr4 Labels: kubernetes.io/hostname=gke-zupcat-cluster-8fd35d81-node-1zr4 CreationTimestamp: Wed, 22 Jul 2015 14:47:56 -0300 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message Ready True Thu, 23 Jul 2015 12:06:18 -0300 Wed, 22 Jul 2015 22:53:34 -0300 kubelet is posting ready status Addresses: 10.240.17.72,146.148.79.174 Capacity: cpu: 2 memory: 7679608Ki pods: 40 Version: Kernel Version: 3.16.0-0.bpo.4-amd64 OS Image: Debian GNU/Linux 7 (wheezy) Container Runtime Version: docker://Unknown Kubelet Version: v1.0.1 Kube-Proxy Version: v1.0.1 PodCIDR: 10.108.0.0/24 ExternalID: 11953122931827361742 Pods: (5 in total) Namespace Name default fastrwdiskpod-yu517 kube-system fluentd-cloud-logging-gke-zupcat-cluster-8fd35d81- node-1zr4 kube-system kube-dns-v8-i3h20 kube-system kube-ui-v1-8zdrq kube-system monitoring-heapster-v5-e1zmi No events. Products versions: Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.0", GitCommit:"cd821444dcf3e1e237b5f3579721440624c9c4fa", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.1", GitCommit:"6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74", GitTreeState:"clean"} docker version Docker version 1.7.1, build 786b29d
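To make the read-only limitation concrete, a hedged pod sketch: every pod that shares the PD must set readOnly on both the mount and the disk, and no read-write consumer can be attached at the same time. The pod name and image are placeholders; gdeDisk1 is the disk from the question:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pd-reader-1        # a second reader pod would look the same
spec:
  containers:
  - name: reader
    image: nginx           # example image
    volumeMounts:
    - name: shared-data
      mountPath: /data
      readOnly: true
  volumes:
  - name: shared-data
    gcePersistentDisk:
      pdName: gdeDisk1
      fsType: ext4
      readOnly: true
EOF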
You can do exactly like the command you are running already. For example: file1 GGG=/home/ppp file2 HHH=/ter/ssd Then run the Docker command: docker run -it --env-file=/Users/user/file1 --env-file=/Users/users/file2 centos:6.6 /bin/bash Then once in the container: [user@99964c311fef ~]# env HOSTNAME=99964c311fef TERM=xterm OLDPWD=/ LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD=/user GGG=/home/ppp # <-- Here's file1 LANG=en_US.UTF-8 SHLVL=1 HOME=/user LESSOPEN=||/usr/bin/lesspipe.sh %s HHH=/ter/ssd # <-- Here's file2 G_BROKEN_FILENAMES=1 _=/usr/bin/env [user@99964c311fef ~]#
###Instruction: I was just wondering how can i point to two env-files, say /var/app/old/file.conf and /var/market/old/db.conf , both in the command line as I find there is support for mutiple -e flags. Does Docker supports pointing to multiple env-file in a command line like the one below docker run -d --hostname=158.64.72.80 -d -p 80:80 --env-file /var/app/old/file.conf --env-file /var/market/old/db.conf ### Response: You can do exactly like the command you are running already. For example: file1 GGG=/home/ppp file2 HHH=/ter/ssd Then run the Docker command: docker run -it --env-file=/Users/user/file1 --env-file=/Users/users/file2 centos:6.6 /bin/bash Then once in the container: [user@99964c311fef ~]# env HOSTNAME=99964c311fef TERM=xterm OLDPWD=/ LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin PWD=/user GGG=/home/ppp # <-- Here's file1 LANG=en_US.UTF-8 SHLVL=1 HOME=/user LESSOPEN=||/usr/bin/lesspipe.sh %s HHH=/ter/ssd # <-- Here's file2 G_BROKEN_FILENAMES=1 _=/usr/bin/env [user@99964c311fef ~]#
I was just wondering how can i point to two env-files, say /var/app/old/file.conf and /var/market/old/db.conf , both in the command line as I find there is support for mutiple -e flags. Does Docker supports pointing to multiple env-file in a command line like the one below docker run -d --hostname=158.64.72.80 -d -p 80:80 --env-file /var/app/old/file.conf --env-file /var/market/old/db.conf
My solution for this has been to use the IP as the ID: trim the dots and you get a unique ID that is also available outside of the container to other containers. With a Service you can get access to the multiple containers's IPs (see my answer here on how to do this: what's the best way to let kubenetes pods communicate with each other? so you can get their IDs too if you use IPs as the unique ID. The only issue is that IDs are not continuous or start at 0, but zookeeper / kafka don't seem to mind. EDIT 1: The follow up concerns configuring Zookeeper: Each ZK node needs to know of the other nodes. The Kubernetes discovery service knowns of nodes that are within a Service so the idea is to start a Service with the ZK nodes. This Service needs to be started BEFORE creating the ReplicationController (RC) of the Zookeeper pods. The start-up script of the ZK container will then need to: wait for the discovery service to populate the ZK Service with its nodes (that takes a few seconds, for now I just add a sleep 10 at the beginning of my startup script but more reliably you should look for the service to have at least 3 nodes in it.) look up the containers forming the Service in the discovery service: this is done by querying the API. the KUBERNETES_SERVICE_HOST environment variable is available in each container. The endpoint to find service description is then URL="http(s)://$USERNAME:$PASSWORD@${KUBERNETES_SERVICE_HOST/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME}" where NAMESPACE is default unless you changed it, and SERVICE_NAME would be zookeeper if you named your service zookeeper. there you get the description of the containers forming the Service, with their ip in a "ip" field. You can do: curl -s $URL | grep '\"ip\"' | awk '{print $2}' | awk -F\" '{print $2}' to get the list of IPs in the Service. With that, populate the zoo.cfg on the node using the ID defined above You might need the USERNAME and PASSWORD to reach the endpoint on services like google container engine. These need to be put in a Secret volume (see doc here: http://kubernetes.io/v1.0/docs/user-guide/secrets.html ) You would also need to use curl -s --insecure on Google Container Engine unless you go through the trouble of adding the CA cert to your pods Basically add the volume to the container, and look up the values from file. (contrary to what the doc says, DO NOT put the \n at the end of the username or password when base64 encoding: it just make your life more complicated when reading those) EDIT 2: Another thing you'll need to do on the Kafka nodes is get the IP and hostnames, and put them in the /etc/hosts file. Kafka seems to need to know the nodes by hostnames, and these are not set within service nodes by default EDIT 3: After much trial and thoughts using IP as an ID may not be so great: it depends on how you configure storage. for any kind of distributed service like zookeeper, kafka, mongo, hdfs, you might want to use the emptyDir type of storage, so it is just on that node (mounting a remote storage kind of defeats the purpose of distributing these services!) emptyDir will relaod with the data on the same node, so it seems more logical to use the NODE ID (node IP) as the ID, because then a pod that restarts on the same node will have the data. 
That avoids potential corruption of the data (if a new node starts writing in the same dir that is not actually empty, who knows what can happen). Also, with Kafka, topics are assigned a broker.id; if the broker id changes, zookeeper does not update the topic's broker.id, and the topic looks like it is available BUT points to the wrong broker.id, and it's a mess. So far I have yet to find how to get the node IP, though, but I think it's possible to look it up in the API by looking up the service pods' names and then the node they are deployed on. EDIT 4 To get the node IP, you can get the pod hostname == name from the endpoints API /api/v1/namespaces/default/endpoints/ as explained above. Then you can get the node IP from the pod name with /api/v1/namespaces/default/pods/ PS: this is inspired by the example in the Kubernetes repo (example for rethinkdb here: https://github.com/kubernetes/kubernetes/tree/master/examples/rethinkdb)
###Instruction: So my objective here is to set up a cluster of several kafka-brokers in a distributed fashion. But I can't see the way to make the brokers aware of each other. As far as i understand, every broker needs a separate ID in their config, which I cannot guarantee or configure if I launch the containers from kubernetes? They also need to have the same advertised_host? Are there any parameters I'm missing that would need to be changed for the nodes to discover each other? Would it be viable to do such a configuration at the end of the Dockerfile with a script? And/or a shared volume? I'm currently trying to do this with the spotify/kafka-image which has a preconfigured zookeeper+kafka combination, on vanilla Kubernetes. ### Response: My solution for this has been to use the IP as the ID: trim the dots and you get a unique ID that is also available outside of the container to other containers. With a Service you can get access to the multiple containers's IPs (see my answer here on how to do this: what's the best way to let kubenetes pods communicate with each other? so you can get their IDs too if you use IPs as the unique ID. The only issue is that IDs are not continuous or start at 0, but zookeeper / kafka don't seem to mind. EDIT 1: The follow up concerns configuring Zookeeper: Each ZK node needs to know of the other nodes. The Kubernetes discovery service knowns of nodes that are within a Service so the idea is to start a Service with the ZK nodes. This Service needs to be started BEFORE creating the ReplicationController (RC) of the Zookeeper pods. The start-up script of the ZK container will then need to: wait for the discovery service to populate the ZK Service with its nodes (that takes a few seconds, for now I just add a sleep 10 at the beginning of my startup script but more reliably you should look for the service to have at least 3 nodes in it.) look up the containers forming the Service in the discovery service: this is done by querying the API. the KUBERNETES_SERVICE_HOST environment variable is available in each container. The endpoint to find service description is then URL="http(s)://$USERNAME:$PASSWORD@${KUBERNETES_SERVICE_HOST/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME}" where NAMESPACE is default unless you changed it, and SERVICE_NAME would be zookeeper if you named your service zookeeper. there you get the description of the containers forming the Service, with their ip in a "ip" field. You can do: curl -s $URL | grep '\"ip\"' | awk '{print $2}' | awk -F\" '{print $2}' to get the list of IPs in the Service. With that, populate the zoo.cfg on the node using the ID defined above You might need the USERNAME and PASSWORD to reach the endpoint on services like google container engine. These need to be put in a Secret volume (see doc here: http://kubernetes.io/v1.0/docs/user-guide/secrets.html ) You would also need to use curl -s --insecure on Google Container Engine unless you go through the trouble of adding the CA cert to your pods Basically add the volume to the container, and look up the values from file. (contrary to what the doc says, DO NOT put the \n at the end of the username or password when base64 encoding: it just make your life more complicated when reading those) EDIT 2: Another thing you'll need to do on the Kafka nodes is get the IP and hostnames, and put them in the /etc/hosts file. 
Kafka seems to need to know the nodes by hostnames, and these are not set within service nodes by default EDIT 3: After much trial and thoughts using IP as an ID may not be so great: it depends on how you configure storage. for any kind of distributed service like zookeeper, kafka, mongo, hdfs, you might want to use the emptyDir type of storage, so it is just on that node (mounting a remote storage kind of defeats the purpose of distributing these services!) emptyDir will relaod with the data on the same node, so it seems more logical to use the NODE ID (node IP) as the ID, because then a pod that restarts on the same node will have the data. That avoid potential corruption of the data (if a new node starts writing in the same dir that is not actually empty, who knows what can happen) and also with Kafka, the topics being assigned a broker.id, if the broker id changes, zookeeper does not update the topic broker.id and the topic looks like it is available BUT points to the wrong broker.id and it's a mess. So far I have yet to find how to get the node IP though, but I think it's possible to lookup in the API by looking up the service pods names and then the node they are deployed on. EDIT 4 To get the node IP, you can get the pod hostname == name from the endpoints API /api/v1/namespaces/default/endpoints/ as explained above. then you can get the node IP from the pod name with /api/v1/namespaces/default/pods/ PS: this is inspired by the example in the Kubernetes repo (example for rethinkdb here: https://github.com/kubernetes/kubernetes/tree/master/examples/rethinkdb
So my objective here is to set up a cluster of several kafka-brokers in a distributed fashion. But I can't see the way to make the brokers aware of each other. As far as i understand, every broker needs a separate ID in their config, which I cannot guarantee or configure if I launch the containers from kubernetes? They also need to have the same advertised_host? Are there any parameters I'm missing that would need to be changed for the nodes to discover each other? Would it be viable to do such a configuration at the end of the Dockerfile with a script? And/or a shared volume? I'm currently trying to do this with the spotify/kafka-image which has a preconfigured zookeeper+kafka combination, on vanilla Kubernetes.
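A condensed, hedged sketch of the startup-script logic from the response above; the service name, namespace, wait strategy and credential handling are all assumptions that would need adapting to your cluster:
NAMESPACE=default
SERVICE_NAME=zookeeper
URL="https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME}"
sleep 10   # crude wait for the discovery service to populate the endpoints
# Collect the member IPs and derive this node's numeric ID from its own IP.
MEMBER_IPS=$(curl -s --insecure "$URL" | grep '"ip"' | awk '{print $2}' | awk -F\" '{print $2}')
MY_IP=$(hostname -i)
MY_ID=$(echo "$MY_IP" | tr -d '.')   # e.g. 10.244.1.7 -> 1024417
echo "broker/server id: $MY_ID, peers: $MEMBER_IPS"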
Looks like the VPC was not correctly deleted. After deleting manually, the installation proceeded as normal.
###Instruction: I started a Kubernetes cluster on AWS using: wget -q -O - https://get.k8s.io | bash I then shutdown the cluster. I then tried restarting it and get the following error: A client error (RouteAlreadyExists) occurred when calling the CreateRoute operation: The route identified by 10.246.0.0/24 already exists. Any ideas? ### Response: Looks like the VPC was not correctly deleted. After deleting manually, the installation proceeded as normal.
I started a Kubernetes cluster on AWS using: wget -q -O - https://get.k8s.io | bash I then shutdown the cluster. I then tried restarting it and get the following error: A client error (RouteAlreadyExists) occurred when calling the CreateRoute operation: The route identified by 10.246.0.0/24 already exists. Any ideas?
As both a CloudFoundry (past) and Kubernetes (present) commiter, I'm probably uniquely qualified to answer this one. PaaS-like I like to call CloudFoundry an "Application PaaS" and Kubernetes a "Container PaaS", but the distinction is fairly subtle and fluid, given that both projects change over time to compete in the same markets. The distinction between the two is that CF has a staging layer that takes a (12-factor) user app (e.g. jar or gem) and a Heroku-style buildpack (e.g. Java+Tomcat or Ruby) and produces a droplet (analogous to a Docker image). CF doesn't expose the containerization interface to the user, but Kubernetes does. Audience CloudFoundry's primary audience is enterprise application devs who want to deploy 12-factor stateless apps using Heroku-style buildpacks. Kubernetes' audience is a little broader, including both stateless application and stateful service developers who provide their own containers. This distinction could change in the future: CloudFoundry could start to accept docker images (Lattice accepts Docker images). Kubernetes could add an image generation layer (OpenShift does something like this). Feature Comparison As both projects mature and compete, their similarities and differences will change. So take the following feature comparison with a grain of salt. Both CF and K8s share many similar features, like containerization, namespacing, authentication, Kubernetes competitive advantages: Group and scale pods of containers that share a networking stack, rather than just scaling independently Bring your own container Stateful persistance layer Larger, more active OSS community More extensible architecture with replacable components and 3rd party plugins Free web GUI CloudFoundry competitive advantages: Mature authentication, user grouping, and multi-tenancy support [x] Bring your own app Included load balancer Deployed, scaled, and kept alive by BOSH [x] Robust logging and metrics aggregation [x] Enterprise web GUI [x] [x] These features are not part of Diego or included in Lattice. Deployment One of CloudFoundry's competitive advantages is that it has a mature deployment engine, BOSH, which enables features like scaling, resurrection and monitoring of core CF components. BOSH also supports many IaaS layers with a pluggable cloud provider abstraction. Unfortunately, BOSH's learning curve and deployment configuration management are nightmarish. (As a BOSH committer, I think I can say this with accuracy.) Kubernetes' deployment abstraction is still in its infancy. Multiple target environments are available in the core repo, but they're not all working, well tested, or supported by the primary developers. This is mostly a maturity thing. One might expect this to improve over time and increase in abstraction. For example, Kubernetes on DCOS allows deploying Kubernetes to an existing DCOS cluster with a single command. Historical Context Diego is a rewrite of CF's Droplet Execution Agent. It was originally developed before Kubernetes was announced and has taken on more feature scope as the competitive landscape has evolved. Its original goal was to generate droplets (user application + CF buildpack) and run them in Warden (renamed Garden when rewritten in Go) containers. Since its inception it's also been repackaged as Lattice, which is somewhat of a CloudFoundry-lite (although that name was taken by an existing project). 
For that reason, Lattice is somewhat toy-like, in that it has deliberately reduced user audience and scope, explicitly missing features that would make it "enterprise-ready". Features that CF already provides. This is partly because Lattice is used to test the core components, without some of the overhead from the more complex CF, but you can also use Lattice in internal high-trust environments where security and multi-tenancy aren't as much of a concern. It's also worth mentioning that CloudFoundry and Warden (its container engine) predate Docker as well, by a couple years. Kubernetes on the other hand, is a relatively new project that was developed by Google based on years of container usage with BORG and Omega. Kubernetes could be thought of as 3rd generation container orchestration at Google, the same way Diego is 3rd generation container orchestration at Pivotal/VMware (v1 written at VMware; v2 at VMware with Pivotal Labs help; v3 at Pivotal after it took over the project).
###Instruction: The next version of CloudFoundry / Diego will offer native support for Docker containers which will be orchestrated across multible hosts [link]. This sounds very similar to Kubernetes. Of course, the problem Kubernetes is trying to solve is more a generic, where CloudFoundry is more focussed on app development. However, for me it sounds both are heading into a similar direction and CloudFoundry is adding a lot more features on top of the plain orchestration. So I'm wondering about use-cases where Kubernetes would add more value than CloudFoundry? ### Response: As both a CloudFoundry (past) and Kubernetes (present) commiter, I'm probably uniquely qualified to answer this one. PaaS-like I like to call CloudFoundry an "Application PaaS" and Kubernetes a "Container PaaS", but the distinction is fairly subtle and fluid, given that both projects change over time to compete in the same markets. The distinction between the two is that CF has a staging layer that takes a (12-factor) user app (e.g. jar or gem) and a Heroku-style buildpack (e.g. Java+Tomcat or Ruby) and produces a droplet (analogous to a Docker image). CF doesn't expose the containerization interface to the user, but Kubernetes does. Audience CloudFoundry's primary audience is enterprise application devs who want to deploy 12-factor stateless apps using Heroku-style buildpacks. Kubernetes' audience is a little broader, including both stateless application and stateful service developers who provide their own containers. This distinction could change in the future: CloudFoundry could start to accept docker images (Lattice accepts Docker images). Kubernetes could add an image generation layer (OpenShift does something like this). Feature Comparison As both projects mature and compete, their similarities and differences will change. So take the following feature comparison with a grain of salt. Both CF and K8s share many similar features, like containerization, namespacing, authentication, Kubernetes competitive advantages: Group and scale pods of containers that share a networking stack, rather than just scaling independently Bring your own container Stateful persistance layer Larger, more active OSS community More extensible architecture with replacable components and 3rd party plugins Free web GUI CloudFoundry competitive advantages: Mature authentication, user grouping, and multi-tenancy support [x] Bring your own app Included load balancer Deployed, scaled, and kept alive by BOSH [x] Robust logging and metrics aggregation [x] Enterprise web GUI [x] [x] These features are not part of Diego or included in Lattice. Deployment One of CloudFoundry's competitive advantages is that it has a mature deployment engine, BOSH, which enables features like scaling, resurrection and monitoring of core CF components. BOSH also supports many IaaS layers with a pluggable cloud provider abstraction. Unfortunately, BOSH's learning curve and deployment configuration management are nightmarish. (As a BOSH committer, I think I can say this with accuracy.) Kubernetes' deployment abstraction is still in its infancy. Multiple target environments are available in the core repo, but they're not all working, well tested, or supported by the primary developers. This is mostly a maturity thing. One might expect this to improve over time and increase in abstraction. For example, Kubernetes on DCOS allows deploying Kubernetes to an existing DCOS cluster with a single command. Historical Context Diego is a rewrite of CF's Droplet Execution Agent. 
It was originally developed before Kubernetes was announced and has taken on more feature scope as the competitive landscape has evolved. Its original goal was to generate droplets (user application + CF buildpack) and run them in Warden (renamed Garden when rewritten in Go) containers. Since its inception it's also been repackaged as Lattice, which is somewhat of a CloudFoundry-lite (although that name was taken by an existing project). For that reason, Lattice is somewhat toy-like, in that it has deliberately reduced user audience and scope, explicitly missing features that would make it "enterprise-ready". Features that CF already provides. This is partly because Lattice is used to test the core components, without some of the overhead from the more complex CF, but you can also use Lattice in internal high-trust environments where security and multi-tenancy aren't as much of a concern. It's also worth mentioning that CloudFoundry and Warden (its container engine) predate Docker as well, by a couple years. Kubernetes on the other hand, is a relatively new project that was developed by Google based on years of container usage with BORG and Omega. Kubernetes could be thought of as 3rd generation container orchestration at Google, the same way Diego is 3rd generation container orchestration at Pivotal/VMware (v1 written at VMware; v2 at VMware with Pivotal Labs help; v3 at Pivotal after it took over the project).
The next version of CloudFoundry / Diego will offer native support for Docker containers, which will be orchestrated across multiple hosts [link]. This sounds very similar to Kubernetes. Of course, the problem Kubernetes is trying to solve is more generic, whereas CloudFoundry is more focused on app development. However, to me it sounds like both are heading in a similar direction, and CloudFoundry is adding a lot more features on top of the plain orchestration. So I'm wondering about use cases where Kubernetes would add more value than CloudFoundry?
There are several reasons a container can fail to start: the container command itself fails and exits -> check your docker image and startup script to make sure it works. Use sudo docker ps -a to find the offending container and sudo docker logs <container> to check for a failure inside the container. a dependency is not there: that happens, for example, when one tries to mount a volume that is not present, for example Secrets that are not created yet. --> make sure the dependent volumes are created.
###Instruction: I am following the Fedora getting started guide (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_ansible_config.md) and trying to run the pod fedoraapache. But kubectl always shows fedoraapache as pending: POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS fedoraapache fedoraapache fedora/apache 192.168.226.144/192.168.226.144 name=fedoraapache Pending Since it is pending, I cannot run kubectl log pod fedoraapache. So, I instead run kubectl describe pod fedoraapache, which shows the following errors: Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4 Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4: (exit status 1) Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747 Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747 with error: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1) Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1) Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1) Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e with error: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1) Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614 with error: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1) Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614 Fri, 20 Mar 2015 21:42:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 109 {kubelet 192.168.226.144} implicitly required container POD pulled Successfully pulled image "kubernetes/pause:latest" Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 
22:00:35 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1) ### Response: There are several reasons a container can fail to start: the container command itself fails and exits -> check your docker image and startup script to make sure it works. Use sudo docker ps -a to find the offending container and sudo docker logs <container> to check for a failure inside the container. a dependency is not there: that happens, for example, when one tries to mount a volume that is not present, for example Secrets that are not created yet. --> make sure the dependent volumes are created.
I am following the Fedora getting started guide (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_ansible_config.md) and trying to run the pod fedoraapache. But kubectl always shows fedoraapache as pending: POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS fedoraapache fedoraapache fedora/apache 192.168.226.144/192.168.226.144 name=fedoraapache Pending Since it is pending, I cannot run kubectl log pod fedoraapache. So, I instead run kubectl describe pod fedoraapache, which shows the following errors: Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4 Fri, 20 Mar 2015 22:00:05 +0800 Fri, 20 Mar 2015 22:00:05 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container d4877bdffd4f2a13a17d4cc93c27c1c93d5494807b39ee8a823f5d9350e404d4: (exit status 1) Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747 Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747 with error: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1) Fri, 20 Mar 2015 22:00:15 +0800 Fri, 20 Mar 2015 22:00:15 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 1c32b4c6e1aad0e575f6a155aebefcd5dd96857b12c47a63bfd8562fba961747: (exit status 1) Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1) Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e Fri, 20 Mar 2015 22:00:25 +0800 Fri, 20 Mar 2015 22:00:25 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e with error: API error (500): Cannot start container 8b117ee5c6bf13f0e97b895c367ce903e2a9efbd046a663c419c389d9953c55e: (exit status 1) Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD failed Failed to start with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614 with error: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1) Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 {kubelet 192.168.226.144} implicitly required container POD created Created with docker id 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614 Fri, 20 Mar 2015 21:42:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 109 {kubelet 192.168.226.144} implicitly required container POD pulled Successfully pulled image "kubernetes/pause:latest" Fri, 20 Mar 2015 22:00:35 +0800 Fri, 20 Mar 2015 22:00:35 +0800 1 
{kubelet 192.168.226.144} failedSync Error syncing pod, skipping: API error (500): Cannot start container 4b463040842b6a45db2ab154652fd2a27550dbd2e1a897c98473cd0b66d2d614: (exit status 1)
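A short sketch of that debugging loop, run on the node the pod was scheduled to (192.168.226.144 here); the container ID is whatever shows up in your own docker ps -a output:
# Find the recently exited containers, including the failed apache one:
sudo docker ps -a | head
# Read the failing container's output to see why it exits with status 1:
sudo docker logs <container-id>
# If the image itself is suspect, try running it by hand:
sudo docker run --rm -it fedora/apache /bin/bash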
Set an imagePullPolicy of Always on the container
###Instruction: Using Kubernetes for deployment: Considering I have a Dockerfile, I build, then push to the registry. If I run a container on a host, the image gets pulled and the container is run. Now, if I update the Dockerfile, build and push again without changing its tag, then the image is changed in the registry, but the host has the image pulled already, and it doesn't seem to go look for updates. How do I force a pull to get the latest image when running a container? I can manually pull the image, but I'd like to know if there is a 'formal way' of doing this (in the pod or rc templates?) Thanks for insight. ### Response: Set an imagePullPolicy of Always on the container
Using Kubernetes for deployment: Considering I have a Dockerfile, I build, then push to the registry. If I run a container on a host, the image gets pulled and the container is run. Now, if I update the Dockerfile, build and push again without changing its tag, then the image is changed in the registry, but the host has the image pulled already, and it doesn't seem to go look for updates. How do I force a pull to get the latest image when running a container? I can manually pull the image, but I'd like to know if there is a 'formal way' of doing this (in the pod or rc templates?) Thanks for insight.
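A minimal, hedged sketch of where that field lives in a pod (or rc template) spec; the names and image are placeholders:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest   # same tag, freshly re-pushed
    imagePullPolicy: Always                    # pull on every container start
EOF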
When you create a service, the service will proxy the connection to the different pods. A service therefore maintains the list of IPs of the pods' containers. You can then look those up in the API; they will be at http(s)://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME} NAMESPACE is the name of the namespace. By default it is default, so if you didn't set a namespace in the pod, replace it with 'default'. SERVICE_NAME is your service name. KUBERNETES_SERVICE_HOST is an environment variable available in your container. You will get a JSON object with containers and "ip" tags. You can then pipe the answer to grep '\"ip\"' | awk '{print $2}' | awk -F\" '{print $2}' to isolate the IPs. You might also need credentials to reach the https endpoint (test it with curl); in Google Cloud, credentials can be found by looking up gcloud cluster-info <your-cluster-name> Note: even if you don't use the service to talk to your pods, it serves the purpose of gathering the IPs for your pods. However, note that these may change if a pod gets rescheduled somewhere else when a node fails; the Service takes care of maintaining the up-to-date list, but your app needs to poll at intervals or set a watch on the endpoints to keep its list up to date.
###Instruction: I have several pods, for example a python web app and a redis(shared by other apps), so I need to place redis in a separate pod. But they are all use the same subnet from docker(172.17.0.0/16) or even the same ip address. how can app pods talk with redis pod? Maybe what I want ask is what's the best way to setup multi-host conainer networking. 7 weeks later, I get more familiar with kubernetes. I know kubernetes has network assuming that pods can access each other. so if you app service pod need to access a shared service(redis) pod, you need expose the the shared service as a kubernetes service, then you can get the shared service endpoint from app pods' environment variables or hostname. ### Response: When you create a service, the service will proxy the connection to the different pods. A service therefore maintains the list of IPs of the pods' containers. You can then look those up in the API they will be at http(s)://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/${NAMESPACE}/endpoints/${SERVICE_NAME} NAMESPACE is the name of the namespace. By default it is default, so if you didn't set a namespace in the pod replace with 'default' SERVICE_NAME is your service name KUBERNETES_SERVICE_HOST is an environment variable available in your container. You will get a JSON object with containers and "ip" tags. You can then pipe the answer to a grep '\"ip\"' | awk '{print $2}' | awk -F\" '{print $2}' do isolate the IPs You might also need credentials to reach the https (test it with curl) in Google Cloud, credentials can be found by looking up gcloud cluster-info <your-cluster-name> Note: even if you don't use the service to talk to your pods, it serves the purpose of gathering the IPs for your pods. However note that these may change if the pod get rescheduled somewhere else when a node fails, the Service takes care of maintaining the up-to-date list, but your app needs to pull at intervals or set a watch on the endpoints to keep it's list up to date.
I have several pods, for example a Python web app and a Redis instance (shared by other apps), so I need to place Redis in a separate pod. But they all use the same subnet from Docker (172.17.0.0/16), or even the same IP address. How can app pods talk with the Redis pod? Maybe what I want to ask is: what's the best way to set up multi-host container networking. 7 weeks later, I am more familiar with Kubernetes. I know Kubernetes has a network model that assumes pods can access each other. So if your app service pod needs to access a shared service (Redis) pod, you need to expose the shared service as a Kubernetes service; then you can get the shared service endpoint from the app pods' environment variables or hostname.
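A minimal sketch of the endpoints lookup described in the answer above, run from inside a pod; the service name redis and namespace default are assumptions, the token path is the standard service-account mount, and -k skips TLS verification for a quick test only.

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/default/endpoints/redis \
  | grep '"ip"' | awk '{print $2}' | awk -F\" '{print $2}'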
You are right in saying that kubectl rolling-update frontend --image=image:v2 does not give you a way to provide more details about a container when updating a pod that has more than one container. It gives you an error Image update is not supported for multi-container pods But, it certainly gives you 2 variants kubectl rolling-update frontend-v1 -f frontend-v2.json cat frontend-v2.json | kubectl rolling-update frontend-v1 -f - where v1 and v2 can be your build versions. This way, you would also have the advantage of your pod names adhering to your build versions too.
###Instruction: Is there any way to do a rolling-update with a replication controller that has 2 or more containers? For example, I have Jenkins setup to automatically do a rolling update on a rep controller in our dev environment once a successful build takes place using the --image flag specifying the new container's image stored in GCR. This method doesn't work when there are two containers in the same pod and there is no "-c" flag to specify the container you wish to update on the rolling-update command as there is on other commands such as "exec" or "logs". The reason I'm looking to have multiple pods is to implement a logging sidecar as in: https://github.com/kubernetes/contrib/tree/master/logging/fluentd-sidecar-es The only alternative I can think of is to bake the fluentd config into each container, which feels decidedly 'un-kubernetes' to me. ### Response: You are right in saying that kubectl rolling-update frontend --image=image:v2 does not give you a way to provide more details about a container when updating a pod that has more than one container. It gives you an error Image update is not supported for multi-container pods But, it certainly gives you 2 variants kubectl rolling-update frontend-v1 -f frontend-v2.json cat frontend-v2.json | kubectl rolling-update frontend-v1 -f - where v1 and v2 can be your build versions. This way, you would also have the advantage of your pod names adhering to your build versions too.
Is there any way to do a rolling-update with a replication controller that has 2 or more containers? For example, I have Jenkins setup to automatically do a rolling update on a rep controller in our dev environment once a successful build takes place using the --image flag specifying the new container's image stored in GCR. This method doesn't work when there are two containers in the same pod and there is no "-c" flag to specify the container you wish to update on the rolling-update command as there is on other commands such as "exec" or "logs". The reason I'm looking to have multiple pods is to implement a logging sidecar as in: https://github.com/kubernetes/contrib/tree/master/logging/fluentd-sidecar-es The only alternative I can think of is to bake the fluentd config into each container, which feels decidedly 'un-kubernetes' to me.
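A minimal sketch of the file-based rolling update mentioned in the answer; every name, label and image tag here is an assumption. The new replication controller must have a different name and at least one selector label that differs from the old one, so only the app container's image and a version label change while the logging sidecar is carried over untouched.

# frontend-v2.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-v2                  # must differ from the old RC's name
spec:
  replicas: 3
  selector:
    name: frontend
    version: v2                      # must differ from frontend-v1's selector
  template:
    metadata:
      labels:
        name: frontend
        version: v2
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/frontend:v2         # placeholder: the only functional change
      - name: fluentd-sidecar
        image: gcr.io/my-project/fluentd-sidecar:1   # placeholder: unchanged sidecar image

It would then be applied with kubectl rolling-update frontend-v1 -f frontend-v2.yaml as the answer describes.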
UPDATE: gitRepo is now deprecated https://github.com/kubernetes/kubernetes/issues/60999 ORIGINAL ANSWER: Going through the code, this is what I figured: - name: data gitRepo: repository: "git repo url" revision: "hash of the commit to use" After fixing typos in my mountPath, it works fine.
###Instruction: Can someone give an example of how to use the gitRepo type of volume in Kubernetes? The doc says it's a plugin; not sure what that means. Could not find an example anywhere and I don't know the proper syntax. Especially, are there parameters to pull a specific branch, use credentials (username, password, or SSH key), etc.? EDIT: Going through the Kubernetes code, this is what I figured so far: - name: data gitRepo: repository: "git repo url" revision: "hash of the commit to use" But I can't seem to make it work, and I'm not sure how to troubleshoot this issue. ### Response: UPDATE: gitRepo is now deprecated https://github.com/kubernetes/kubernetes/issues/60999 ORIGINAL ANSWER: Going through the code, this is what I figured: - name: data gitRepo: repository: "git repo url" revision: "hash of the commit to use" After fixing typos in my mountPath, it works fine.
Can someone give an example of how to use the gitRepo type of volume in Kubernetes? The doc says it's a plugin; not sure what that means. Could not find an example anywhere and I don't know the proper syntax. Especially, are there parameters to pull a specific branch, use credentials (username, password, or SSH key), etc.? EDIT: Going through the Kubernetes code, this is what I figured so far: - name: data gitRepo: repository: "git repo url" revision: "hash of the commit to use" But I can't seem to make it work, and I'm not sure how to troubleshoot this issue.
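A minimal sketch of a complete pod built around the snippet from the answer; the image, mount path, repository URL and revision are placeholders. As far as the fields shown go, there is no separate branch or credentials parameter — the revision is a commit hash, and credentials would have to be embedded in the URL if needed.

apiVersion: v1
kind: Pod
metadata:
  name: git-volume-demo
spec:
  containers:
  - name: web
    image: nginx                               # placeholder image
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html         # the repository is checked out here
  volumes:
  - name: data
    gitRepo:
      repository: "https://github.com/example/site.git"       # placeholder repo URL
      revision: "2de4a1c3f7a9b0d8e6f5c4b3a2918d7c6b5a4f3e"     # placeholder commit hash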
The ID is guaranteed to be unique at any single point in time, since Kubernetes doesn't allow two pods in the same namespace to have the same name. There aren't any longer-term guarantees though, since they're just generated as a random string of 5 alphanumeric characters. However, given that there are more than 60 million such random strings, conflicts across time are also unlikely in most environments. Yes, you can pull in the pod's namespace and name as environment variables using what's called the "Downward API", adding a field on the container like env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name
###Instruction: So I'm researching how to use Kubernetes for my case. I installed it and played a bit. The question is when the replication controller starts couple of replicas they have something like an id in their name: How unique is this id? Is it uniqueness for the lifetime of kubernetes? Is it unique across different kubernetes runs (i.e. if I restart kubernetes)? How to pass this id to the app in the container? Can I specify some kind of template in the yaml so for example the id will be assigned to environment variable or something similar? Alternatively is there a way for the app in the container to ask for this id? More explanation of the use case. I have an application that writes some session files inside a directory. I want to guarantee unique for the session ids in the system. This means if one app instance is running on VM1 and another instance on VM2, I want to prepend some kind of identifier to the ids like app-1-dajk4l and app-2-dajk4l, where app is the name of the app and 1, 2 is the instance identifier, which should come from the replication controller because it is dynamic and can not be configured manually. dajk4l is some identifier like the current timestamp or similar. Thanks. ### Response: The ID is guaranteed to be unique at any single point in time, since Kubernetes doesn't allow two pods in the same namespace to have the same name. There aren't any longer-term guarantees though, since they're just generated as a random string of 5 alphanumeric characters. However, given that there are more than 60 million such random strings, conflicts across time are also unlikely in most environments. Yes, you can pull in the pod's namespace and name as environment variables using what's called the "Downward API", adding a field on the container like env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name
So I'm researching how to use Kubernetes for my case. I installed it and played a bit. The question is when the replication controller starts couple of replicas they have something like an id in their name: How unique is this id? Is it uniqueness for the lifetime of kubernetes? Is it unique across different kubernetes runs (i.e. if I restart kubernetes)? How to pass this id to the app in the container? Can I specify some kind of template in the yaml so for example the id will be assigned to environment variable or something similar? Alternatively is there a way for the app in the container to ask for this id? More explanation of the use case. I have an application that writes some session files inside a directory. I want to guarantee unique for the session ids in the system. This means if one app instance is running on VM1 and another instance on VM2, I want to prepend some kind of identifier to the ids like app-1-dajk4l and app-2-dajk4l, where app is the name of the app and 1, 2 is the instance identifier, which should come from the replication controller because it is dynamic and can not be configured manually. dajk4l is some identifier like the current timestamp or similar. Thanks.
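A minimal sketch expanding the answer's snippet to expose both the pod name and its namespace via the Downward API; the pod name and image are placeholders. The app could then build its session-id prefix from $MY_POD_NAME, which is already unique per pod at any point in time.

apiVersion: v1
kind: Pod
metadata:
  name: downward-demo                  # placeholder name
spec:
  containers:
  - name: app
    image: example/session-app:1.0     # placeholder image
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace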
You can use a PullSecret to tell Kubernetes what registry to get your containers from. Please see: http://releases.k8s.io/release-1.0/docs/user-guide/images.md#specifying-imagepullsecrets-on-a-pod
###Instruction: I know how to run a registry mirror docker run -p 5000:5000 \ -e STANDALONE=false \ -e MIRROR_SOURCE=https://registry-1.docker.io \ -e MIRROR_SOURCE_INDEX=https://index.docker.io \ registry and how to use it docker --registry-mirror=http://10.0.0.2:5000 -d But how can I use multiple registry mirrors? This is what I need: Docker Hub mirror Google Container Registry mirror for k8s Private registry So I have to make two registry mirrors and a private registry. I want to docker run a registry mirror for the 1st and 2nd registries, and one more docker run registry to hold my private registry. The client will use all three of these registries. I have no clue how to do this; I think this is a common use case, please help, thanks. ### Response: You can use a PullSecret to tell Kubernetes what registry to get your containers from. Please see: http://releases.k8s.io/release-1.0/docs/user-guide/images.md#specifying-imagepullsecrets-on-a-pod
I know how to run a registry mirror docker run -p 5000:5000 \ -e STANDALONE=false \ -e MIRROR_SOURCE=https://registry-1.docker.io \ -e MIRROR_SOURCE_INDEX=https://index.docker.io \ registry and how to use it docker --registry-mirror=http://10.0.0.2:5000 -d But how can I use multiple registry mirrors? This is what I need: Docker Hub mirror Google Container Registry mirror for k8s Private registry So I have to make two registry mirrors and a private registry. I want to docker run a registry mirror for the 1st and 2nd registries, and one more docker run registry to hold my private registry. The client will use all three of these registries. I have no clue how to do this; I think this is a common use case, please help, thanks.
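A minimal sketch of the pull-secret usage the linked doc describes; the registry host, image and secret name are placeholders, and the secret itself would hold the registry credentials (a base64-encoded .dockercfg) as per that doc.

apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: registry.example.com:5000/myapp:1.0   # placeholder image in the private registry
  imagePullSecrets:
  - name: myregistrykey                          # placeholder secret with the registry credentials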
Unless you have many megabytes of config, this system sounds unnecessarily complex. The intended usage is for you to just put each config into a secret, and the pods needing the config can mount that secret as a volume. You can then use any of a variety of mechanisms to pass that config to your task, e.g. if it's environment variables source secret/config.sh; ./mybinary is a simple way. I don't think you gain any extra security by storing a private key as a secret.
###Instruction: I have a pod that runs containers which require access to sensitive information like API keys and DB passwords. Right now, these sensitive values are embedded in the controller definitions like so: env: - name: DB_PASSWORD value: password which are then available inside the Docker container as the $DB_PASSWORD environment variable. All fairly easy. But reading their documentation on Secrets, they explicitly say that putting sensitive configuration values into your definition breaches best practice and is potentially a security issue. The only other strategy I can think of is the following: create an OpenPGP key per user community or namespace use crypt to set the configuration value into etcd (which is encrypted using the private key) create a kubernetes secret containing the private key, like so associate that secret with the container (meaning that the private key will be accessible as a volume mount), like so when the container is launched, it will access the file inside the volume mount for the private key, and use it to decrypt the conf values returned from etcd this can then be incorporated into confd, which populates local files according to a template definition (such as Apache or WordPress config files) This seems fairly complicated, but more secure and flexible, since the values will no longer be static and stored in plaintext. So my question, and I know it's not an entirely objective one, is whether this is completely necessary or not? Only admins will be able to view and execute the RC definitions in the first place; so if somebody's breached the kubernetes master, you have other problems to worry about. The only benefit I see is that there's no danger of secrets being committed to the filesystem in plaintext... Are there any other ways to populate Docker containers with secret information in a secure way? ### Response: Unless you have many megabytes of config, this system sounds unnecessarily complex. The intended usage is for you to just put each config into a secret, and the pods needing the config can mount that secret as a volume. You can then use any of a variety of mechanisms to pass that config to your task, e.g. if it's environment variables source secret/config.sh; ./mybinary is a simple way. I don't think you gain any extra security by storing a private key as a secret.
I have a pod that runs containers which require access to sensitive information like API keys and DB passwords. Right now, these sensitive values are embedded in the controller definitions like so: env: - name: DB_PASSWORD value: password which are then available inside the Docker container as the $DB_PASSWORD environment variable. All fairly easy. But reading their documentation on Secrets, they explicitly say that putting sensitive configuration values into your definition breaches best practice and is potentially a security issue. The only other strategy I can think of is the following: create an OpenPGP key per user community or namespace use crypt to set the configuration value into etcd (which is encrypted using the private key) create a kubernetes secret containing the private key, like so associate that secret with the container (meaning that the private key will be accessible as a volume mount), like so when the container is launched, it will access the file inside the volume mount for the private key, and use it to decrypt the conf values returned from etcd this can then be incorporated into confd, which populates local files according to a template definition (such as Apache or WordPress config files) This seems fairly complicated, but more secure and flexible, since the values will no longer be static and stored in plaintext. So my question, and I know it's not an entirely objective one, is whether this is completely necessary or not? Only admins will be able to view and execute the RC definitions in the first place; so if somebody's breached the kubernetes master, you have other problems to worry about. The only benefit I see is that there's no danger of secrets being committed to the filesystem in plaintext... Are there any other ways to populate Docker containers with secret information in a secure way?
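A minimal sketch of the intended usage from the answer — a secret mounted as a volume and sourced at container start. The secret name, key, mount path, image and binary name are assumptions; the base64 value encodes export DB_PASSWORD=password.

apiVersion: v1
kind: Secret
metadata:
  name: app-config
type: Opaque
data:
  config.sh: ZXhwb3J0IERCX1BBU1NXT1JEPXBhc3N3b3Jk   # base64 of: export DB_PASSWORD=password
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app:1.0                           # placeholder image
    command: ["sh", "-c", ". /etc/secret/config.sh && ./mybinary"]
    volumeMounts:
    - name: config
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: app-config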
GitHub allows cloning repositories using an OAuth token in https URLs as such: $ git clone https://$GH_TOKEN@github.com/owner/repo.git see https://help.github.com/articles/creating-an-access-token-for-command-line-use/
###Instruction: Has anyone mounted a private GitHub repository into a kubernetes pod volume? What is the best way to achieve this? I thought of two possible ways: Using user / password in HTTPS repository URL Using private SSH key on the machine I like the second better, but I couldn't figure which user is pulling the repository to puts the appropriate SSH configuration for it. Any thoughts? ### Response: GitHub allows cloning repositories using an OAuth token in https URLs as such: $ git clone https://$GH_TOKEN@github.com/owner/repo.git see https://help.github.com/articles/creating-an-access-token-for-command-line-use/
Has anyone mounted a private GitHub repository into a kubernetes pod volume? What is the best way to achieve this? I thought of two possible ways: Using user / password in HTTPS repository URL Using private SSH key on the machine I like the second better, but I couldn't figure which user is pulling the repository to puts the appropriate SSH configuration for it. Any thoughts?
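A minimal sketch of how such a token could be combined with the gitRepo volume discussed elsewhere in this document; the token, owner, repo and ref are placeholders, and note the token then sits in plain text in the pod spec, so the usual caveats apply.

volumes:
- name: source
  gitRepo:
    repository: "https://<GH_TOKEN>:x-oauth-basic@github.com/owner/repo.git"   # placeholder token/repo
    revision: "master"                                                         # placeholder ref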
Did you do a sudo -s on the node? You have to be an admin to connect to the Docker daemon and run queries using the Docker command line client.
###Instruction: I am trying to install Stratos with Kubernetes in a testing environment to build Stratos. I downloaded the Kubernetes binaries and provisioned a Docker registry to the VAGRANT_KUBERNETES_SETUP folder (in 2.c. i in the page). But it shows 3 failed units (docker.service, setup-network-environment.service and docker.socket) when I log into the master node, so I can't view Docker images by using the 'docker images' command. When I view Docker images it gives this error: "FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?" How can I fix this problem? Do I need to install it in a different way to work with Vagrant? ### Response: Did you do a sudo -s on the node? You have to be an admin to connect to the Docker daemon and run queries using the Docker command line client.
I am trying to install Stratos with Kubernetes in a testing environment to build Stratos. I downloaded the Kubernetes binaries and provisioned a Docker registry to the VAGRANT_KUBERNETES_SETUP folder (in 2.c. i in the page). But it shows 3 failed units (docker.service, setup-network-environment.service and docker.socket) when I log into the master node, so I can't view Docker images by using the 'docker images' command. When I view Docker images it gives this error: "FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?" How can I fix this problem? Do I need to install it in a different way to work with Vagrant?
Hello, I am kinda new to Kubernetes, but I have similar questions and concerns. I will try to answer some of them or redirect you to the relevant sections of the user guide. In case you are deploying Kubernetes on a non-cloud-enabled provider, like for example Vagrant/local, etc., then some features are not currently offered or automated by the platform for you. One of those things is the 'LoadBalancer' type of Service. The automatic provisioning and assignment of a PUBLIC IP to the service (acting as an L.B.) happens currently only in platforms like Google Container Engine. See issue here and here. The official documentation states On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service. Currently an alternative is being developed and documented, see here, using HAProxy. Maybe in the near future, Kubernetes will eventually support that kind of feature on all the available platforms it can be deployed and operated on, so always check their updated features. What you are referring to as performance degradation is most probably due to the way the PublicIP (NodePort from version 1.0 onwards) feature works. Meaning that with the use of the NodePort service type, Kubernetes assigns a port on ALL nodes of the cluster for this kind of service. Then kube-proxy intercepts the calls to these ports and forwards them to the actual service, etc. An example of using HAProxy trying to solve the very same problem can be found here. Hope that helped a bit.
###Instruction: I have encountered a scalability problem when trying out the Kubernetes cluster. To simplify the topology in my test machine, the NodePort type is used to expose the individual service externally. The bare metal hosting the node and master is a RHEL 7 with 24 CPUs and 32G RAM; I don't yet have a dedicated load balancer or cloud-provider-like infrastructure. A snippet of the service definition looks just like below "spec": { "ports": [{ "port": 10443, "targetPort": 10443, "protocol": "TCP", "nodePort": 30443 } ], "type": "NodePort", This way the application can be accessed via https://[node_machine]:30443/[a_service] Such a service is only backed by one Pod. Ideally I would want to have several services deployed on the same node (but using different NodePorts), all running concurrently. Things were working well until it became evident that for a similar workload, increasing the number of services deployed (and therefore backend pods as well) makes the applications degrade in performance. Surprisingly, when breaking down the service loading time, I noticed there's dramatic degradation in 'Connection Time', which seems to indicate there is a slowdown somewhere in the 'network' layer. Please note that the load isn't high enough to drive much of the CPU on the node yet. I read about the shortcomings in the doc, but I'm not sure if what I hit is exactly the limitation of the kube-proxy/Service described there. The questions are: Is there any suggestion on how to make it more scalable? I.e. to be able to support more services/Pods without sacrificing the applications' performance? The NodePort type is the easiest way to set up the 'public' address for our services, but is there any limitation on scalability or performance if all services and Pods are set up this way? Would there be any difference if we change the type to LoadBalancer? "type": "LoadBalancer" Furthermore, is there a benefit to having a dedicated LoadBalancer or reverse proxy to improve the scalability, e.g. HAProxy or alike, that routes traffic from outside to the backend Pods (or Services)? I noticed there's some work done for Nginx darkgaro/kubernetes-reverseproxy - unfortunately the doc seems incomplete and there's no concrete example. In some of the other threads folks talked about Vulcan - is it the recommended LB tool for Kubernetes? Your recommendation and help are highly appreciated! ### Response: Hello, I am kinda new to Kubernetes, but I have similar questions and concerns. I will try to answer some of them or redirect you to the relevant sections of the user guide. In case you are deploying Kubernetes on a non-cloud-enabled provider, like for example Vagrant/local, etc., then some features are not currently offered or automated by the platform for you. One of those things is the 'LoadBalancer' type of Service. The automatic provisioning and assignment of a PUBLIC IP to the service (acting as an L.B.) happens currently only in platforms like Google Container Engine. See issue here and here. The official documentation states On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service. Currently an alternative is being developed and documented, see here, using HAProxy. Maybe in the near future, Kubernetes will eventually support that kind of feature on all the available platforms it can be deployed and operated on, so always check their updated features.
What you are referring to as performance degradation is most probably due to the way the PublicIP (NodePort from version 1.0 onwards) feature works. Meaning that with the use of the NodePort service type, Kubernetes assigns a port on ALL nodes of the cluster for this kind of service. Then kube-proxy intercepts the calls to these ports and forwards them to the actual service, etc. An example of using HAProxy trying to solve the very same problem can be found here. Hope that helped a bit.
I have encountered a scalability problem when trying out the Kubernetes cluster. To simplify the topology in my test machine, the NodePort type is used to expose the individual service externally. The bare metal hosting the node and master is a RHEL 7 with 24 CPUs and 32G RAM; I don't yet have a dedicated load balancer or cloud-provider-like infrastructure. A snippet of the service definition looks just like below "spec": { "ports": [{ "port": 10443, "targetPort": 10443, "protocol": "TCP", "nodePort": 30443 } ], "type": "NodePort", This way the application can be accessed via https://[node_machine]:30443/[a_service] Such a service is only backed by one Pod. Ideally I would want to have several services deployed on the same node (but using different NodePorts), all running concurrently. Things were working well until it became evident that for a similar workload, increasing the number of services deployed (and therefore backend pods as well) makes the applications degrade in performance. Surprisingly, when breaking down the service loading time, I noticed there's dramatic degradation in 'Connection Time', which seems to indicate there is a slowdown somewhere in the 'network' layer. Please note that the load isn't high enough to drive much of the CPU on the node yet. I read about the shortcomings in the doc, but I'm not sure if what I hit is exactly the limitation of the kube-proxy/Service described there. The questions are: Is there any suggestion on how to make it more scalable? I.e. to be able to support more services/Pods without sacrificing the applications' performance? The NodePort type is the easiest way to set up the 'public' address for our services, but is there any limitation on scalability or performance if all services and Pods are set up this way? Would there be any difference if we change the type to LoadBalancer? "type": "LoadBalancer" Furthermore, is there a benefit to having a dedicated LoadBalancer or reverse proxy to improve the scalability, e.g. HAProxy or alike, that routes traffic from outside to the backend Pods (or Services)? I noticed there's some work done for Nginx darkgaro/kubernetes-reverseproxy - unfortunately the doc seems incomplete and there's no concrete example. In some of the other threads folks talked about Vulcan - is it the recommended LB tool for Kubernetes? Your recommendation and help are highly appreciated!
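For comparison with the NodePort snippet in the question, a minimal sketch of the LoadBalancer variant discussed in the answer; the names are placeholders. On a bare-metal RHEL host with no cloud provider, the external IP is simply never provisioned, which is the limitation the answer points out.

apiVersion: v1
kind: Service
metadata:
  name: a-service                # placeholder name
spec:
  type: LoadBalancer             # only provisions an external LB/IP on supported cloud providers
  selector:
    name: a-service-backend      # placeholder selector
  ports:
  - port: 10443
    targetPort: 10443
    protocol: TCP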
Yes, you can attach to a container in a pod. Using Kubernetes 1.0 issue the following command: Do: kubectl get po to get the POD name kubectl describe po POD-NAME to find container name Then: kubectl exec -it POD-NAME -c CONTAINER-NAME bash Assuming you have bash Its similar to docker exec -it CONTAINER-NAME WHAT_EVER_LOCAL_COMMAND
###Instruction: I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB. On my local VM, I just use docker run to start the two containers and then I use docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB and then start the web server in the container. The app runs fine. Now I'm trying to deploy the app on Google Cloud Platform. I set up my gcloud configuration (project, compute/zone). I created a cluster. I created a JSON pod config file which specifies both containers. I created the pod. I opened the firewall for the port specified in the pod config file. At this point: I look at the pod (gcloud preview container kubectl get pods), it shows both containers are running. I SSH to the cluster (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps and it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running, but it keeps trying to start and exiting every 10 seconds or so. So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks. ### Response: Yes, you can attach to a container in a pod. Using Kubernetes 1.0 issue the following command: Do: kubectl get po to get the POD name kubectl describe po POD-NAME to find container name Then: kubectl exec -it POD-NAME -c CONTAINER-NAME bash Assuming you have bash Its similar to docker exec -it CONTAINER-NAME WHAT_EVER_LOCAL_COMMAND
I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB. On my local VM, I just use docker run to start the two containers and then I use docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB and then start the web server in the container. The app runs fine. Now I'm trying to deploy the app on Google Cloud Platform. I set up my gcloud configuration (project, compute/zone). I created a cluster. I created a JSON pod config file which specifies both containers. I created the pod. I opened the firewall for the port specified in the pod config file. At this point: I look at the pod (gcloud preview container kubectl get pods), it shows both containers are running. I SSH to the cluster (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps and it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running, but it keeps trying to start and exiting every 10 seconds or so. So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.
The way that Kubernetes is designed has Services decoupled from Pods. You can not inject a Secret or an env var into a running Pod. What you want is to configure the Pod to use the env var or Secret.
###Instruction: Is there a way to pass environment variables through the services in Kubernetes? I tried passing it in to my service yaml like this: apiVersion: v1 kind: Service metadata: labels: name: kafka name: kafka spec: ports: - port: 9092 selector: name: kafka env: - name: BROKER_ID value: "1" The service is being consumed by kubectl, and is created. I've confirmed the service is connected to my container through env | grep KAFKA and the output of variables greatly increase, as expected when my service is up. However, I would like to pass in custom environment-variables that have to be different depending on which instance of the container it is in. Is this possible? ### Response: The way that Kubernetes is designed has Services decoupled from Pods. You can not inject a Secret or an env var into a running Pod. What you want is to configure the Pod to use the env var or Secret.
Is there a way to pass environment variables through the services in Kubernetes? I tried passing it in to my service yaml like this: apiVersion: v1 kind: Service metadata: labels: name: kafka name: kafka spec: ports: - port: 9092 selector: name: kafka env: - name: BROKER_ID value: "1" The service is being consumed by kubectl, and is created. I've confirmed the service is connected to my container through env | grep KAFKA and the output of variables greatly increase, as expected when my service is up. However, I would like to pass in custom environment-variables that have to be different depending on which instance of the container it is in. Is this possible?
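A minimal sketch of where the env block actually belongs according to the answer — on the container inside the pod template (here a replication controller), not on the Service; the image is a placeholder, while the port and variable come from the question.

apiVersion: v1
kind: ReplicationController
metadata:
  name: kafka
spec:
  replicas: 1
  selector:
    name: kafka
  template:
    metadata:
      labels:
        name: kafka
    spec:
      containers:
      - name: kafka
        image: example/kafka:latest    # placeholder image
        ports:
        - containerPort: 9092
        env:
        - name: BROKER_ID              # per-pod configuration lives here, not in the Service spec
          value: "1"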
The second script you reference (gce/upgrade.sh) only works if your cluster is running on GCE. There isn't (yet) an equivalent script for AWS, but you could look at the script and follow the steps (or write them into a script) to get the same behavior. The main difference between upgrade.sh and kube-push.sh is that the former does a replacement upgrade (remove a node, create a new node to replace it) whereas the latter does an "in place" upgrade. Removing and replacing the master node only works if the persistent data (etcd database, server certificates, authorized bearer tokens, etc) reside on a persistent disk separate from the boot disk of the master (this is how it is configured by default in GCE). Removing and replacing nodes should be fine on AWS (but keep in mind that any pods not under a replication controller won't be restarted). Doing an in-place upgrade doesn't require any special configuration, but that code path isn't as thoroughly tested as the remove-and-replace option. You shouldn't need to entirely replace your cluster when upgrading to a new version, unless you are using pre-release versions (e.g. alpha or beta releases) which can sometimes have breaking changes between them.
###Instruction: What is the recommended way to upgrade a kubernetes cluster as new versions are released? I heard here it may be https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-push.sh. If that is the case how does kube-push.sh relate to https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/upgrade.sh? I've also heard here that we should instead create a new cluster, copy/move the pods, replication controllers, and services from the first cluster to the new one and then turn off the first cluster. I'm running my cluster on aws if that is relevant. ### Response: The second script you reference (gce/upgrade.sh) only works if your cluster is running on GCE. There isn't (yet) an equivalent script for AWS, but you could look at the script and follow the steps (or write them into a script) to get the same behavior. The main different between upgrade.sh and kube-push.sh is that the former does a replacement upgrade (remove a node, create a new node to replace it) whereas the later does an "in place" upgrade. Removing and replacing the master node only works if the persistent data (etcd database, server certificates, authorized bearer tokens, etc) reside on a persistent disk separate from the boot disk of the master (this is how it is configured by default in GCE). Remove and replacing nodes should be fine on AWS (but keep in mind that any pods not under a replication controller won't be restarted). Doing an in-place upgrade doesn't require any special configuration, but that code path isn't as thoroughly tested as the remove and replace option. You shouldn't need to entirely replace your cluster when upgrading to a new version, unless you are using pre-release versions (e.g. alpha or beta releases) which can sometimes have breaking changes between them.
What is the recommended way to upgrade a kubernetes cluster as new versions are released? I heard here it may be https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-push.sh. If that is the case how does kube-push.sh relate to https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/upgrade.sh? I've also heard here that we should instead create a new cluster, copy/move the pods, replication controllers, and services from the first cluster to the new one and then turn off the first cluster. I'm running my cluster on aws if that is relevant.
If you're using HTTPS, you must have created a self-signed certificate (with your own CA authority) or you have a CA-signed certificate. If so, you need to install this CA cert on the machine you're calling FROM: put your CA cert in /etc/ssl/certs and run update-ca-certificates. Sometimes I have had to put it also in /usr/local/share/ca-certificates/ (in both cases your CA file EXTENSION should be .pem). For Docker you may also need to put a file in /etc/docker/certs.d/<--your-site-url--->/ca.crt, and the file must be named ca.crt (same file as the .pem file, just named ca.crt).
###Instruction: So I have deployed a Kubernetes cluster and installed a private Docker registry. Here is my registry controller: --- apiVersion: v1 kind: ReplicationController metadata: name: registry-master labels: name: registry-master spec: replicas: 1 selector: name: registry-master template: metadata: labels: name: registry-master spec: containers: - name: registry-master image: registry ports: - containerPort: 5000 command: ["docker-registry"] And the service: --- apiVersion: v1 kind: Service metadata: name: registry-master labels: name: registry-master spec: ports: # the port that this service should serve on - port: 5000 targetPort: 5000 selector: name: registry-master Now I sshed to one of Kubernetes' nodes and built a Ruby app container: cd /tmp git clone https://github.com/RichardKnop/sinatra-redis-blog.git cd sinatra-redis-blog docker build -t ruby-redis-app When I try to tag it and push it to the registry: docker tag ruby-redis-app registry-master/ruby-redis-app docker push 10.100.129.115:5000/registry-master/ruby-redis-app I am getting this error: Error response from daemon: invalid registry endpoint https://10.100.129.115:5000/v0/: unable to ping registry endpoint https://10.100.129.115:5000/v0/ v2 ping attempt failed with error: Get https://10.100.129.115:5000/v2/: read tcp 10.100.129.115:5000: connection reset by peer v1 ping attempt failed with error: Get https://10.100.129.115:5000/v1/_ping: read tcp 10.100.129.115:5000: connection reset by peer. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 10.100.129.115:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/10.100.129.115:5000/ca.crt Any idea how to solve it? I have been struggling with this for several hours. Richard ### Response: if you're using HTTPS, you must have created a self-signed certificate (with your own CA authority) or you have a CA signed certificate. If so, you need to install this CA cert on the machine you're calling FROM put your CA cert in /etc/ssl/certs and run update-ca-certificates sometimes I have had to put it also in /usr/local/share/ca-certificates/ (in both cases your CA file EXTENSION should be .pem For Docker you may also need to put a file in /etc/docker/certs.d/<--your-site-url--->/ca.crt and the file must be named ca.crt (same file file as the .pem file but named ca.crt)
So I have deployed a Kubernetes cluster and installed a private Docker registry. Here is my registry controller: --- apiVersion: v1 kind: ReplicationController metadata: name: registry-master labels: name: registry-master spec: replicas: 1 selector: name: registry-master template: metadata: labels: name: registry-master spec: containers: - name: registry-master image: registry ports: - containerPort: 5000 command: ["docker-registry"] And the service: --- apiVersion: v1 kind: Service metadata: name: registry-master labels: name: registry-master spec: ports: # the port that this service should serve on - port: 5000 targetPort: 5000 selector: name: registry-master Now I sshed to one of Kubernetes' nodes and built a Ruby app container: cd /tmp git clone https://github.com/RichardKnop/sinatra-redis-blog.git cd sinatra-redis-blog docker build -t ruby-redis-app When I try to tag it and push it to the registry: docker tag ruby-redis-app registry-master/ruby-redis-app docker push 10.100.129.115:5000/registry-master/ruby-redis-app I am getting this error: Error response from daemon: invalid registry endpoint https://10.100.129.115:5000/v0/: unable to ping registry endpoint https://10.100.129.115:5000/v0/ v2 ping attempt failed with error: Get https://10.100.129.115:5000/v2/: read tcp 10.100.129.115:5000: connection reset by peer v1 ping attempt failed with error: Get https://10.100.129.115:5000/v1/_ping: read tcp 10.100.129.115:5000: connection reset by peer. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 10.100.129.115:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/10.100.129.115:5000/ca.crt Any idea how to solve it? I have been struggling with this for several hours. Richard
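A minimal sketch of the two workarounds named in the answer and in the error message from the question; the registry IP/port come from the question and the certificate file name is a placeholder.

# Option 1: trust the registry's CA on each Docker host
sudo mkdir -p /etc/docker/certs.d/10.100.129.115:5000
sudo cp ca.pem /etc/docker/certs.d/10.100.129.115:5000/ca.crt
sudo cp ca.pem /usr/local/share/ca-certificates/registry-ca.crt
sudo update-ca-certificates

# Option 2: mark the registry as insecure when starting the Docker daemon
docker -d --insecure-registry 10.100.129.115:5000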
There is quite a long default timeout for detecting unreachable nodes and for re-scheduling pods; maybe you did not wait long enough? You can adjust the timeouts with several flags: node-status-update-frequency on the kubelet (http://kubernetes.io/v1.0/docs/admin/kubelet.html), and node-monitor-grace-period and pod-eviction-timeout on the kube-controller-manager (http://kubernetes.io/v1.0/docs/admin/kube-controller-manager.html)
###Instruction: I'm currently doing some tests on a kubernetes cluster. I was wondering why the pods aren't rescheduled in some cases : When the node is unreachable When the remote kubelet doesn't answer Actually the only case when a pod got rescheduled is when the kubelet notify the master. Is it on purpose ? Why ? If i shut down a server where there's a rc with a unique pod running, my service is down. Maybe there's something i misunderstood. Regards, Smana ### Response: There is a quite long default timeout for detecting unreachable nodes and for re-scheduling pods, maybe you did not wait long enough? You can adjust the timeouts with several flags: node-status-update-frequency on the kubelet (http://kubernetes.io/v1.0/docs/admin/kubelet.html) node-monitor-grace-period and pod_eviction_timeout on the kube-controller-manager (http://kubernetes.io/v1.0/docs/admin/kube-controller-manager.html)
I'm currently doing some tests on a kubernetes cluster. I was wondering why the pods aren't rescheduled in some cases : When the node is unreachable When the remote kubelet doesn't answer Actually the only case when a pod got rescheduled is when the kubelet notify the master. Is it on purpose ? Why ? If i shut down a server where there's a rc with a unique pod running, my service is down. Maybe there's something i misunderstood. Regards, Smana
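A minimal sketch of how those flags might be passed; the values are arbitrary examples and the exact flag names and defaults should be checked against the linked admin docs for the version in use.

# on every node
kubelet --node-status-update-frequency=4s ...

# on the master
kube-controller-manager --node-monitor-grace-period=16s --pod-eviction-timeout=30s ...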
This is a sample application I used: { "kind": "ReplicationController", "apiVersion": "v1", "metadata": { "name": "tess.io", "labels": { "name": "tess.io" } }, "spec": { "replicas": 3, "selector": { "name": "tess.io" }, "template": { "metadata": { "labels": { "name": "tess.io" } }, "spec": { "containers": [ { "image": "tess/tessio:0.0.3", "name": "tessio", "ports": [ { "containerPort": 80, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "/tess", "name": "tess" } ] } ], "volumes": [ { "name": "tess", "gitRepo": { "repository": "https://<TOKEN>:x-oauth-basic@github.com/tess/tess.io" } } ] } } } } And you can use the revision too. PS: The repo above does not exist anymore.
###Instruction: Can someone give an example of how to use the gitRepo type of volume in Kubernetes? The doc says it's a plugin, not sure what that means. Could not find an example anywhere and i don't know the proper syntax. especially is there parameters to pull a specific branch, use credentials (username, password, or SSH key) etc... EDIT: Going through the Kubernetes code this is what I figured so far: - name: data gitRepo: repository: "git repo url" revision: "hash of the commit to use" But can't seen to make it work, and not sure how to troubleshoot this issue ### Response: This is a sample application I used: { "kind": "ReplicationController", "apiVersion": "v1", "metadata": { "name": "tess.io", "labels": { "name": "tess.io" } }, "spec": { "replicas": 3, "selector": { "name": "tess.io" }, "template": { "metadata": { "labels": { "name": "tess.io" } }, "spec": { "containers": [ { "image": "tess/tessio:0.0.3", "name": "tessio", "ports": [ { "containerPort": 80, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "/tess", "name": "tess" } ] } ], "volumes": [ { "name": "tess", "gitRepo": { "repository": "https://<TOKEN>:x-oauth-basic@github.com/tess/tess.io" } } ] } } } } And you can use the revision too. PS: The repo above does not exist anymore.
Can someone give an example of how to use the gitRepo type of volume in Kubernetes? The doc says it's a plugin, not sure what that means. Could not find an example anywhere and i don't know the proper syntax. especially is there parameters to pull a specific branch, use credentials (username, password, or SSH key) etc... EDIT: Going through the Kubernetes code this is what I figured so far: - name: data gitRepo: repository: "git repo url" revision: "hash of the commit to use" But can't seen to make it work, and not sure how to troubleshoot this issue
PVs, like nodes, are not scoped to any namespace. However, as you noted, PVCs are.
###Instruction: I am using GKE with kubectl installed from gcloud components. I have created a pv (gcePersistentDisk) with namespace scope using kubectl. apiVersion: v1 kind: PersistentVolume metadata: name: cstor-cs-a-disk-david namespace: ns-david spec: gcePersistentDisk: pdName: cstor-cs-a-disk-david fsType: ext4 partition: 0 readOnly: false accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain capacity: storage: 200Gi This says that specifying namespace with create pv is/was valid: http://kubernetes.io/third_party/swagger-ui/#!/api%2Fv1/createNamespacedPersistentVolume When I run 'kubectl get pv' I see the pv. $ kubectl get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON cstor-cs-a-disk-david <none> 214748364800 RWO Available I did not expect this because the pv wasn't created with the default namespace scope. The same happens if I specify a namespace argument (valid or not). $ kubectl get namespaces NAME LABELS STATUS default <none> Active kube-system <none> Active ns-david <none> Active $ kubectl get pv --namespace=demo NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON cstor-cs-a-disk-david <none> 214748364800 RWO Available If I create a claim against this pv and query it with 'kubectl get pvc' then the claim is not found but is found when I specify the correct namespace. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cstor-cs-a-disk-claim-david namespace: ns-david spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi volumeName: cstor-cs-a-disk-david $ kubectl get pvc NAME LABELS STATUS VOLUME $ kubectl get pvc --namespace=ns-david NAME LABELS STATUS VOLUME cstor-cs-a-disk-claim-david map[] Bound cstor-cs-a-disk-david Do pv have namespace scope or are they global? ### Response: PVs, like nodes, are not scoped to any namespace. However, as you noted PVCs, are.
I am using GKE with kubectl installed from gcloud components. I have created a pv (gcePersistentDisk) with namespace scope using kubectl. apiVersion: v1 kind: PersistentVolume metadata: name: cstor-cs-a-disk-david namespace: ns-david spec: gcePersistentDisk: pdName: cstor-cs-a-disk-david fsType: ext4 partition: 0 readOnly: false accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain capacity: storage: 200Gi This says that specifying namespace with create pv is/was valid: http://kubernetes.io/third_party/swagger-ui/#!/api%2Fv1/createNamespacedPersistentVolume When I run 'kubectl get pv' I see the pv. $ kubectl get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON cstor-cs-a-disk-david <none> 214748364800 RWO Available I did not expect this because the pv wasn't created with the default namespace scope. The same happens if I specify a namespace argument (valid or not). $ kubectl get namespaces NAME LABELS STATUS default <none> Active kube-system <none> Active ns-david <none> Active $ kubectl get pv --namespace=demo NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON cstor-cs-a-disk-david <none> 214748364800 RWO Available If I create a claim against this pv and query it with 'kubectl get pvc' then the claim is not found but is found when I specify the correct namespace. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cstor-cs-a-disk-claim-david namespace: ns-david spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi volumeName: cstor-cs-a-disk-david $ kubectl get pvc NAME LABELS STATUS VOLUME $ kubectl get pvc --namespace=ns-david NAME LABELS STATUS VOLUME cstor-cs-a-disk-claim-david map[] Bound cstor-cs-a-disk-david Do pv have namespace scope or are they global?
I think you may have the exact same issue that I just solved. I believe it was you who asked a similar question on #google-containers and something that user "vishh" said fixed the problem for me. Make sure your master IP/hostname is in the certificate that you use for your api server under the cert's Subject Alternative Name: section. $ openssl x509 -in kube-apiserver-server.pem -text -noout Certificate: Data: Version: 3 (0x2) Serial Number: 1d:60:b0:98:70:95:23:f8 Signature Algorithm: sha256WithRSAEncryption ... ... ... X509v3 Subject Alternative Name: DNS:*.kubestack.io, DNS:*.c.kubestack.internal, IP Address:127.0.0.1, IP Address:192.168.10.50 I have very little experience with certs and keys so I used the guide here to generate mine. You shouldn't need to use any other auth flags (token/basic), it's done through the certs as you assumed. You need to use https when specifying the server. I'm unsure about your admission-control question. Hope this helps.
###Instruction: I have a working insecure K8S cluster setup: CoreOS alpha image + Vagrant (custom solution follow K8S getting start guide of scratch setup). Now I want to setup the authentication for K8s Cluster Admins who can access API via kubectl cluster-info command etc. I want to setup something similar like design doc - Simple profile. Then I followed authentication docs, I picked the Client certificate authentication for authentication plugins. I prepared certs, saved /srv/kubernetes/ca.crt, /srv/kubernetes/server.crt, /srv/kubernetes/server.key on the Master Node. I also setup the kubeconfig file by following the guide. kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP kubectl config set-credentials $CLUSTER_NAME --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN kubectl config set-context $CLUSTER_NAME --cluster=$CLUSTER_NAME --user=admin kubectl config use-context $CONTEXT --cluster=$CONTEXT When api-server starts, it also use the same value. see $CA_CERT, $CLI_CERT, $CLI_KEY. Q1: are those vlaues in the right place? /kube-apiserver \ --allow_privileged=true \ --bind_address=0.0.0.0 \ --secure_port=6443 \ --kubelet_https=true \ --service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \ --etcd_servers=$ETCD_SERVER \ --service-node-port-range=${SERVICE_NODE_PORT_RANGE} \ --cluster-name=$CLUSTER_NAME \ --client-ca-file=$CA_CERT \ --tls-cert-file=$CLI_CERT \ --tls-private-key-file=$CLI_KEY \ --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ --logtostderr=true Logs are below Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373083 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373523 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373631 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373695 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373748 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373788 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: [restful] 2015/08/30 06:31:30 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:6443/swaggerapi/ Aug 30 06:31:30 kube-master docker[3706]: [restful] 2015/08/30 06:31:30 log.go:30: [restful/swagger] https://10.0.2.15:6443/swaggerui/ is mapped to folder /swagger-ui/ Aug 30 06:31:30 kube-master docker[3706]: I0830 06:31:30.398612 1 server.go:441] Serving securely on 0.0.0.0:6443 Aug 30 06:31:30 kube-master docker[3706]: I0830 06:31:30.399042 1 server.go:483] Serving insecurely on 127.0.0.1:8080 On my MacOS 
machine, I want to connect kubectl to my $CLUSTER_NAME cluster. export KUBERNETES_MASTER=http://172.17.8.100:6443 kubectl cluster-info Terminal outputs: ➜ kubectl cluster-info error: couldn't read version from server: Get http://172.17.8.100:6443/api: malformed HTTP response "\x15\x03\x01\x00\x02\x02" Here is my kubeconfig file on MacOS machine ~/.kube/config ➜ kubectl config view apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: http://172.17.8.100:6443 name: kube-01 contexts: - context: cluster: kube-01 user: admin name: kube current-context: kube kind: Config preferences: {} users: - name: admin user: client-certificate-data: REDACTED client-key-data: REDACTED token: cxKranwtWI2nyASebbF1HV3p1EWJbNcE Q: How could my kubectl on MacOS to access my K8S cluster securely? since I never add user admin on my api-server, I assume that all authentication is being done by ca-file? Q: Once I fix the secure login issue, how could I update the admission-control plugins api error issue like ServiceAccount connection refuse above? Q: Do I use http or https? I prefer to use http://IP:6443, not sure it is the problem? Q: Do I need to apply --token-auth-file= or --basic-auth-file? By reading the Docs, I think I could pick one of the method for authentication. I would prefer to do it in ca which is more secure, right? I used see function create-certs in cluster/gce/util.sh to generate my certs files. I am not too familiar to certs and keys, so that I post them here. Well, it is really a dummy certs and keys for testing. It is not being used anywhere. Simply posted here to varify if I did something wrong here. ca.crt -----BEGIN CERTIFICATE----- MIIDWTCCAkGgAwIBAgIJAMbTBaUcQSbGMA0GCSqGSIb3DQEBCwUAMCIxIDAeBgNV BAMMFzE3Mi4xNy44LjEwMEAxNDQwNzgwMjgxMB4XDTE1MDgyODE2NDQ0MVoXDTI1 MDgyNTE2NDQ0MVowIjEgMB4GA1UEAwwXMTcyLjE3LjguMTAwQDE0NDA3ODAyODEw ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDNmT0O8sBXTd2Htbb+hnsq P/YvUNYTXzLy6+T/d9/KRrxq1JWO70E7L2hFOvOdGF0gZuoAefki5ymkFYfwoZsK NEXvA1AxBMtQnMCdUOp7m5XW+c9uFepW+jzvb4PRBoUHZjW5HhxT6UZ21FiEvwHP NBnCL9gp1NIcNOaUIZvFI7hpko0tfAPFYY0NkHRo6mLpvzaGTippzySMSLyQ7cs4 IcUrFGJbsTNISCSsCG//+A6I62sQAURr0hjeW9FmGHxwYW+0wdyyTtlFPTKrVrC4 ETc5WeQoJeZhjoH7Dkj8l6QBvv2cDtZwnY2oCUGXf63c3hoRaEkeFis1RWQcQKoT AgMBAAGjgZEwgY4wHQYDVR0OBBYEFONIYbWt3l9D5j9VvJADUQfmIBpQMFIGA1Ud IwRLMEmAFONIYbWt3l9D5j9VvJADUQfmIBpQoSakJDAiMSAwHgYDVQQDDBcxNzIu MTcuOC4xMDBAMTQ0MDc4MDI4MYIJAMbTBaUcQSbGMAwGA1UdEwQFMAMBAf8wCwYD VR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQCJtrf1Mf+pHwCsMG8HPcuR4oij ugYkzawEF2FSCe2VbFMDxwmHbHw2N9ZOwRLyeSuR0JAY5aN31pqIzYCmmKf2otKU +mtTaK5YIsZU2IdxoR6VHaHT83zSGq9RhteqDdM8tuMvNsV5I9pJCu+Bkv3MsJpN 0PIc+GFs52A+bQC3cjWqLkgJeYEqolNnJpeex9G3ovqbTzavgM8q5gjdTyz8tDIo Dc4RKcuwyrAnkiJ93HdWLwkKcEXzrX/lU9NYsvmycBVbkRaIh7md82HCUiwkmmJC Xz3+xVrghzMo0DgoInzxcPFRWPc00CZcb5P5VRepa2rPwEyNgEp3BsQLXFIt -----END CERTIFICATE----- server.crt Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha256WithRSAEncryption Issuer: CN=172.17.8.100@1440780281 Validity Not Before: Aug 28 16:44:41 2015 GMT Not After : Aug 25 16:44:41 2025 GMT Subject: CN=kube-master Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (2048 bit) Modulus (2048 bit): 00:ab:3f:cf:95:50:3d:7f:b4:82:ba:72:7a:88:2e: 41:79:67:7d:9a:4a:22:27:5f:fd:5c:78:6f:3d:ad: 57:4c:fd:37:9e:b5:35:f1:88:59:c1:e9:10:38:3e: de:7f:57:cf:e9:fc:fd:d7:b5:a8:7a:0e:5f:e4:16: 6f:2a:66:98:28:6c:42:a8:5f:95:3d:0b:02:f2:ec: ab:aa:19:40:60:b3:e5:7a:64:7d:5b:f2:9c:84:d5: 
bb:06:79:e7:00:2f:2c:a0:0a:88:f4:b0:c5:31:de: 7d:30:d6:b3:4d:ea:64:85:bb:f9:89:5a:f5:22:41: 92:35:d4:a4:7d:80:64:65:d9:1d:c9:30:39:af:34: 57:cd:d5:56:5d:9f:35:5d:ee:a3:07:ed:f1:c5:68: db:db:12:65:31:e6:6c:1e:77:44:3e:7c:03:bc:89: f0:4c:14:a6:41:39:22:a3:a3:a0:8d:20:eb:69:7a: c5:de:b0:2f:94:67:68:ab:8c:8a:24:59:38:a4:57: 19:2d:c2:0e:37:c8:73:98:ae:d8:0a:a4:e2:72:22: 49:9a:55:58:ad:8e:c3:eb:42:b5:41:02:c9:40:27: d1:77:41:ab:4f:0b:2a:6b:b2:b6:38:7f:a0:ce:cf: 9f:cd:7c:54:72:c6:43:cd:1d:5b:60:b9:45:eb:10: ab:ad Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:FALSE X509v3 Subject Key Identifier: B2:46:5F:5A:68:3E:08:78:25:8C:AE:5E:EB:F1:3B:7B:CF:9D:A6:F3 X509v3 Authority Key Identifier: keyid:E3:48:61:B5:AD:DE:5F:43:E6:3F:55:BC:90:03:51:07:E6:20:1A:50 DirName:/CN=172.17.8.100@1440780281 serial:C6:D3:05:A5:1C:41:26:C6 X509v3 Extended Key Usage: TLS Web Server Authentication X509v3 Key Usage: Digital Signature, Key Encipherment X509v3 Subject Alternative Name: IP Address:172.17.8.100, IP Address:10.100.0.1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:kube-master Signature Algorithm: sha256WithRSAEncryption 58:b1:63:41:3e:94:ed:3d:bd:3c:e8:0c:78:30:54:c1:6d:33: 00:42:74:c8:7a:64:cc:fd:9a:70:ab:38:5b:1c:92:7c:9b:56: 1a:d7:fd:38:51:07:cf:5a:b5:0a:11:85:01:3d:52:86:96:ad: 16:be:ea:9c:2c:ee:3c:14:c9:5b:58:d7:ab:45:ae:d8:e0:2d: 70:7c:55:40:44:b8:98:ad:1b:d4:66:35:c5:78:13:4c:e7:5a: de:82:15:43:cb:bb:83:3a:09:04:fa:5e:6f:d9:ca:17:b8:40: 00:b0:ba:06:ed:73:ed:c8:c7:5a:53:aa:d3:43:a2:f1:c2:cf: 14:9b:c2:7b:b7:c0:2a:56:a0:53:2e:af:2d:07:65:c0:70:c1: 92:86:34:05:39:3c:ed:3f:6e:f9:31:7f:de:5a:ed:9b:c8:83: e0:f4:9c:de:c7:9c:04:be:d2:6e:8d:5e:3e:ad:46:d4:82:70: 9d:79:b9:c3:dd:b4:c0:6e:1b:23:d0:45:be:26:c6:7e:4c:ec: c5:c3:c9:ee:1e:93:d4:a5:11:e9:6a:1d:e1:ee:af:eb:83:e6: dd:ec:13:7b:45:60:18:f5:05:3f:61:7b:3c:2b:b1:28:c4:92: 5e:bc:67:c0:02:22:a9:aa:69:d5:e9:0e:75:80:36:b2:66:84: fe:05:c2:75 -----BEGIN CERTIFICATE----- MIID3DCCAsSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAiMSAwHgYDVQQDDBcxNzIu MTcuOC4xMDBAMTQ0MDc4MDI4MTAeFw0xNTA4MjgxNjQ0NDFaFw0yNTA4MjUxNjQ0 NDFaMBYxFDASBgNVBAMMC2t1YmUtbWFzdGVyMIIBIjANBgkqhkiG9w0BAQEFAAOC AQ8AMIIBCgKCAQEAqz/PlVA9f7SCunJ6iC5BeWd9mkoiJ1/9XHhvPa1XTP03nrU1 8YhZwekQOD7ef1fP6fz917Woeg5f5BZvKmaYKGxCqF+VPQsC8uyrqhlAYLPlemR9 W/KchNW7BnnnAC8soAqI9LDFMd59MNazTepkhbv5iVr1IkGSNdSkfYBkZdkdyTA5 rzRXzdVWXZ81Xe6jB+3xxWjb2xJlMeZsHndEPnwDvInwTBSmQTkio6OgjSDraXrF 3rAvlGdoq4yKJFk4pFcZLcION8hzmK7YCqTiciJJmlVYrY7D60K1QQLJQCfRd0Gr Twsqa7K2OH+gzs+fzXxUcsZDzR1bYLlF6xCrrQIDAQABo4IBJzCCASMwCQYDVR0T BAIwADAdBgNVHQ4EFgQUskZfWmg+CHgljK5e6/E7e8+dpvMwUgYDVR0jBEswSYAU 40hhta3eX0PmP1W8kANRB+YgGlChJqQkMCIxIDAeBgNVBAMMFzE3Mi4xNy44LjEw MEAxNDQwNzgwMjgxggkAxtMFpRxBJsYwEwYDVR0lBAwwCgYIKwYBBQUHAwEwCwYD VR0PBAQDAgWgMIGABgNVHREEeTB3hwSsEQhkhwQKZAABggprdWJlcm5ldGVzghJr dWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOCJGt1YmVy bmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbIILa3ViZS1tYXN0ZXIwDQYJ KoZIhvcNAQELBQADggEBAFixY0E+lO09vTzoDHgwVMFtMwBCdMh6ZMz9mnCrOFsc knybVhrX/ThRB89atQoRhQE9UoaWrRa+6pws7jwUyVtY16tFrtjgLXB8VUBEuJit G9RmNcV4E0znWt6CFUPLu4M6CQT6Xm/Zyhe4QACwugbtc+3Ix1pTqtNDovHCzxSb wnu3wCpWoFMury0HZcBwwZKGNAU5PO0/bvkxf95a7ZvIg+D0nN7HnAS+0m6NXj6t RtSCcJ15ucPdtMBuGyPQRb4mxn5M7MXDye4ek9SlEelqHeHur+uD5t3sE3tFYBj1 BT9hezwrsSjEkl68Z8ACIqmqadXpDnWANrJmhP4FwnU= -----END CERTIFICATE----- server.key -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEAqz/PlVA9f7SCunJ6iC5BeWd9mkoiJ1/9XHhvPa1XTP03nrU1 
8YhZwekQOD7ef1fP6fz917Woeg5f5BZvKmaYKGxCqF+VPQsC8uyrqhlAYLPlemR9 W/KchNW7BnnnAC8soAqI9LDFMd59MNazTepkhbv5iVr1IkGSNdSkfYBkZdkdyTA5 rzRXzdVWXZ81Xe6jB+3xxWjb2xJlMeZsHndEPnwDvInwTBSmQTkio6OgjSDraXrF 3rAvlGdoq4yKJFk4pFcZLcION8hzmK7YCqTiciJJmlVYrY7D60K1QQLJQCfRd0Gr Twsqa7K2OH+gzs+fzXxUcsZDzR1bYLlF6xCrrQIDAQABAoIBAAtfMWm46lyQoB3B fGGOsMpfFPgp9BqpRSne1YRC/okeR5NCdVKUu2ElGO6jPiM2sZfYNQMeDRIN4lBD LR6jsXb9uW906XQkRw3aqYuiIaRKTfLSuYBhnAM2LjU/4xcgCtaV3IJjOrUVETst Brsl1YcL9IYqhBzCPfNVK5cp74DTzleBjl7ng1y8ijGOTcp5JwUbrrQQZ0U9uqjS nCAjB63e8x7JswXx1jo4pDeumJzyJ1eHNA0oXwSbgZ/q/oUHHYykUrFkPYIIAMKu lZO/Lh2tRNdDf8lXupWmhfcwDO9DYcRr4v37hnDqknWWHEdgR9hborc6vZYAMpPB 0LrIfAECgYEA0rT7bFDCCBmk5yDw2cOl1CHT1BTq7Elw2cjAGgjAygx0puGKuBnr qBYeAQqx3ZZHlMsiT3gSbRP9CLws+QgSUf87deM0kBoiWG6m+KgSxmBIMRJCdo+S c+3QZwWLBFHQLaJCDRN4XNr1HuHzcKYO4th/SpDZ3lQc9wO7S3dBHpsCgYEA0A+B ogw30zf1rIaIv8rRMOItqA6pgR6DbspAYexZyEKUexsvHOw6KMDRz7IwzZRVUkjI uPfEkq3qAhYpEgzi/BIsnj/Ku91THkzkkDBolpuJAa068GupQgbLCLhKWa1h7qrI mAFOxy+9ZIFWbmy4UDaqgT5O78gw1CFwibYXn1cCgYEAlDPX5AepcikXY7o3rfN+ 4AYrCDDuS+QcDBK3i5g8geDg68AX4gXZSxDDadgr4r+g+XcnWt4Jl89HWq2AtGiI +kObfv+gKPs4zRqHNr6A9icin+FH/jxdtky/GLc9YHxrAK3v52KadjVL07z5jXI/ Zi8A2WGo3EgtV1C4nAv1MaECgYAp0GP6IEB754wtLyB+gxFFpL8OPlwcgfhiJK2J wIlOsOrMTutKAcOyewXvmt0qA7yd+9izK8BKxj74SmHYqdRYWoKzDxj8Zn+U4Fkz DTeHxRxkxN7KgKiUh274gqkWmrzKzXHg8qpVZ6fFciTfrmPgYwwjS1Vr5SzDBTFr y7e1owKBgQDMKHPuEE9LT3ljiZFIoU6yxbWU/+rMaJwqmV5bEXbfrL06PjTw7kp/ UnLHJ3TVdCXnY2J4Si39cYAhL5Wr5JiubviaW5zCjjOXbrE3ck16kkJsS8DOXjHT nHNGV48GE51THWl/NbuRQz/rD9McsCwixNm66C2EiakKuKLuv3tI3Q== -----END RSA PRIVATE KEY----- ### Response: I think you may have the exact same issue that I just solved. I believe it was you who asked a similar question on #google-containers and something that user "vishh" said fixed the problem for me. Make sure your master IP/hostname is in the certificate that you use for your api server under the cert's Subject Alternative Name: section. $ openssl x509 -in kube-apiserver-server.pem -text -noout Certificate: Data: Version: 3 (0x2) Serial Number: 1d:60:b0:98:70:95:23:f8 Signature Algorithm: sha256WithRSAEncryption ... ... ... X509v3 Subject Alternative Name: DNS:*.kubestack.io, DNS:*.c.kubestack.internal, IP Address:127.0.0.1, IP Address:192.168.10.50 I have very little experience with certs and keys so I used the guide here to generate mine. You shouldn't need to use any other auth flags (token/basic), it's done through the certs as you assumed. You need to use https when specifying the server. I'm unsure about your admission-control question. Hope this helps.
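To make the response above concrete, here is a rough sketch of regenerating a server certificate whose Subject Alternative Name list contains the master IP and service IP. File names and addresses are taken from the question, but ca.key and the openssl.cnf fragment are assumptions for illustration, not the exact procedure the answerer used:
# openssl.cnf fragment supplying the SANs (illustrative)
[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = IP:172.17.8.100, IP:10.100.0.1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:kube-master

# sign a new server cert with the existing CA (assumes you still have ca.key)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-master" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 3650 -extensions v3_req -extfile openssl.cnf

# verify the SAN section before restarting the apiserver
openssl x509 -in server.crt -text -noout | grep -A1 "Subject Alternative Name"
In this particular case the posted server.crt already contains IP:172.17.8.100, so the more likely fix is simply switching the kubeconfig server field from http:// to https://, as the response points out.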
I have a working insecure K8S cluster setup: CoreOS alpha image + Vagrant (custom solution follow K8S getting start guide of scratch setup). Now I want to setup the authentication for K8s Cluster Admins who can access API via kubectl cluster-info command etc. I want to setup something similar like design doc - Simple profile. Then I followed authentication docs, I picked the Client certificate authentication for authentication plugins. I prepared certs, saved /srv/kubernetes/ca.crt, /srv/kubernetes/server.crt, /srv/kubernetes/server.key on the Master Node. I also setup the kubeconfig file by following the guide. kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP kubectl config set-credentials $CLUSTER_NAME --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN kubectl config set-context $CLUSTER_NAME --cluster=$CLUSTER_NAME --user=admin kubectl config use-context $CONTEXT --cluster=$CONTEXT When api-server starts, it also use the same value. see $CA_CERT, $CLI_CERT, $CLI_KEY. Q1: are those vlaues in the right place? /kube-apiserver \ --allow_privileged=true \ --bind_address=0.0.0.0 \ --secure_port=6443 \ --kubelet_https=true \ --service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE} \ --etcd_servers=$ETCD_SERVER \ --service-node-port-range=${SERVICE_NODE_PORT_RANGE} \ --cluster-name=$CLUSTER_NAME \ --client-ca-file=$CA_CERT \ --tls-cert-file=$CLI_CERT \ --tls-private-key-file=$CLI_KEY \ --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \ --logtostderr=true Logs are below Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373083 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373523 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373631 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373695 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373748 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: E0830 06:31:30.373788 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused Aug 30 06:31:30 kube-master docker[3706]: [restful] 2015/08/30 06:31:30 log.go:30: [restful/swagger] listing is available at https://10.0.2.15:6443/swaggerapi/ Aug 30 06:31:30 kube-master docker[3706]: [restful] 2015/08/30 06:31:30 log.go:30: [restful/swagger] https://10.0.2.15:6443/swaggerui/ is mapped to folder /swagger-ui/ Aug 30 06:31:30 kube-master docker[3706]: I0830 06:31:30.398612 1 server.go:441] Serving securely on 0.0.0.0:6443 Aug 30 06:31:30 kube-master docker[3706]: I0830 06:31:30.399042 1 server.go:483] Serving insecurely on 127.0.0.1:8080 On my MacOS machine, I want to 
connect kubectl to my $CLUSTER_NAME cluster. export KUBERNETES_MASTER=http://172.17.8.100:6443 kubectl cluster-info Terminal outputs: ➜ kubectl cluster-info error: couldn't read version from server: Get http://172.17.8.100:6443/api: malformed HTTP response "\x15\x03\x01\x00\x02\x02" Here is my kubeconfig file on MacOS machine ~/.kube/config ➜ kubectl config view apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: http://172.17.8.100:6443 name: kube-01 contexts: - context: cluster: kube-01 user: admin name: kube current-context: kube kind: Config preferences: {} users: - name: admin user: client-certificate-data: REDACTED client-key-data: REDACTED token: cxKranwtWI2nyASebbF1HV3p1EWJbNcE Q: How could my kubectl on MacOS to access my K8S cluster securely? since I never add user admin on my api-server, I assume that all authentication is being done by ca-file? Q: Once I fix the secure login issue, how could I update the admission-control plugins api error issue like ServiceAccount connection refuse above? Q: Do I use http or https? I prefer to use http://IP:6443, not sure it is the problem? Q: Do I need to apply --token-auth-file= or --basic-auth-file? By reading the Docs, I think I could pick one of the method for authentication. I would prefer to do it in ca which is more secure, right? I used see function create-certs in cluster/gce/util.sh to generate my certs files. I am not too familiar to certs and keys, so that I post them here. Well, it is really a dummy certs and keys for testing. It is not being used anywhere. Simply posted here to varify if I did something wrong here. ca.crt -----BEGIN CERTIFICATE----- MIIDWTCCAkGgAwIBAgIJAMbTBaUcQSbGMA0GCSqGSIb3DQEBCwUAMCIxIDAeBgNV BAMMFzE3Mi4xNy44LjEwMEAxNDQwNzgwMjgxMB4XDTE1MDgyODE2NDQ0MVoXDTI1 MDgyNTE2NDQ0MVowIjEgMB4GA1UEAwwXMTcyLjE3LjguMTAwQDE0NDA3ODAyODEw ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDNmT0O8sBXTd2Htbb+hnsq P/YvUNYTXzLy6+T/d9/KRrxq1JWO70E7L2hFOvOdGF0gZuoAefki5ymkFYfwoZsK NEXvA1AxBMtQnMCdUOp7m5XW+c9uFepW+jzvb4PRBoUHZjW5HhxT6UZ21FiEvwHP NBnCL9gp1NIcNOaUIZvFI7hpko0tfAPFYY0NkHRo6mLpvzaGTippzySMSLyQ7cs4 IcUrFGJbsTNISCSsCG//+A6I62sQAURr0hjeW9FmGHxwYW+0wdyyTtlFPTKrVrC4 ETc5WeQoJeZhjoH7Dkj8l6QBvv2cDtZwnY2oCUGXf63c3hoRaEkeFis1RWQcQKoT AgMBAAGjgZEwgY4wHQYDVR0OBBYEFONIYbWt3l9D5j9VvJADUQfmIBpQMFIGA1Ud IwRLMEmAFONIYbWt3l9D5j9VvJADUQfmIBpQoSakJDAiMSAwHgYDVQQDDBcxNzIu MTcuOC4xMDBAMTQ0MDc4MDI4MYIJAMbTBaUcQSbGMAwGA1UdEwQFMAMBAf8wCwYD VR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQCJtrf1Mf+pHwCsMG8HPcuR4oij ugYkzawEF2FSCe2VbFMDxwmHbHw2N9ZOwRLyeSuR0JAY5aN31pqIzYCmmKf2otKU +mtTaK5YIsZU2IdxoR6VHaHT83zSGq9RhteqDdM8tuMvNsV5I9pJCu+Bkv3MsJpN 0PIc+GFs52A+bQC3cjWqLkgJeYEqolNnJpeex9G3ovqbTzavgM8q5gjdTyz8tDIo Dc4RKcuwyrAnkiJ93HdWLwkKcEXzrX/lU9NYsvmycBVbkRaIh7md82HCUiwkmmJC Xz3+xVrghzMo0DgoInzxcPFRWPc00CZcb5P5VRepa2rPwEyNgEp3BsQLXFIt -----END CERTIFICATE----- server.crt Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha256WithRSAEncryption Issuer: CN=172.17.8.100@1440780281 Validity Not Before: Aug 28 16:44:41 2015 GMT Not After : Aug 25 16:44:41 2025 GMT Subject: CN=kube-master Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (2048 bit) Modulus (2048 bit): 00:ab:3f:cf:95:50:3d:7f:b4:82:ba:72:7a:88:2e: 41:79:67:7d:9a:4a:22:27:5f:fd:5c:78:6f:3d:ad: 57:4c:fd:37:9e:b5:35:f1:88:59:c1:e9:10:38:3e: de:7f:57:cf:e9:fc:fd:d7:b5:a8:7a:0e:5f:e4:16: 6f:2a:66:98:28:6c:42:a8:5f:95:3d:0b:02:f2:ec: ab:aa:19:40:60:b3:e5:7a:64:7d:5b:f2:9c:84:d5: bb:06:79:e7:00:2f:2c:a0:0a:88:f4:b0:c5:31:de: 
7d:30:d6:b3:4d:ea:64:85:bb:f9:89:5a:f5:22:41: 92:35:d4:a4:7d:80:64:65:d9:1d:c9:30:39:af:34: 57:cd:d5:56:5d:9f:35:5d:ee:a3:07:ed:f1:c5:68: db:db:12:65:31:e6:6c:1e:77:44:3e:7c:03:bc:89: f0:4c:14:a6:41:39:22:a3:a3:a0:8d:20:eb:69:7a: c5:de:b0:2f:94:67:68:ab:8c:8a:24:59:38:a4:57: 19:2d:c2:0e:37:c8:73:98:ae:d8:0a:a4:e2:72:22: 49:9a:55:58:ad:8e:c3:eb:42:b5:41:02:c9:40:27: d1:77:41:ab:4f:0b:2a:6b:b2:b6:38:7f:a0:ce:cf: 9f:cd:7c:54:72:c6:43:cd:1d:5b:60:b9:45:eb:10: ab:ad Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:FALSE X509v3 Subject Key Identifier: B2:46:5F:5A:68:3E:08:78:25:8C:AE:5E:EB:F1:3B:7B:CF:9D:A6:F3 X509v3 Authority Key Identifier: keyid:E3:48:61:B5:AD:DE:5F:43:E6:3F:55:BC:90:03:51:07:E6:20:1A:50 DirName:/CN=172.17.8.100@1440780281 serial:C6:D3:05:A5:1C:41:26:C6 X509v3 Extended Key Usage: TLS Web Server Authentication X509v3 Key Usage: Digital Signature, Key Encipherment X509v3 Subject Alternative Name: IP Address:172.17.8.100, IP Address:10.100.0.1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:kube-master Signature Algorithm: sha256WithRSAEncryption 58:b1:63:41:3e:94:ed:3d:bd:3c:e8:0c:78:30:54:c1:6d:33: 00:42:74:c8:7a:64:cc:fd:9a:70:ab:38:5b:1c:92:7c:9b:56: 1a:d7:fd:38:51:07:cf:5a:b5:0a:11:85:01:3d:52:86:96:ad: 16:be:ea:9c:2c:ee:3c:14:c9:5b:58:d7:ab:45:ae:d8:e0:2d: 70:7c:55:40:44:b8:98:ad:1b:d4:66:35:c5:78:13:4c:e7:5a: de:82:15:43:cb:bb:83:3a:09:04:fa:5e:6f:d9:ca:17:b8:40: 00:b0:ba:06:ed:73:ed:c8:c7:5a:53:aa:d3:43:a2:f1:c2:cf: 14:9b:c2:7b:b7:c0:2a:56:a0:53:2e:af:2d:07:65:c0:70:c1: 92:86:34:05:39:3c:ed:3f:6e:f9:31:7f:de:5a:ed:9b:c8:83: e0:f4:9c:de:c7:9c:04:be:d2:6e:8d:5e:3e:ad:46:d4:82:70: 9d:79:b9:c3:dd:b4:c0:6e:1b:23:d0:45:be:26:c6:7e:4c:ec: c5:c3:c9:ee:1e:93:d4:a5:11:e9:6a:1d:e1:ee:af:eb:83:e6: dd:ec:13:7b:45:60:18:f5:05:3f:61:7b:3c:2b:b1:28:c4:92: 5e:bc:67:c0:02:22:a9:aa:69:d5:e9:0e:75:80:36:b2:66:84: fe:05:c2:75 -----BEGIN CERTIFICATE----- MIID3DCCAsSgAwIBAgIBATANBgkqhkiG9w0BAQsFADAiMSAwHgYDVQQDDBcxNzIu MTcuOC4xMDBAMTQ0MDc4MDI4MTAeFw0xNTA4MjgxNjQ0NDFaFw0yNTA4MjUxNjQ0 NDFaMBYxFDASBgNVBAMMC2t1YmUtbWFzdGVyMIIBIjANBgkqhkiG9w0BAQEFAAOC AQ8AMIIBCgKCAQEAqz/PlVA9f7SCunJ6iC5BeWd9mkoiJ1/9XHhvPa1XTP03nrU1 8YhZwekQOD7ef1fP6fz917Woeg5f5BZvKmaYKGxCqF+VPQsC8uyrqhlAYLPlemR9 W/KchNW7BnnnAC8soAqI9LDFMd59MNazTepkhbv5iVr1IkGSNdSkfYBkZdkdyTA5 rzRXzdVWXZ81Xe6jB+3xxWjb2xJlMeZsHndEPnwDvInwTBSmQTkio6OgjSDraXrF 3rAvlGdoq4yKJFk4pFcZLcION8hzmK7YCqTiciJJmlVYrY7D60K1QQLJQCfRd0Gr Twsqa7K2OH+gzs+fzXxUcsZDzR1bYLlF6xCrrQIDAQABo4IBJzCCASMwCQYDVR0T BAIwADAdBgNVHQ4EFgQUskZfWmg+CHgljK5e6/E7e8+dpvMwUgYDVR0jBEswSYAU 40hhta3eX0PmP1W8kANRB+YgGlChJqQkMCIxIDAeBgNVBAMMFzE3Mi4xNy44LjEw MEAxNDQwNzgwMjgxggkAxtMFpRxBJsYwEwYDVR0lBAwwCgYIKwYBBQUHAwEwCwYD VR0PBAQDAgWgMIGABgNVHREEeTB3hwSsEQhkhwQKZAABggprdWJlcm5ldGVzghJr dWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOCJGt1YmVy bmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbIILa3ViZS1tYXN0ZXIwDQYJ KoZIhvcNAQELBQADggEBAFixY0E+lO09vTzoDHgwVMFtMwBCdMh6ZMz9mnCrOFsc knybVhrX/ThRB89atQoRhQE9UoaWrRa+6pws7jwUyVtY16tFrtjgLXB8VUBEuJit G9RmNcV4E0znWt6CFUPLu4M6CQT6Xm/Zyhe4QACwugbtc+3Ix1pTqtNDovHCzxSb wnu3wCpWoFMury0HZcBwwZKGNAU5PO0/bvkxf95a7ZvIg+D0nN7HnAS+0m6NXj6t RtSCcJ15ucPdtMBuGyPQRb4mxn5M7MXDye4ek9SlEelqHeHur+uD5t3sE3tFYBj1 BT9hezwrsSjEkl68Z8ACIqmqadXpDnWANrJmhP4FwnU= -----END CERTIFICATE----- server.key -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEAqz/PlVA9f7SCunJ6iC5BeWd9mkoiJ1/9XHhvPa1XTP03nrU1 8YhZwekQOD7ef1fP6fz917Woeg5f5BZvKmaYKGxCqF+VPQsC8uyrqhlAYLPlemR9 
W/KchNW7BnnnAC8soAqI9LDFMd59MNazTepkhbv5iVr1IkGSNdSkfYBkZdkdyTA5 rzRXzdVWXZ81Xe6jB+3xxWjb2xJlMeZsHndEPnwDvInwTBSmQTkio6OgjSDraXrF 3rAvlGdoq4yKJFk4pFcZLcION8hzmK7YCqTiciJJmlVYrY7D60K1QQLJQCfRd0Gr Twsqa7K2OH+gzs+fzXxUcsZDzR1bYLlF6xCrrQIDAQABAoIBAAtfMWm46lyQoB3B fGGOsMpfFPgp9BqpRSne1YRC/okeR5NCdVKUu2ElGO6jPiM2sZfYNQMeDRIN4lBD LR6jsXb9uW906XQkRw3aqYuiIaRKTfLSuYBhnAM2LjU/4xcgCtaV3IJjOrUVETst Brsl1YcL9IYqhBzCPfNVK5cp74DTzleBjl7ng1y8ijGOTcp5JwUbrrQQZ0U9uqjS nCAjB63e8x7JswXx1jo4pDeumJzyJ1eHNA0oXwSbgZ/q/oUHHYykUrFkPYIIAMKu lZO/Lh2tRNdDf8lXupWmhfcwDO9DYcRr4v37hnDqknWWHEdgR9hborc6vZYAMpPB 0LrIfAECgYEA0rT7bFDCCBmk5yDw2cOl1CHT1BTq7Elw2cjAGgjAygx0puGKuBnr qBYeAQqx3ZZHlMsiT3gSbRP9CLws+QgSUf87deM0kBoiWG6m+KgSxmBIMRJCdo+S c+3QZwWLBFHQLaJCDRN4XNr1HuHzcKYO4th/SpDZ3lQc9wO7S3dBHpsCgYEA0A+B ogw30zf1rIaIv8rRMOItqA6pgR6DbspAYexZyEKUexsvHOw6KMDRz7IwzZRVUkjI uPfEkq3qAhYpEgzi/BIsnj/Ku91THkzkkDBolpuJAa068GupQgbLCLhKWa1h7qrI mAFOxy+9ZIFWbmy4UDaqgT5O78gw1CFwibYXn1cCgYEAlDPX5AepcikXY7o3rfN+ 4AYrCDDuS+QcDBK3i5g8geDg68AX4gXZSxDDadgr4r+g+XcnWt4Jl89HWq2AtGiI +kObfv+gKPs4zRqHNr6A9icin+FH/jxdtky/GLc9YHxrAK3v52KadjVL07z5jXI/ Zi8A2WGo3EgtV1C4nAv1MaECgYAp0GP6IEB754wtLyB+gxFFpL8OPlwcgfhiJK2J wIlOsOrMTutKAcOyewXvmt0qA7yd+9izK8BKxj74SmHYqdRYWoKzDxj8Zn+U4Fkz DTeHxRxkxN7KgKiUh274gqkWmrzKzXHg8qpVZ6fFciTfrmPgYwwjS1Vr5SzDBTFr y7e1owKBgQDMKHPuEE9LT3ljiZFIoU6yxbWU/+rMaJwqmV5bEXbfrL06PjTw7kp/ UnLHJ3TVdCXnY2J4Si39cYAhL5Wr5JiubviaW5zCjjOXbrE3ck16kkJsS8DOXjHT nHNGV48GE51THWl/NbuRQz/rD9McsCwixNm66C2EiakKuKLuv3tI3Q== -----END RSA PRIVATE KEY-----
A patch was provided to fix the issue with bash completion for kubectl. You'll need to use the patch for now if you are on OS X until the PR is merged, then source the bash completion again; you will also need to install bash-completion as described in the question above. kubectl patch kubernetes pull request Edit: this has now been merged into master
###Instruction: Cannot get kubectl bash completion to work on Mac. I installed bash-completion with brew i.e. brew install bash-completion then as per the install notes, added it to the ~/.bash_profile: if [ -f $(brew --prefix)/etc/bash_completion ]; then . $(brew --prefix)/etc/bash_completion fi But still getting the error kubectl -bash: _init_completion: command not found after re-sourcing the profile. ### Response: A patch was provided to fix the issue with bash completion for kubectl. You'll need to use the patch for now if you are on osx until the PR is merged and source the bash completion again, you will also need to install bash-completion as described in the question above. kubectl patch kubernetes pull request Edit: this was now merged into master
Cannot get kubectl bash completion to work on Mac. I installed bash-completion with brew i.e. brew install bash-completion then as per the install notes, added it to the ~/.bash_profile: if [ -f $(brew --prefix)/etc/bash_completion ]; then . $(brew --prefix)/etc/bash_completion fi But still getting the error kubectl -bash: _init_completion: command not found after re-sourcing the profile.
If the same docker image is working via kubectl run but not working in a pod, then something is wrong with the pod spec. Compare the full output of the pod as created from spec and as created by rc to see what differs by running kubectl get pods <name> -o yaml for both. Shot in the dark: is it possible the env vars specified in the pod spec are causing it to crash on startup?
###Instruction: I'm trying to get a ghost blog deployed on GKE, working off of the persistent disks with WordPress tutorial. I have a working container that runs fine manually on a GKE node: docker run -d --name my-ghost-blog -p 2368:2368 -d us.gcr.io/my_project_id/my-ghost-blog I can also correctly create a pod using the following method from another tutorial: kubectl run ghost --image=us.gcr.io/my_project_id/my-ghost-blog --port=2368 When I do that I can curl the blog on the internal IP from within the cluster, and get the following output from kubectl get pod: Name: ghosty-nqgt0 Namespace: default Image(s): us.gcr.io/my_project_id/my-ghost-blog Node: very-long-node-name/10.240.51.18 Labels: run=ghost Status: Running Reason: Message: IP: 10.216.0.9 Replication Controllers: ghost (1/1 replicas created) Containers: ghosty: Image: us.gcr.io/my_project_id/my-ghost-blog Limits: cpu: 100m State: Running Started: Fri, 04 Sep 2015 12:18:44 -0400 Ready: True Restart Count: 0 Conditions: Type Status Ready True Events: ... The problem arises when I instead try to create the pod from a yaml file, per the Wordpress tutorial. Here's the yaml: metadata: name: ghost labels: name: ghost spec: containers: - image: us.gcr.io/my_project_id/my-ghost-blog name: ghost env: - name: NODE_ENV value: production - name: VIRTUAL_HOST value: myghostblog.com ports: - containerPort: 2368 When I run kubectl create -f ghost.yaml, the pod is created, but is never ready: > kubectl get pod ghost NAME READY STATUS RESTARTS AGE ghost 0/1 Running 11 3m The pod continuously restarts, as confirmed by the output of kubectl describe pod ghost: Name: ghost Namespace: default Image(s): us.gcr.io/my_project_id/my-ghost-blog Node: very-long-node-name/10.240.51.18 Labels: name=ghost Status: Running Reason: Message: IP: 10.216.0.12 Replication Controllers: <none> Containers: ghost: Image: us.gcr.io/my_project_id/my-ghost-blog Limits: cpu: 100m State: Running Started: Fri, 04 Sep 2015 14:08:20 -0400 Ready: False Restart Count: 10 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Fri, 04 Sep 2015 14:03:20 -0400 Fri, 04 Sep 2015 14:03:20 -0400 1 {scheduler } scheduled Successfully assigned ghost to very-long-node-name Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD created Created with docker id dbbc27b4d280 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD started Started with docker id dbbc27b4d280 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id ceb14ba72929 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id ceb14ba72929 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id 0b8957fe9b61 Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id 0b8957fe9b61 Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 
{kubelet very-long-node-name} spec.containers{ghost} created Created with docker id edaf0df38c01 Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id edaf0df38c01 Fri, 04 Sep 2015 14:03:50 -0400 Fri, 04 Sep 2015 14:03:50 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id d33f5e5a9637 ... This cycle of created/started goes on forever, if I don't kill the pod. The only difference from the successful pod is the lack of a replication controller. I don't expect this is the problem because the tutorial mentions nothing about rc. Why is this happening? How can I create a successful pod from config file? And where would I find more verbose logs about what is going on? ### Response: If the same docker image is working via kubectl run but not working in a pod, then something is wrong with the pod spec. Compare the full output of the pod as created from spec and as created by rc to see what differs by running kubectl get pods <name> -o yaml for both. Shot in the dark: is it possible the env vars specified in the pod spec are causing it to crash on startup?
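A minimal way to do the comparison and log-digging suggested in the response; the pod names are the ones from the question, so substitute whatever kubectl get pods shows for you:
# dump both pods and diff them to spot differences in env, command, ports, etc.
kubectl get pod ghost -o yaml > pod-from-spec.yaml
kubectl get pod ghosty-nqgt0 -o yaml > pod-from-rc.yaml
diff pod-from-spec.yaml pod-from-rc.yaml

# container logs are usually the fastest way to see why it keeps restarting
kubectl logs ghost
kubectl logs ghost --previous   # logs of the last crashed container, if your kubectl version supports --previous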
I'm trying to get a ghost blog deployed on GKE, working off of the persistent disks with WordPress tutorial. I have a working container that runs fine manually on a GKE node: docker run -d --name my-ghost-blog -p 2368:2368 -d us.gcr.io/my_project_id/my-ghost-blog I can also correctly create a pod using the following method from another tutorial: kubectl run ghost --image=us.gcr.io/my_project_id/my-ghost-blog --port=2368 When I do that I can curl the blog on the internal IP from within the cluster, and get the following output from kubectl get pod: Name: ghosty-nqgt0 Namespace: default Image(s): us.gcr.io/my_project_id/my-ghost-blog Node: very-long-node-name/10.240.51.18 Labels: run=ghost Status: Running Reason: Message: IP: 10.216.0.9 Replication Controllers: ghost (1/1 replicas created) Containers: ghosty: Image: us.gcr.io/my_project_id/my-ghost-blog Limits: cpu: 100m State: Running Started: Fri, 04 Sep 2015 12:18:44 -0400 Ready: True Restart Count: 0 Conditions: Type Status Ready True Events: ... The problem arises when I instead try to create the pod from a yaml file, per the Wordpress tutorial. Here's the yaml: metadata: name: ghost labels: name: ghost spec: containers: - image: us.gcr.io/my_project_id/my-ghost-blog name: ghost env: - name: NODE_ENV value: production - name: VIRTUAL_HOST value: myghostblog.com ports: - containerPort: 2368 When I run kubectl create -f ghost.yaml, the pod is created, but is never ready: > kubectl get pod ghost NAME READY STATUS RESTARTS AGE ghost 0/1 Running 11 3m The pod continuously restarts, as confirmed by the output of kubectl describe pod ghost: Name: ghost Namespace: default Image(s): us.gcr.io/my_project_id/my-ghost-blog Node: very-long-node-name/10.240.51.18 Labels: name=ghost Status: Running Reason: Message: IP: 10.216.0.12 Replication Controllers: <none> Containers: ghost: Image: us.gcr.io/my_project_id/my-ghost-blog Limits: cpu: 100m State: Running Started: Fri, 04 Sep 2015 14:08:20 -0400 Ready: False Restart Count: 10 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Fri, 04 Sep 2015 14:03:20 -0400 Fri, 04 Sep 2015 14:03:20 -0400 1 {scheduler } scheduled Successfully assigned ghost to very-long-node-name Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD created Created with docker id dbbc27b4d280 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD started Started with docker id dbbc27b4d280 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id ceb14ba72929 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id ceb14ba72929 Fri, 04 Sep 2015 14:03:27 -0400 Fri, 04 Sep 2015 14:03:27 -0400 1 {kubelet very-long-node-name} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id 0b8957fe9b61 Fri, 04 Sep 2015 14:03:30 -0400 Fri, 04 Sep 2015 14:03:30 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} created Created with docker id 0b8957fe9b61 Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet 
very-long-node-name} spec.containers{ghost} created Created with docker id edaf0df38c01 Fri, 04 Sep 2015 14:03:40 -0400 Fri, 04 Sep 2015 14:03:40 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id edaf0df38c01 Fri, 04 Sep 2015 14:03:50 -0400 Fri, 04 Sep 2015 14:03:50 -0400 1 {kubelet very-long-node-name} spec.containers{ghost} started Started with docker id d33f5e5a9637 ... This cycle of created/started goes on forever, if I don't kill the pod. The only difference from the successful pod is the lack of a replication controller. I don't expect this is the problem because the tutorial mentions nothing about rc. Why is this happening? How can I create a successful pod from config file? And where would I find more verbose logs about what is going on?
I have been thinking of ways to enable Swift as a volume plugin. Volume plugins for cloud block storage (EBS, Cinder, persistent disk) are straightforward when compared to object storage. The main difference is that block storage disks can be provisioned and attached to the VM on which the kubelet is running, and then mounted into the container. After that, the disk behaves like a local file system and does not need any extra care. Read-only mounts of object storage are also fairly straightforward, and the functionality can be similar to gitRepo. On the other hand, writing back to object storage gets tricky. Two ways come to mind: Some sort of user-space file system plugin which maps to the remote system. Use a sidecar container whose sole purpose would be to sync a particular directory to the object storage system. Obviously both of the approaches would be significantly slow in terms of performance, which will probably be directly proportional to network bandwidth.
###Instruction: Kubernetes volume support is listed here https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md , however it does not currently meet my needs. I need to support object storage, both Openstack Swift API and S3 compatible API. (k8s does support AWS directly but I can't use it to connect to a different private object storage that uses the s3 API). Is there any way to extend the functionality to support these two object storage APIs? I need to be able to mount from object storage into pods. EDIT: For now I don't have to support swift API, just the S3 API. Keep in mind it's not actually AWS storage, it's merely using S3 compatible API ### Response: I have been thinking of ways to enable swift as a volume plugin. Volume plugins for cloud block storage (EBS, cinder, persistent disk) are straight forward when compared to object storage. The main difference being block storage disks can be provisioned and attached to the vm on which kubelet is running, which can be mounted into the container. After which, it will behave like local file system and does not need any extra care. Read only mounts of object storage are also fairly straight forward and the functionality can be similar to gitRepo. On the other hand, writing back to object storage gets tricky. 2 ways come to mind: Some sort user space file system plugin which will map to the remote system. Use a side car container whose sole purpose would be to sync a particular dir to object storage system. Obviously both of the approaches would be significantly slow in terms of performance which will probably be directly proportional to network bandwidth.
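A very rough sketch of the sidecar idea from the response: the application writes to a shared emptyDir and a sidecar periodically syncs that directory to an S3-compatible endpoint. The image names, bucket, and endpoint below are placeholders, not real published artifacts:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-s3-sync
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: app
    image: example/my-app              # placeholder application image
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: s3-sync                      # sidecar whose sole purpose is pushing /data to object storage
    image: example/s3-sync             # placeholder image containing the aws CLI or s3cmd
    command: ["sh", "-c", "while true; do aws s3 sync /data s3://my-bucket --endpoint-url $S3_ENDPOINT; sleep 60; done"]
    env:
    - name: S3_ENDPOINT
      value: https://s3.example.internal   # placeholder S3-compatible endpoint
    volumeMounts:
    - name: shared-data
      mountPath: /data
The caveats from the response apply: this is eventually consistent at best and bounded by network bandwidth.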
Kubernetes volume support is listed here https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/volumes.md , however it does not currently meet my needs. I need to support object storage, both Openstack Swift API and S3 compatible API. (k8s does support AWS directly but I can't use it to connect to a different private object storage that uses the s3 API). Is there any way to extend the functionality to support these two object storage APIs? I need to be able to mount from object storage into pods. EDIT: For now I don't have to support swift API, just the S3 API. Keep in mind it's not actually AWS storage, it's merely using S3 compatible API
You do not need to run docker pull manually. The pod definition contains the image name to pull and Kubernetes will pull the image for you. You have several options in terms of defining how Kubernetes will decide to pull the image, using the imagePullPolicy: definition in your pod spec. Much of this is documented here, but basically you can pull if the image is not present, pull always, never update (once the image is local). Hopefully that doc can get you started.
###Instruction: Does Kubernetes download Docker image automatically when i create a pod or should I use Docker pull manually to Download the image locally? ### Response: You do not need to run docker pull manually. The pod definition contains the image name to pull and Kubernetes will pull the image for you. You have several options in terms of defining how Kubernetes will decide to pull the image, using the imagePullPolicy: definition in your pod spec. Much of this is documented here, but basically you can pull if the image is not present, pull always, never update (once the image is local). Hopefully that doc can get you started.
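The three behaviours described in the response correspond to the imagePullPolicy values below; a minimal pod sketch with an arbitrary image:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    # IfNotPresent: pull only if the image is not already on the node (default for most tags)
    # Always:       pull on every container start (default when the tag is :latest)
    # Never:        never pull; the image must already be present on the node
    imagePullPolicy: IfNotPresent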
Does Kubernetes download Docker image automatically when i create a pod or should I use Docker pull manually to Download the image locally?
At the moment there is not any API to directly manipulate DNS within the cluster. It's something we want to do but have not tackled yet. Can you explain what you're hoping to achieve? Edit: if you want to run Consul, nothing is stopping you. Our DNS server is just one implementation.
###Instruction: I am running a Kubernetes Cluster on Google Container Engine with the default SkyDNS implementation enabled. How can I add custom DNS Entries in Google Container Engine? Is there any way to: Add custom DNS Entries to SkyDNS that will persist if the SkyDNS container is restarted? Change the default DNS Settings on my nodes to a custom DNS Server, which in-turn will forward to SkyDNS? Change the Forward DNS of SkyDNS to my custom DNS Server? Edit: If trying to resolve a Google Compute Engine VM from inside a container, the default DNS Server that Google Container Engine uses will resolve VM Names in the formats: <vm-name>.c.<project-name>.internal <vm-name>.<project-id>.google.internal ### Response: At the moment there is not any API to directly manipulate DNS within the cluster. It's something we want to do but have not tackled yet. Can you explain what you're hoping to achieve? Edit: if you want to run Consul, nothing is stopping you. Our DNS server is just one implementation.
I am running a Kubernetes Cluster on Google Container Engine with the default SkyDNS implementation enabled. How can I add custom DNS Entries in Google Container Engine? Is there any way to: Add custom DNS Entries to SkyDNS that will persist if the SkyDNS container is restarted? Change the default DNS Settings on my nodes to a custom DNS Server, which in-turn will forward to SkyDNS? Change the Forward DNS of SkyDNS to my custom DNS Server? Edit: If trying to resolve a Google Compute Engine VM from inside a container, the default DNS Server that Google Container Engine uses will resolve VM Names in the formats: <vm-name>.c.<project-name>.internal <vm-name>.<project-id>.google.internal
I have no 'inside' knowledge of Borg so this answer is based only on what Google themselves have published here. For much greater detail, you should look into that paper. Section 8 makes specific reference to Kubernetes and is the basis of this answer (along with Kubernetes' own docs): 1) Key differences: Borg groups work by 'job'; Kubernetes adds 'labels' for greater flexibility. Borg uses an IP-per-machine design; Kubernetes uses a network-per-machine and IP-per-Pod design to allow late-binding of ports (letting developers choose ports, not the infrastructure). Borg's API seems to be extensive and rich, but with a steep learning curve; Kubernetes APIs are presumably simpler. At least, for someone who hasn't worked with Borg, the Kubernetes API seems pretty clean and understandable. 2) Borg seems to use LMCTFY as its container technology. Kubernetes allows the use of Docker or rkt. Some other obvious differences are that Borg is not open source and not available for use outside of Google, while Kubernetes is both of those things. Borg has been in production use for more than 10 years, while Kubernetes just hit v1.0 in July 2015. Hope this helps. Check out that Borg paper; it is worth the time to read the whole thing.
###Instruction: As I know, Google's Kubernetes is based on Google's Borg; however, it seems like Borg is larger than Kubernetes. My understanding is that Borg is a large system containing a sub-system like Kubernetes and its own containers like Docker. So, I would like to know: 1) In term of containers cluster management, what's the key difference between Borg (sub-system inside) and Kubernetes? 2) In term of container technology, what's the key difference between Borg (sub-system inside) and Docker? ### Response: I have no 'inside' knowledge of Borg so this answer is based only on what Google themselves have published here. For much greater detail, you should look into that paper. Section 8 makes specific reference to Kubernetes and is the basis of this answer (along with Kubernetes own docs): 1) Key differences: Borg groups work by 'job'; Kubernetes adds 'labels' for greater flexibility. Borg uses an IP-per-machine design; Kubernetes uses a network-per-machine and IP-per-Pod design to allow late-binding of ports (letting developers choose ports, not the infrastructure). Borg's API seems to be extensive and rich, but with a steep learning curve; Kubernetes APIs are presumably simpler. At least, for someone who hasn't worked with Borg, the Kubernetes API seems pretty clean and understandable. 2) Borg seems to use LMCTFY as its container technology. Kubernetes allows the use of Docker or rkt. Some other obvious differences are the Borg is not open source and not available for use outside of Google, while Kubernetes is both of those things. Borg has been in production use for more than 10 years, while Kubernetes just hit v1.0 in July 2015. Hope this helps. Check out that Borg paper; it is worth the time to read the whole thing.
As I know, Google's Kubernetes is based on Google's Borg; however, it seems like Borg is larger than Kubernetes. My understanding is that Borg is a large system containing a sub-system like Kubernetes and its own containers like Docker. So, I would like to know: 1) In term of containers cluster management, what's the key difference between Borg (sub-system inside) and Kubernetes? 2) In term of container technology, what's the key difference between Borg (sub-system inside) and Docker?
You should be able to use a rolling update specifying the same image name that you are currently using: kubectl rolling-update <replication-controller-name> --image=foobar/myimage:[branch]-latest This will (behind the scenes) create a new replication controller that is a copy of your existing replication controller with the "new" image, and then stepwise resize each of the replication controllers until the old one has zero pods and the new one has the desired number of pods, finally deleting the old one and renaming the new one to use the old name.
###Instruction: I have a kubernetes RC/pod consisting of containers with images like: foobar/my-image:[branch]-latest where "branch" is the git branch ("master", etc). What's the best way to use rolling-update to force the RC to re-pull the images to get the latest version? The brute force method is to simply delete the RC and re-create it, but that causes downtime for the service. Is rolling update only possible if you specify an exact image tag, rather than something like "latest"? ### Response: You should be able to use a rolling update specifying the same image name that you are currently using: kubectl rolling-update <replication-controller-name> --image=foobar/myimage:[branch]-latest This will (behind the scenes) create a new replication controller that is a copy of your existing replication controller with the "new" image, and then stepwise resize each of the replication controllers until the old one has zero pods and the new one has the desired number of pods, finally deleting the old one and renaming the new one to use the old name.
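One hedged addition: for a floating tag like [branch]-latest to actually be re-pulled on every node, the pod template usually also needs imagePullPolicy: Always, since a tag that is not literally :latest defaults to IfNotPresent and a node holding an older image with that tag will keep using it. The relevant fragment of the replication controller template:
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: foobar/my-image:master-latest
        imagePullPolicy: Always   # force a fresh pull of the floating tag on every start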
I have a kubernetes RC/pod consisting of containers with images like: foobar/my-image:[branch]-latest where "branch" is the git branch ("master", etc). What's the best way to use rolling-update to force the RC to re-pull the images to get the latest version? The brute force method is to simply delete the RC and re-create it, but that causes downtime for the service. Is rolling update only possible if you specify an exact image tag, rather than something like "latest"?
If you use Docker v1.7 or above, you may have run into a cAdvisor bug, where network stats are not collected correctly. See https://github.com/kubernetes/kubernetes/issues/13189 The bug has already been fixed in cAdvisor, and Kubernetes will include the fix soon.
###Instruction: I've the a lot of error logs reported by kubelet : Sep 07 09:43:51 kubenode-1 kubelet[10320]: I0907 09:43:51.651224 10320 container.go:369] Failed to update stats for container "/docker/01ad0eff434033752c1f39944e9965e38a07081fcbfe26dc35358bb63be18082": failed to read stat from "/sys/class/net/veth2fc2d33/statistics/rx_bytes" for device "veth2fc2d33", continuing to Sep 07 09:43:56 kubenode-1 kubelet[10320]: I0907 09:43:56.051022 10320 container.go:369] Failed to update stats for container "/": failed to read stat from "/sys/class/net/calic1976c4e52f/statistics/rx_bytes" for device "calic1976c4e52f", continuing to push stats I don't know what is the problem exactly. Please find below further information docker info Containers: 27 Images: 121 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 175 Dirperm1 Supported: true Execution Driver: native-0.2 Kernel Version: 3.16.0-4-amd64 Operating System: Debian GNU/Linux 8 (jessie) CPUs: 2 Total Memory: 2.95 GiB Name: kubenode-1 ID: LXO4:TD3E:ZAL5:AUWE:PN6W:KFZX:S4QR:AX6V:776M:VHVT:7Z3O:O72V Username: smaine Registry: [https://index.docker.io/v1/] kubelet --version Kubernetes v1.0.3 I'm running the kubernetes cluster on debian jessie. Furthermore i have these errors : Sep 07 09:37:19 kubenode-1 kubelet[10320]: W0907 09:37:19.148453 10320 manager.go:1161] No ref for pod '814c1a33de45655e8cff2044485913ab568b9ab858ed2c5aa30d0034b82a6660' Sep 07 09:37:29 kubenode-1 kubelet[10320]: W0907 09:37:29.265237 10320 manager.go:1161] No ref for pod '06a3e276f8b3dca0c3ea20b5feee4ab9b5ee97ef44aad1aef2f0102d5ddfa40c' Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.065126 10320 manager.go:1161] No ref for pod '686039f754799616a0957d3fcc516bc46b9601a2c6ac304a010397142fc06cd0' Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.075288 10320 kubelet.go:1343] Orphaned volume "c2f1c714-52f8-11e5-b108-5254000ec1d7/influxdb-persistent-storage" found, tearing down volume Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.914188 10320 kubelet.go:1343] Orphaned volume "c2f1c714-52f8-11e5-b108-5254000ec1d7/default-token-v2j1u" found, tearing down volume Sep 07 09:38:49 kubenode-1 kubelet[10320]: E0907 09:38:49.224758 10320 kubelet.go:682] Image garbage collection failed: failed to find information for the filesystem labeled "docker-images" I'm using calico as network overlay but i don't think it's related. Please let me know if you need more info. Regards, Smana ### Response: If you use Docker v1.7 or above, you may have run into a cadvisor bug, where network stats are not collected correctly. See https://github.com/kubernetes/kubernetes/issues/13189 The bug has already being fixed in cadvisor, and kubernetes will include the fix soon.
I've the a lot of error logs reported by kubelet : Sep 07 09:43:51 kubenode-1 kubelet[10320]: I0907 09:43:51.651224 10320 container.go:369] Failed to update stats for container "/docker/01ad0eff434033752c1f39944e9965e38a07081fcbfe26dc35358bb63be18082": failed to read stat from "/sys/class/net/veth2fc2d33/statistics/rx_bytes" for device "veth2fc2d33", continuing to Sep 07 09:43:56 kubenode-1 kubelet[10320]: I0907 09:43:56.051022 10320 container.go:369] Failed to update stats for container "/": failed to read stat from "/sys/class/net/calic1976c4e52f/statistics/rx_bytes" for device "calic1976c4e52f", continuing to push stats I don't know what is the problem exactly. Please find below further information docker info Containers: 27 Images: 121 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 175 Dirperm1 Supported: true Execution Driver: native-0.2 Kernel Version: 3.16.0-4-amd64 Operating System: Debian GNU/Linux 8 (jessie) CPUs: 2 Total Memory: 2.95 GiB Name: kubenode-1 ID: LXO4:TD3E:ZAL5:AUWE:PN6W:KFZX:S4QR:AX6V:776M:VHVT:7Z3O:O72V Username: smaine Registry: [https://index.docker.io/v1/] kubelet --version Kubernetes v1.0.3 I'm running the kubernetes cluster on debian jessie. Furthermore i have these errors : Sep 07 09:37:19 kubenode-1 kubelet[10320]: W0907 09:37:19.148453 10320 manager.go:1161] No ref for pod '814c1a33de45655e8cff2044485913ab568b9ab858ed2c5aa30d0034b82a6660' Sep 07 09:37:29 kubenode-1 kubelet[10320]: W0907 09:37:29.265237 10320 manager.go:1161] No ref for pod '06a3e276f8b3dca0c3ea20b5feee4ab9b5ee97ef44aad1aef2f0102d5ddfa40c' Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.065126 10320 manager.go:1161] No ref for pod '686039f754799616a0957d3fcc516bc46b9601a2c6ac304a010397142fc06cd0' Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.075288 10320 kubelet.go:1343] Orphaned volume "c2f1c714-52f8-11e5-b108-5254000ec1d7/influxdb-persistent-storage" found, tearing down volume Sep 07 09:37:31 kubenode-1 kubelet[10320]: W0907 09:37:31.914188 10320 kubelet.go:1343] Orphaned volume "c2f1c714-52f8-11e5-b108-5254000ec1d7/default-token-v2j1u" found, tearing down volume Sep 07 09:38:49 kubenode-1 kubelet[10320]: E0907 09:38:49.224758 10320 kubelet.go:682] Image garbage collection failed: failed to find information for the filesystem labeled "docker-images" I'm using calico as network overlay but i don't think it's related. Please let me know if you need more info. Regards, Smana
This is a rather broadly asked question, so I will (and can) answer only in a rather broad manner. There are a lot of key concepts that have changed. These are the most important ones and you'll need some time to get into them, but they are a big improvement over OpenShift v2: Cartridges vs. Docker Containers Gears vs. Kubernetes Pods Broker vs. Kubernetes Master Release of Red Hat Enterprise Linux Atomic Host When you study the links below you will understand that (to exaggerate a bit) OpenShift v3 has basically nothing to do with v2 besides the name, the logo and the PaaS focus. But it's still a great tool and IMO it has set new standards in the PaaS world. (No, I don't work for Red Hat ;) What's New: https://docs.openshift.com/enterprise/3.0/whats_new/overview.html https://docs.openshift.com/enterprise/3.0/architecture/overview.html For starters, Docker & Kubernetes: https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/ Pretty new: Creating a Kubernetes Cluster to Run Docker Formatted Container Images EDIT 2016_06_30: Sorry for necro'ing this old post, but I wanted to add this quick, fun and very informative video about Kubernetes: https://youtu.be/4ht22ReBjno
###Instruction: I'm searching for a main difference between OpenShift V3 and V2. Is OpenShift V2 working like this?: https://www.openshift.com/walkthrough/how-it-works And how are Docker and Kubernetes working in V3? Can someone give me a clear explanation about the build-up of OpenShift V2 and V3 ### Response: This is a rather broadly asked question, so I will (and can) answer only in a rather broad manner. There are a lot of key concepts that have changed. These are the most important ones and you'll need some time to get into it, but they are a big improvement to OpenShift v2.: Cartridges vs. Docker Containers Gears vs. Kubernetes Pods Broker vs. Kubernetes Master Release of Red Hat Enterprise Linux Atomic Host When you'll study the links below you will understand, that (really exaggerated) OpenShift v3 has basically nothing to do with v2 besides the name, the logo and the PaaS focus. But it's still a great tool and IMO has set new standards in the PaaS-world. (No, I don't work for RedHat ;) What's New: https://docs.openshift.com/enterprise/3.0/whats_new/overview.html https://docs.openshift.com/enterprise/3.0/architecture/overview.html For starters; Docker & Kubernetes: https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/ Pretty new: Creating a Kubernetes Cluster to Run Docker Formatted Container Images EDIT 2016_06_30: Sorry for necro'ing this old post, but I wanted to add this quick, fun and very informative video about Kubernetes: https://youtu.be/4ht22ReBjno
I'm searching for a main difference between OpenShift V3 and V2. Is OpenShift V2 working like this?: https://www.openshift.com/walkthrough/how-it-works And how are Docker and Kubernetes working in V3? Can someone give me a clear explanation about the build-up of OpenShift V2 and V3
Yes, Kubernetes supports container and image garbage collection. More details in https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/ You may want to check whether that particular node has enough disk space allocated to host the images for the pods assigned to it.
###Instruction: Do sporadic disk cleanup operations happen automatically in Kubernetes or should this be a scheduled "docker RM/RMI" to remove discarded images? A single node in my dev K8s cluster (other nodes are fine) keeps running out of disk space in / and the following message is show in the K8s UI events: reason: freeDiskSpaceFailed Message: failed to garbage collect required amount of images. Wanted to free 2069743207, but freed 0 ### Response: Yes,Kubernetes supports container and image garbage collection. More details in https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/ You may want to check whether that particular node has enough disk space allocated to host the images for the pods assigned to it.
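If you want to tune the garbage collection rather than clean up by hand, the kubelet exposes flags along these lines; the names are as documented for the kubelet, but check your version's --help since defaults have changed over time:
# image GC: start deleting unused images above 90% disk usage, stop once usage drops below 80%
kubelet --image-gc-high-threshold=90 --image-gc-low-threshold=80 \
        --minimum-container-ttl-duration=1m \
        --maximum-dead-containers-per-container=2 \
        --maximum-dead-containers=100 ...

# one-off manual cleanup on a node that is already full
docker rmi $(docker images -q --filter "dangling=true")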
Do sporadic disk cleanup operations happen automatically in Kubernetes or should this be a scheduled "docker RM/RMI" to remove discarded images? A single node in my dev K8s cluster (other nodes are fine) keeps running out of disk space in / and the following message is show in the K8s UI events: reason: freeDiskSpaceFailed Message: failed to garbage collect required amount of images. Wanted to free 2069743207, but freed 0
UPDATE According to the docs, the format is now: _my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local See the related doc here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods ORIGINAL ANSWER: As of this date, this is actually not possible... but it is being looked at by the Kubernetes team. See this issue: https://github.com/kubernetes/kubernetes/issues/13552 UPDATE: DNS is available for Pods since 09/2015. See PR: https://github.com/kubernetes/kubernetes/pull/13759 In short: this will give pods DNS records in the form <podIP>.<namespace>.pod.<clusterSuffix>. Currently this can be disabled, but it is either on for all pods or off.
###Instruction: On GKE, kube-dns is running on my nodes, I can see the docker containers. I do have access to Services by name, which is great for all these applications where load balancing is a perfectly suitable solution, but how would I use the DNS to access individual pods? I know I can look up specific pods in the API, but I need to update the hosts file myself, and keep watching the pod list. DNS is supposed to do that for me so how is it meant to be used within a pod? The Kubernetes doc says the DNS info needs to be passed to the kubelet but I have no access to that on GKE that I know of, so is it just not setup that way on GKE or is there something to do to activate it? Some of my services (zookeeper in particular) is aware of other nodes on its own, and tries to connect to them by host name (that is pod name) and that fails unless I update the hosts file myself. I would like to use the integrated DNS service for that. Any guidance on how to do this would be appreciated. Thanks ### Response: UPDATE According to the docs, the format is now: _my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local See the related doc here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods ORIGINAL ANSWER: as of this date, this is actually not possible... but it is being looked at by the Kubernetes team. See this issue : https://github.com/kubernetes/kubernetes/issues/13552 UPDATE: DNS is available for Pods since 09/2015 See PR:https://github.com/kubernetes/kubernetes/pull/13759 in short: This will give pods dns in the form of <podIP>.<namespace>.pod.<clusterSuffix> Currently can be disabled, but is either on for all pods or off.
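Putting the name formats from the update together, a quick sanity check from inside any pod that has DNS tools installed (names assume the default cluster.local suffix and the default namespace):
nslookup my-svc.default.svc.cluster.local            # service A record
nslookup 10-216-0-9.default.pod.cluster.local        # pod record: the pod IP with dots replaced by dashes
# named ports on a service are published as SRV records, e.g. with dig from bind-utils:
dig _my-port-name._tcp.my-svc.default.svc.cluster.local SRV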
On GKE, kube-dns is running on my nodes, I can see the docker containers. I do have access to Services by name, which is great for all these applications where load balancing is a perfectly suitable solution, but how would I use the DNS to access individual pods? I know I can look up specific pods in the API, but I need to update the hosts file myself, and keep watching the pod list. DNS is supposed to do that for me so how is it meant to be used within a pod? The Kubernetes doc says the DNS info needs to be passed to the kubelet but I have no access to that on GKE that I know of, so is it just not setup that way on GKE or is there something to do to activate it? Some of my services (zookeeper in particular) is aware of other nodes on its own, and tries to connect to them by host name (that is pod name) and that fails unless I update the hosts file myself. I would like to use the integrated DNS service for that. Any guidance on how to do this would be appreciated. Thanks
All Docker images can be run on Kubernetes -- it uses Docker to run the images. You can expose ports from containers just like when using Docker directly, pass in environment variables, mount storage volumes from the host into the container, and more. If you have anything particular in mind, I'd be interested in hearing about any image you find that can't be run using Kubernetes.
###Instruction: How can I find out if a given Docker image can be run using Kubernetes? What should I do to help ensure that my images will run well in any Kubernetes-managed environment? ### Response: All Docker images can be run on Kubernetes -- it uses Docker to run the images. You can expose ports from containers just like when using Docker directly, pass in environment variables, mount storage volumes from the host into the container, and more. If you have anything particular in mind, I'd be interested in hearing about any image you find that can't be run using Kubernetes.
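As a concrete illustration, the usual docker run flags map onto pod spec fields roughly like this (image name and values are arbitrary):
apiVersion: v1
kind: Pod
metadata:
  name: any-docker-image
spec:
  containers:
  - name: app
    image: example/some-image:1.0   # any image you could "docker run"
    ports:
    - containerPort: 8080           # like -p 8080
    env:
    - name: APP_MODE                # like -e APP_MODE=production
      value: production
    volumeMounts:
    - name: host-data               # like -v /srv/data:/data
      mountPath: /data
  volumes:
  - name: host-data
    hostPath:
      path: /srv/data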
How can I find out if a given Docker image can be run using Kubernetes? What should I do to help ensure that my images will run well in any Kubernetes-managed environment?
This is something we want to support, and is/was underway (see PR 7053), but it is not yet possible. You can observe the status/progress on: https://github.com/kubernetes/kubernetes/issues/341 It's possible to work around the lack of this feature by creating a new label that would match the selector you'd like and then creating a trivial selector that matches that new label. FYI, you can see the API specification for ReplicationController's spec here: http://kubernetes.io/v1.0/docs/api-reference/definitions.html#_v1_replicationcontrollerspec The schema is listed as "any", but it's actually a map of string to string, like labels.
###Instruction: Is it possible to specify a set-based label selector for a replication controller? I cannot figure out the syntax to do so in the request json. I can't find anything in the documentation, so if you have a link to the appropriate documentation, that would be helpful. ### Response: This is something we want to support, and is/was underway (see PR 7053), but it is not yet possible. You can observe the status/progress on: https://github.com/kubernetes/kubernetes/issues/341 It's possible to work around the lack of this feature by creating a new label that would match the selector you'd like and then create a trivial selector that matches that new label. FYI, you can see the API specification for ReplicationController's spec here: http://kubernetes.io/v1.0/docs/api-reference/definitions.html#_v1_replicationcontrollerspec The schema is listed as "any", but it's actually a map of string to string, like labels.
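The workaround described above looks roughly like this: add one purpose-made label to the pod template and point the replication controller's equality-only selector at it, even though the grouping you really wanted was set-based (label and image names here are made up):
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-canary
spec:
  replicas: 3
  selector:
    track: frontend-canary      # trivial equality selector on the new label
  template:
    metadata:
      labels:
        track: frontend-canary  # the new label the selector matches
        tier: frontend          # keep the descriptive labels you wanted to select on
    spec:
      containers:
      - name: app
        image: example/frontend:1.0   # placeholder image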
Is it possible to specify a set-based label selector for a replication controller? I cannot figure out the syntax to do so in the request json. I can't find anything in the documentation, so if you have a link to the appropriate documentation, that would be helpful.
It's almost certain that the initial machine size was too small, and it ran out of memory (or something similar). To use larger machines, follow this link[1] and set an environment variable before you bring up your cluster. In this case, something like: export MINION_SIZE=t2.large Should run forever.[2] [1] http://kubernetes.io/docs/getting-started-guides/aws/ [2] Or reasonable approximation thereof. :)
###Instruction: After running Kubernetes on AWS for a few days, my master node goes dead. This has happened after setting up 2 different clusters. The pods are still running and available, but there's no way to manage / proxy. Question is why? Or alternatively, how do I replace the master node on AWS? Or alternatively, how do I debug the existing one? Or alternatively, how do I use something other than a t2.micro, which may be too small to run master? Symptom: $ kubectl get pods error: couldn't read version from server: Get https://**.###.###.###/api: dial tcp **.###.###.###:443: connection refused Edit: This is what I found after further debugging: goroutine 571 [running]: net/http.func·018() /usr/src/go/src/net/http/transport.go:517 +0x2a net/http.(*Transport).CancelRequest(0xc2083c0630, 0xc209750d00) /usr/src/go/src/net/http/transport.go:284 +0x97 github.com/coreos/go-etcd/etcd.func·003() /go/src/github.com/GoogleCloudPlatform/kubernetes/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go:159 +0x236 created by github.com/coreos/go-etcd/etcd.(*Client).SendRequest /go/src/github.com/GoogleCloudPlatform/kubernetes/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go:168 +0x3e3 goroutine 1 [IO wait, 12 minutes]: net.(*pollDesc).Wait(0xc20870e760, 0x72, 0x0, 0x0) /usr/src/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc20870e760, 0x0, 0x0) /usr/src/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).accept(0xc20870e700, 0x0, 0x7f4424a42008, 0xc20930a168) /usr/src/go/src/net/fd_unix.go:419 +0x40b net.(*TCPListener).AcceptTCP(0xc20804bec0, 0x5bccce, 0x0, 0x0) /usr/src/go/src/net/tcpsock_posix.go:234 +0x4e net/http.tcpKeepAliveListener.Accept(0xc20804bec0, 0x0, 0x0, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1976 +0x4c net/http.(*Server).Serve(0xc20887ec60, 0x7f4424a66dc8, 0xc20804bec0, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1728 +0x92 net/http.(*Server).ListenAndServe(0xc20887ec60, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1718 +0x154 github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/app.(*APIServer).Run(0xc2081f0e00, 0xc20806e0e0, 0x0, 0xe, 0x0, 0x0) /go/src/github.com/GoogleCloudPlatform/kubernetes/_output/dockerized/go/src/github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/app/server.go:484 +0x264a main.main() /go/src/github.com/GoogleCloudPlatform/kubernetes/_output/dockerized/go/src/github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/apiserver.go:48 +0x154 ### Response: It's almost certainly that the initial size of machine was too low, and ran out of memory (or something similar). To use a larger cluster size, follow this link[1] and set an environment variable before you bring up your cluster. In this case, something like: export MINION_SIZE=t2.large Should run forever.[2] [1] http://kubernetes.io/docs/getting-started-guides/aws/ [2] Or reasonable approximation thereof. :)
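For reference, the size knobs are plain environment variables read by kube-up before the cluster is created; something along these lines, with the caveat that variable names differ between releases and MASTER_SIZE in particular should be checked against your cluster/aws/config-default.sh rather than taken on faith:
export KUBERNETES_PROVIDER=aws
export MINION_SIZE=t2.large     # worker node instance type
export MASTER_SIZE=t2.medium    # master instance type, if your release supports this variable
cluster/kube-up.sh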
After running Kubernetes on AWS for a few days, my master node goes dead. This has happened after setting up 2 different clusters. The pods are still running and available, but there's no way to manage / proxy. Question is why? Or alternatively, how do I replace the master node on AWS? Or alternatively, how do I debug the existing one? Or alternatively, how do I use something other than a t2.micro, which may be too small to run master? Symptom: $ kubectl get pods error: couldn't read version from server: Get https://**.###.###.###/api: dial tcp **.###.###.###:443: connection refused Edit: This is what I found after further debugging: goroutine 571 [running]: net/http.func·018() /usr/src/go/src/net/http/transport.go:517 +0x2a net/http.(*Transport).CancelRequest(0xc2083c0630, 0xc209750d00) /usr/src/go/src/net/http/transport.go:284 +0x97 github.com/coreos/go-etcd/etcd.func·003() /go/src/github.com/GoogleCloudPlatform/kubernetes/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go:159 +0x236 created by github.com/coreos/go-etcd/etcd.(*Client).SendRequest /go/src/github.com/GoogleCloudPlatform/kubernetes/Godeps/_workspace/src/github.com/coreos/go-etcd/etcd/requests.go:168 +0x3e3 goroutine 1 [IO wait, 12 minutes]: net.(*pollDesc).Wait(0xc20870e760, 0x72, 0x0, 0x0) /usr/src/go/src/net/fd_poll_runtime.go:84 +0x47 net.(*pollDesc).WaitRead(0xc20870e760, 0x0, 0x0) /usr/src/go/src/net/fd_poll_runtime.go:89 +0x43 net.(*netFD).accept(0xc20870e700, 0x0, 0x7f4424a42008, 0xc20930a168) /usr/src/go/src/net/fd_unix.go:419 +0x40b net.(*TCPListener).AcceptTCP(0xc20804bec0, 0x5bccce, 0x0, 0x0) /usr/src/go/src/net/tcpsock_posix.go:234 +0x4e net/http.tcpKeepAliveListener.Accept(0xc20804bec0, 0x0, 0x0, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1976 +0x4c net/http.(*Server).Serve(0xc20887ec60, 0x7f4424a66dc8, 0xc20804bec0, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1728 +0x92 net/http.(*Server).ListenAndServe(0xc20887ec60, 0x0, 0x0) /usr/src/go/src/net/http/server.go:1718 +0x154 github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/app.(*APIServer).Run(0xc2081f0e00, 0xc20806e0e0, 0x0, 0xe, 0x0, 0x0) /go/src/github.com/GoogleCloudPlatform/kubernetes/_output/dockerized/go/src/github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/app/server.go:484 +0x264a main.main() /go/src/github.com/GoogleCloudPlatform/kubernetes/_output/dockerized/go/src/github.com/GoogleCloudPlatform/kubernetes/cmd/kube-apiserver/apiserver.go:48 +0x154
Unfortunately I don't know anything about rethinkdb, but it's very reasonable to use a gcePersistentDisk to store the data. That way if the Kubernetes node running your pod dies, the pod can be restarted on another node (assuming you have more than one node in your Kubernetes cluster) and continue to access the data when it comes back up. I don't think there's any reason you need to use persistent volumes here; straight-up GCEPersistentDisk as the VolumeSource should be fine. I'm not sure why you're losing your data when you scale the RC down to 0 and back up to 1. My understanding is that the PD should be re-mounted.
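To make the "straight-up GCEPersistentDisk as the VolumeSource" suggestion concrete, here is a minimal sketch of a single-replica replication controller that mounts a pre-created PD; the disk name rethinkdb-data, the image, and the /data mount path are illustrative assumptions rather than details taken from the question:

apiVersion: v1
kind: ReplicationController
metadata:
  name: rethinkdb
spec:
  replicas: 1                      # keep at 1: a GCE PD can only be mounted read-write on one machine
  template:
    metadata:
      labels:
        db: rethinkdb
    spec:
      containers:
      - name: rethinkdb
        image: rethinkdb           # assumed image name
        volumeMounts:
        - name: rethinkdb-storage
          mountPath: /data         # assumed data directory for the database
      volumes:
      - name: rethinkdb-storage
        gcePersistentDisk:
          pdName: rethinkdb-data   # assumed: a disk created beforehand, e.g. with gcloud compute disks create
          fsType: ext4

Scaling this controller above one replica runs into the read-only restriction quoted in the question, which is why replicas stays at 1 while the disk is mounted writable.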
###Instruction: I'm having quite a bit of fun with the gcePersistentDisk in the context of pods inside of kubernetes: Currently I'm attempting to get my rethinkdb cluster to work well with a gcePersistentDisk mounted volume in order to facilitate backups, data recovery, data integrity, etc. This is proving a bit more difficult than I originally anticipated. So, I have a few questions: 1: Should I even be attempting to use the gcePersistentDisk for this use case? Or should I be using persistentVolumes, and using the file system/persistentVolumes on my host kubelets in order to persist the data, and only using gcePersistentDisk when I'm doing a backup? 2: [EDIT: FIGURED OUT] 3: Pretty sure this is just a bug, but if you attempt to scale up a pod with a gcePersistentDisk mounted as a volume, it does not throw the usual: 'The ReplicationController "rethinkdb" is invalid:spec.template.spec.volumes.GCEPersistentDisk.ReadOnly: invalid value 'false': ReadOnly must be true for replicated pods > 1, as GCE PD can only be mounted on multiple machines if it is read-only.', but rather just hangs on the command line and loops forever when I view the kubelet's logs. 4: Am I going completely in the wrong direction for solving this issue? And if so, how do I persist the DB data from my pods? ### Response: Unfortunately I don't know anything about rethinkdb, but it's very reasonable to use a gcePersistentDisk to store the data. That way if the Kubernetes node running your pod dies, the pod can be restarted on another node (assuming you have more than one node in your Kubernetes cluster) and continue to access the data when it comes back up. I don't think there's any reason you need to use persistent volumes here; straight-up GCEPersistentDisk as the VolumeSource should be fine. I'm not sure why you're losing your data when you scale the RC down to 0 and back up to 1. My understanding is that the PD should be re-mounted.
I'm having quite a bit of fun with the gcePersistentDisk in the context of pods inside of kubernetes: Currently I'm attempting to get my rethinkdb cluster to work well with a gcePersistentDisk mounted volume in order to facilitate backups, data recovery, data integrity, etc. This is proving a bit more difficult than I originally anticipated. So, I have a few questions: 1: Should I even be attempting to use the gcePersistentDisk for this use case? Or should I be using persistentVolumes, and using the file system/persistentVolumes on my host kubelets in order to persist the data, and only using gcePersistentDisk when I'm doing a backup? 2: [EDIT: FIGURED OUT] 3: Pretty sure this is just a bug, but if you attempt to scale up a pod with a gcePersistentDisk mounted as a volume, it does not throw the usual: 'The ReplicationController "rethinkdb" is invalid:spec.template.spec.volumes.GCEPersistentDisk.ReadOnly: invalid value 'false': ReadOnly must be true for replicated pods > 1, as GCE PD can only be mounted on multiple machines if it is read-only.', but rather just hangs on the command line and loops forever when I view the kubelet's logs. 4: Am I going completely in the wrong direction for solving this issue? And if so, how do I persist the DB data from my pods?
This will let you send stdin to a container: kubectl exec -i POD_NAME COMMAND Or: kubectl attach -i POD_NAME But there isn't a good way to send stdin to all containers in a pod, or to all containers spawned by a replication controller.
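As an illustration of how the exec form maps onto the pipeline in the question, the sketch below streams the local journal into one named container of the logging pod; the container name journal2gelf and the command run inside it are assumptions, so substitute whatever your image actually provides:

# -i keeps stdin open, -c selects the container, and everything after -- runs inside it
journalctl -o json -f | kubectl exec -i logging -c journal2gelf -- journal2gelf -d -p "$GRAYLOG_PORT"

Note that this still pipes from wherever kubectl runs rather than from a containers: entry in the replication controller config, which is exactly the hook Kubernetes does not provide.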
###Instruction: I have a kubernetes cluster running on coreos. I wish to run journal2gelf https://github.com/systemd/journal2gelf in a container in a pod I call logging. (I also have a fluentd container in this pod, which works great, I highly recommend it for streaming logs elsewhere). Is it possible to configure a pod to allow essentially this: journalctl -o json -f | docker run <my journal2gelf image> -d -p $GRAYLOG_PORT but within the containers: key in a replication controller config? And in general can kubernetes allow piping to a container? ### Response: This will let you send stdin to a container: kubectl exec -i POD_NAME COMMAND Or: kubectl attach -i POD_NAME But there isn't a good way to send stdin to all containers in a pod, or to all containers spawned by a replication controller.
I have a kubernetes cluster running on coreos. I wish to run journal2gelf https://github.com/systemd/journal2gelf in a container in a pod I call logging. (I also have a fluentd container in this pod, which works great, I highly recommend it for streaming logs elsewhere). Is it possible to configure a pod to allow essentially this: journalctl -o json -f | docker run <my journal2gelf image> -d -p $GRAYLOG_PORT but within the containers: key in a replication controller config? And in general can kubernetes allow piping to a container?