input
stringclasses
1 value
output
stringlengths
30
28.3k
text
stringlengths
151
31.8k
instruction
stringlengths
49
30.3k
I can't speak to the ProjectAtomic bits, nor to the KUBERNETES_PROVIDER env-var, since my experience has been with the CoreOS provisioner. I will talk about my experiences and see if that helps you dig a little more into your setup. Foremost, it is absolutely essential that the controller EC2 and the worker EC2 machines have the correct IAM role that will enable the machines to make AWS calls on behalf of your account. This includes things like provisioning ELBs and working with EBS Volumes (or attaching an EBS Volume to themselves, in the case of the worker). Without that, your cloud-config experience will go nowhere. I'm pretty sure the IAM payloads are defined somewhere other than those .go files, which are hard to read, but that's the quickest link I had handy to show what's needed. Fortunately, the answer to that question, and the one I'm about to talk about, are both centered around the apiserver and the controller-manager. The configuration of them and the logs they output. Both the apiserver and the controller-manager have an argument that points to an on-disk cloud configuration file that regrettably isn't documented anywhere except for the source. That Zone field is, in my experience, optional (just like they say in the comments). However, it was seeing the KubernetesClusterTag that led me to follow that field around in the code to see what it does. If your experience is anything like mine, you'll see in the docker logs of the controller-manager a bunch of error messages about how it created the ELB but could not find any subnets to attach to it; (that "docker logs" bit is presuming, of course, that ProjectAtomic also uses docker to run the Kubernetes daemons). Once I attached a Tag named KubernetesCluster and set every instance of the Tag to the same string (it can be anything, AFAIK), then the aws_loadbalancer was able to find the subnet in the VPC and it attached the Nodes to the ELB and everything was cool -- except for the part about it can only create Internet facing ELBs, right now. :-( Just for clarity: the aws.cfg contains a field named KubernetesClusterTag that allows you to redefine the Tag that Kubernetes will look for; without any value in that file, Kuberenetes will use the Tag name KubernetesCluster. I hope this helps you and I hope it helps others, because once Kubernetes is up, it's absolutely amazing.
###Instruction: I installed CentOS Atomic Host as operating system for kubernetes on AWS. Everything works fine, but it seems I missed something. I did not configure cloud provider and can not find any documentation on that. In this question I want to know: 1. What features cloud provider gives to kubernetes? 2. How to configure AWS cloud provider? UPD 1: external load balancer does not work; I have not tested awsElasticBlockStore yet, but I also suspect it does not work. UPD 2: Service details: $ kubectl get svc nginx-service-aws-lb -o yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2016-01-02T09:51:40Z name: nginx-service-aws-lb namespace: default resourceVersion: "74153" selfLink: /api/v1/namespaces/default/services/nginx-service-aws-lb uid: 6c28b718-b136-11e5-9bda-06c2feb29b0d spec: clusterIP: 10.254.172.185 ports: - name: http-proxy-protocol nodePort: 31385 port: 8080 protocol: TCP targetPort: 8080 - name: https-proxy-protocol nodePort: 31370 port: 8443 protocol: TCP targetPort: 8443 selector: app: nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: {} ### Response: I can't speak to the ProjectAtomic bits, nor to the KUBERNETES_PROVIDER env-var, since my experience has been with the CoreOS provisioner. I will talk about my experiences and see if that helps you dig a little more into your setup. Foremost, it is absolutely essential that the controller EC2 and the worker EC2 machines have the correct IAM role that will enable the machines to make AWS calls on behalf of your account. This includes things like provisioning ELBs and working with EBS Volumes (or attaching an EBS Volume to themselves, in the case of the worker). Without that, your cloud-config experience will go nowhere. I'm pretty sure the IAM payloads are defined somewhere other than those .go files, which are hard to read, but that's the quickest link I had handy to show what's needed. Fortunately, the answer to that question, and the one I'm about to talk about, are both centered around the apiserver and the controller-manager. The configuration of them and the logs they output. Both the apiserver and the controller-manager have an argument that points to an on-disk cloud configuration file that regrettably isn't documented anywhere except for the source. That Zone field is, in my experience, optional (just like they say in the comments). However, it was seeing the KubernetesClusterTag that led me to follow that field around in the code to see what it does. If your experience is anything like mine, you'll see in the docker logs of the controller-manager a bunch of error messages about how it created the ELB but could not find any subnets to attach to it; (that "docker logs" bit is presuming, of course, that ProjectAtomic also uses docker to run the Kubernetes daemons). Once I attached a Tag named KubernetesCluster and set every instance of the Tag to the same string (it can be anything, AFAIK), then the aws_loadbalancer was able to find the subnet in the VPC and it attached the Nodes to the ELB and everything was cool -- except for the part about it can only create Internet facing ELBs, right now. :-( Just for clarity: the aws.cfg contains a field named KubernetesClusterTag that allows you to redefine the Tag that Kubernetes will look for; without any value in that file, Kuberenetes will use the Tag name KubernetesCluster. I hope this helps you and I hope it helps others, because once Kubernetes is up, it's absolutely amazing.
I installed CentOS Atomic Host as operating system for kubernetes on AWS. Everything works fine, but it seems I missed something. I did not configure cloud provider and can not find any documentation on that. In this question I want to know: 1. What features cloud provider gives to kubernetes? 2. How to configure AWS cloud provider? UPD 1: external load balancer does not work; I have not tested awsElasticBlockStore yet, but I also suspect it does not work. UPD 2: Service details: $ kubectl get svc nginx-service-aws-lb -o yaml apiVersion: v1 kind: Service metadata: creationTimestamp: 2016-01-02T09:51:40Z name: nginx-service-aws-lb namespace: default resourceVersion: "74153" selfLink: /api/v1/namespaces/default/services/nginx-service-aws-lb uid: 6c28b718-b136-11e5-9bda-06c2feb29b0d spec: clusterIP: 10.254.172.185 ports: - name: http-proxy-protocol nodePort: 31385 port: 8080 protocol: TCP targetPort: 8080 - name: https-proxy-protocol nodePort: 31370 port: 8443 protocol: TCP targetPort: 8443 selector: app: nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: {}
You can restart pods by simply deleting them: if they are linked to a Replication controller, the RC will take care of restarting them kubectl delete pod <your-pod-name> if you have a couple pods, it's easy enougth to copy/paste the pod names, but if you have many pods it can become cumbersome. So another way to delete pods and restart them is to scale the RC down to 0 instances and back up to the number you need. kubectl scale --replicas=0 rc <your-rc> kubectl scale --replicas=<n> rc <your-rc> By-the-way, you may also want to look at 'rolling-updates' to do this in a more production friendly manner, but that implies updating the RC config.
###Instruction: Is there a way to reload currently running pods created by replicationcontroller to reapply newly created services? Example: I have a running pods created by ReplicationController config file. I have deleted a service called mongo-svc and recreated it again using different port. Is there a way for the pod's env file to be updated with the new IP and ports from the new mongo-svc? ### Response: You can restart pods by simply deleting them: if they are linked to a Replication controller, the RC will take care of restarting them kubectl delete pod <your-pod-name> if you have a couple pods, it's easy enougth to copy/paste the pod names, but if you have many pods it can become cumbersome. So another way to delete pods and restart them is to scale the RC down to 0 instances and back up to the number you need. kubectl scale --replicas=0 rc <your-rc> kubectl scale --replicas=<n> rc <your-rc> By-the-way, you may also want to look at 'rolling-updates' to do this in a more production friendly manner, but that implies updating the RC config.
Is there a way to reload currently running pods created by replicationcontroller to reapply newly created services? Example: I have a running pods created by ReplicationController config file. I have deleted a service called mongo-svc and recreated it again using different port. Is there a way for the pod's env file to be updated with the new IP and ports from the new mongo-svc?
Here's a temporary fix: As pointed out in https://github.com/kubernetes/kubernetes/issues/20088#issuecomment-174930620, the root issue is a change in URL for an underlying dependency reference by salt-bootstrap. This is fixed in https://github.com/saltstack/salt-bootstrap/pull/738, but the fix has not made it to salt-bootstrap stable. We can use the develop version: cd cluster/vagrant sed -i "s/https:\/\/bootstrap.saltstack.com/https:\/\/bootstrap.saltstack.com\/develop/" provision-minion.sh sed -i "s/https:\/\/bootstrap.saltstack.com/https:\/\/bootstrap.saltstack.com\/develop/" provision-master.sh Tested on Kubernetes 1.1.4.
###Instruction: I am running my local kubernetes v1.1.4 cluster on Ubuntu 14.04 using vagrant 1.8.1 and virtualbox 4.3.28. Everything worked fine but since last friday 22.01.2016 i keep getting the same error when executing ./cluster/kube-up.sh ==> master: * INFO: sh -- Version 2015.11.09 ==> master: * INFO: System Information: ==> master: * INFO: CPU: GenuineIntel ==> master: * INFO: CPU Arch: x86_64 ==> master: * INFO: OS Name: Linux ==> master: * INFO: OS Version: 3.17.4-301.fc21.x86_64 ==> master: * INFO: Distribution: Fedora 21 ==> master: * INFO: Installing master ==> master: * INFO: Found function install_fedora_deps ==> master: * INFO: Found function install_fedora_stable ==> master: * INFO: Found function install_fedora_stable_post ==> master: * INFO: Found function install_fedora_restart_daemons ==> master: * INFO: Found function daemons_running ==> master: * INFO: Found function install_fedora_check_services ==> master: * INFO: Running install_fedora_deps() ==> master: which: no dnf in (/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin) ==> master: * INFO: Adding SaltStack's COPR repository ==> master: ==> master: ==> master: File contains no section headers. ==> master: file: file:///etc/yum.repos.d/saltstack-salt-fedora-21.repo, line: 1 ==> master: '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n' ==> master: * ERROR: Failed to run install_fedora_deps()!!! The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong. i ssh into master and this is the content of /etc/yum.repos.d/saltstack-salt-fedora-21.repo <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>302 Found</title> </head><body> <h1>Found</h1> <p>The document has moved <a href="https://copr.fedorainfracloud.org/coprs/saltstack/salt/repo/fedora-21/saltstack-salt-fedora-21.repo">here</a>.</p> <hr> <address>Apache/2.4.6 (Red Hat Enterprise Linux) Server at copr.fedoraproject.org Port 80</address> </body></html> I get the same error when running the latest v1.2.0-alpha.6. Is this an issue with the salt-fedora21 repo? Can anyone replicate the error? Edit: related issue can be found here https://github.com/kubernetes/kubernetes/issues/20088 ### Response: Here's a temporary fix: As pointed out in https://github.com/kubernetes/kubernetes/issues/20088#issuecomment-174930620, the root issue is a change in URL for an underlying dependency reference by salt-bootstrap. This is fixed in https://github.com/saltstack/salt-bootstrap/pull/738, but the fix has not made it to salt-bootstrap stable. We can use the develop version: cd cluster/vagrant sed -i "s/https:\/\/bootstrap.saltstack.com/https:\/\/bootstrap.saltstack.com\/develop/" provision-minion.sh sed -i "s/https:\/\/bootstrap.saltstack.com/https:\/\/bootstrap.saltstack.com\/develop/" provision-master.sh Tested on Kubernetes 1.1.4.
I am running my local kubernetes v1.1.4 cluster on Ubuntu 14.04 using vagrant 1.8.1 and virtualbox 4.3.28. Everything worked fine but since last friday 22.01.2016 i keep getting the same error when executing ./cluster/kube-up.sh ==> master: * INFO: sh -- Version 2015.11.09 ==> master: * INFO: System Information: ==> master: * INFO: CPU: GenuineIntel ==> master: * INFO: CPU Arch: x86_64 ==> master: * INFO: OS Name: Linux ==> master: * INFO: OS Version: 3.17.4-301.fc21.x86_64 ==> master: * INFO: Distribution: Fedora 21 ==> master: * INFO: Installing master ==> master: * INFO: Found function install_fedora_deps ==> master: * INFO: Found function install_fedora_stable ==> master: * INFO: Found function install_fedora_stable_post ==> master: * INFO: Found function install_fedora_restart_daemons ==> master: * INFO: Found function daemons_running ==> master: * INFO: Found function install_fedora_check_services ==> master: * INFO: Running install_fedora_deps() ==> master: which: no dnf in (/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin) ==> master: * INFO: Adding SaltStack's COPR repository ==> master: ==> master: ==> master: File contains no section headers. ==> master: file: file:///etc/yum.repos.d/saltstack-salt-fedora-21.repo, line: 1 ==> master: '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n' ==> master: * ERROR: Failed to run install_fedora_deps()!!! The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong. i ssh into master and this is the content of /etc/yum.repos.d/saltstack-salt-fedora-21.repo <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>302 Found</title> </head><body> <h1>Found</h1> <p>The document has moved <a href="https://copr.fedorainfracloud.org/coprs/saltstack/salt/repo/fedora-21/saltstack-salt-fedora-21.repo">here</a>.</p> <hr> <address>Apache/2.4.6 (Red Hat Enterprise Linux) Server at copr.fedoraproject.org Port 80</address> </body></html> I get the same error when running the latest v1.2.0-alpha.6. Is this an issue with the salt-fedora21 repo? Can anyone replicate the error? Edit: related issue can be found here https://github.com/kubernetes/kubernetes/issues/20088
After a few moments, the external IP of the load balancer is listed in the IP(s) column of the service Usually it's 1-2 minutes. You was waiting only 23seconds. Try to wait a few moments more and it'll be OK.
###Instruction: I am trying to create a "Hello Node" sample application in Google Container Engine, following this tutorial However even after running the command kubectl expose rc hello-node --type="LoadBalancer", it is not exposing an external-IP to access the port. vagrant@docker-host:~/node-app$ kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node:v1 --port=8080 replicationcontroller "hello-node" created vagrant@docker-host:~/node-app$ kubectl expose rc hello-node --type="LoadBalancer" service "hello-node" exposed vagrant@docker-host:~/node-app$ kubectl get services hello-node NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE hello-node 10.163.248.xxx 8080/TCP run=hello-node 14s vagrant@docker-host:~/node-app$ kubectl get services hello-node NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE hello-node 10.163.248.xxx 8080/TCP run=hello-node 23s ### Response: After a few moments, the external IP of the load balancer is listed in the IP(s) column of the service Usually it's 1-2 minutes. You was waiting only 23seconds. Try to wait a few moments more and it'll be OK.
I am trying to create a "Hello Node" sample application in Google Container Engine, following this tutorial However even after running the command kubectl expose rc hello-node --type="LoadBalancer", it is not exposing an external-IP to access the port. vagrant@docker-host:~/node-app$ kubectl run hello-node --image=gcr.io/${PROJECT_ID}/hello-node:v1 --port=8080 replicationcontroller "hello-node" created vagrant@docker-host:~/node-app$ kubectl expose rc hello-node --type="LoadBalancer" service "hello-node" exposed vagrant@docker-host:~/node-app$ kubectl get services hello-node NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE hello-node 10.163.248.xxx 8080/TCP run=hello-node 14s vagrant@docker-host:~/node-app$ kubectl get services hello-node NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE hello-node 10.163.248.xxx 8080/TCP run=hello-node 23s
Found it, for future reference, here's what you might want to do: package main import ( "os" "k8s.io/kubernetes/pkg/api/unversioned" "k8s.io/kubernetes/pkg/api/v1" "k8s.io/kubernetes/pkg/runtime/serializer/json" ) func main() { ns := &v1.Namespace{ TypeMeta: unversioned.TypeMeta{ Kind: "Namespace", APIVersion: "v1", }, ObjectMeta: v1.ObjectMeta{ Name: "test", }, } e := json.NewYAMLSerializer(json.DefaultMetaFactory, nil, nil) err := e.EncodeToStream(ns, os.Stdout) if err != nil { panic(err) } } This might not be the best practice (better suggestions welcome), but it does the job.
###Instruction: I'm looking into writing a tool that generates Kubernetes definitions programatically for our project. I've found that the API types in Kubernetes can be found in k8s.io/kubernetes/pkg/api. I would like to output YAML based on these types. Given an object like this: ns := &api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: "test", }, } What's the best way to generate the YAML output expected by kubectl create? ### Response: Found it, for future reference, here's what you might want to do: package main import ( "os" "k8s.io/kubernetes/pkg/api/unversioned" "k8s.io/kubernetes/pkg/api/v1" "k8s.io/kubernetes/pkg/runtime/serializer/json" ) func main() { ns := &v1.Namespace{ TypeMeta: unversioned.TypeMeta{ Kind: "Namespace", APIVersion: "v1", }, ObjectMeta: v1.ObjectMeta{ Name: "test", }, } e := json.NewYAMLSerializer(json.DefaultMetaFactory, nil, nil) err := e.EncodeToStream(ns, os.Stdout) if err != nil { panic(err) } } This might not be the best practice (better suggestions welcome), but it does the job.
I'm looking into writing a tool that generates Kubernetes definitions programatically for our project. I've found that the API types in Kubernetes can be found in k8s.io/kubernetes/pkg/api. I would like to output YAML based on these types. Given an object like this: ns := &api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: "test", }, } What's the best way to generate the YAML output expected by kubectl create?
So in a way, this is an apples to oranges to grapes comparison. CoreOS is an operating system, Kubernetes is open source container orchestration software, and Triton is a PaaS. So CoreOS, it's a minimal operating system with a focus on security. I've been using this in production for several months now at work, haven't found a reason to not like it yet. It does not have a package manager, but it comes preinstalled with both rkt and Docker. You can run both docker and rkt just fine on there. It also comes with Etcd, which is a distributed key-value store, and it happens that kubernetes is backed by it. It also comes with Flannel which is a networking program for networking between containers and machines in your cluster. CoreOS also ships with Fleet, which you can think of like a distributed version of systemd, which systemd is CoreOS' init system. And as of recently, CoreOS ships with Kubernetes itself. Kubernetes is a container orchestration software that is made up of a few main components. There are masters, which use the APIServer, controller and scheduler to manage the cluster. And there are nodes which use the "kubelet" and kube-proxy". Through these components, Kubernetes schedules and manages where to run your containers on your cluster. As of v1.1 Kubernetes also can auto-scale your containers. I also have been using this in production as long as I have been using CoreOS, and the two go together very well. Triton is Joyent's Paas for Docker. Think of it like Joyent's traditional service, but instead of BSD jails (similar concept to Linux containers) and at one point Solaris Zones (could be wrong on that one, that was just something I heard from word of mouth), you're using Docker containers. This does abstract away a lot of the work you'd have to do with setting up CoreOS and Kubernetes, that said there are services that'll do the same and use kubernetes under the hood. Now I haven't used Triton like I have used Kubernetes and CoreOS, but it definitely seems to be quite well engineered. Ultimately, I'd say it's about your needs. Do you need flexibility and visibility, then something like CoreOS makes sense, particularly with Kubernetes. If you want that abstracted away and have these things handled for you, I'd say Triton makes sense.
###Instruction: Recently some alternatives for running docker containers or even the app container have developed. I know that there is rkt from coreos (https://coreos.com/blog/rocket/) and triton from joyent (https://www.joyent.com/) How do these two approaches compare? Edit Maybe I should re-phrase my question after these good comments from @ Lakatos Gyula How does Triton compare to coreos or kubernetes for running docker-containers at scale? ### Response: So in a way, this is an apples to oranges to grapes comparison. CoreOS is an operating system, Kubernetes is open source container orchestration software, and Triton is a PaaS. So CoreOS, it's a minimal operating system with a focus on security. I've been using this in production for several months now at work, haven't found a reason to not like it yet. It does not have a package manager, but it comes preinstalled with both rkt and Docker. You can run both docker and rkt just fine on there. It also comes with Etcd, which is a distributed key-value store, and it happens that kubernetes is backed by it. It also comes with Flannel which is a networking program for networking between containers and machines in your cluster. CoreOS also ships with Fleet, which you can think of like a distributed version of systemd, which systemd is CoreOS' init system. And as of recently, CoreOS ships with Kubernetes itself. Kubernetes is a container orchestration software that is made up of a few main components. There are masters, which use the APIServer, controller and scheduler to manage the cluster. And there are nodes which use the "kubelet" and kube-proxy". Through these components, Kubernetes schedules and manages where to run your containers on your cluster. As of v1.1 Kubernetes also can auto-scale your containers. I also have been using this in production as long as I have been using CoreOS, and the two go together very well. Triton is Joyent's Paas for Docker. Think of it like Joyent's traditional service, but instead of BSD jails (similar concept to Linux containers) and at one point Solaris Zones (could be wrong on that one, that was just something I heard from word of mouth), you're using Docker containers. This does abstract away a lot of the work you'd have to do with setting up CoreOS and Kubernetes, that said there are services that'll do the same and use kubernetes under the hood. Now I haven't used Triton like I have used Kubernetes and CoreOS, but it definitely seems to be quite well engineered. Ultimately, I'd say it's about your needs. Do you need flexibility and visibility, then something like CoreOS makes sense, particularly with Kubernetes. If you want that abstracted away and have these things handled for you, I'd say Triton makes sense.
Recently some alternatives for running docker containers or even the app container have developed. I know that there is rkt from coreos (https://coreos.com/blog/rocket/) and triton from joyent (https://www.joyent.com/) How do these two approaches compare? Edit Maybe I should re-phrase my question after these good comments from @ Lakatos Gyula How does Triton compare to coreos or kubernetes for running docker-containers at scale?
Looking at the error you posted it seems like the traffic can not be routed from your master to the Docker subnet of your node. The easiest way to validate this is to open a shell on your master and perform a request on your podIP:daemonPort: curl -I http://192.168.182.132:3000 Each node in your cluster should be able to communicate with every other node, and every Docker subnet should be routable. For most deployments you will need to setup an extra network fabric to make this happen, like flannel or Weave. Take a look at Getting started from Scratch >> Network Something else is funny. The cluster IP used by your service (192.168.182.76) and the pod IP of the endpoint (192.168.182.132) seem to be in the same subnet. However you need 3 different subnets: one for the hosts one for the Docker bridges (--bip flag of Docker) one for the service (--service-cluster-ip-range= of the API server)
###Instruction: So I have a service like as follow: { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "monitoring-grafana", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/services/monitoring-grafana", "uid": "be0f72b2-c482-11e5-a22c-fa163ebc1085", "resourceVersion": "143360", "creationTimestamp": "2016-01-26T23:15:51Z", "labels": { "kubernetes.io/cluster-service": "true", "kubernetes.io/name": "monitoring-grafana" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3000, "nodePort": 0 } ], "selector": { "name": "influxGrafana" }, "clusterIP": "192.168.182.76", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } However, whenever I try to access it through the proxy API, it always fails with this response. http://10.32.10.44:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ Error: 'dial tcp 192.168.182.132:3000: getsockopt: no route to host' Trying to reach: 'http://192.168.182.132:3000/' It happens on all of my services also, not just the one posted. What could be going wrong? Is something not installed? ### Response: Looking at the error you posted it seems like the traffic can not be routed from your master to the Docker subnet of your node. The easiest way to validate this is to open a shell on your master and perform a request on your podIP:daemonPort: curl -I http://192.168.182.132:3000 Each node in your cluster should be able to communicate with every other node, and every Docker subnet should be routable. For most deployments you will need to setup an extra network fabric to make this happen, like flannel or Weave. Take a look at Getting started from Scratch >> Network Something else is funny. The cluster IP used by your service (192.168.182.76) and the pod IP of the endpoint (192.168.182.132) seem to be in the same subnet. However you need 3 different subnets: one for the hosts one for the Docker bridges (--bip flag of Docker) one for the service (--service-cluster-ip-range= of the API server)
So I have a service like as follow: { "kind": "Service", "apiVersion": "v1", "metadata": { "name": "monitoring-grafana", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/services/monitoring-grafana", "uid": "be0f72b2-c482-11e5-a22c-fa163ebc1085", "resourceVersion": "143360", "creationTimestamp": "2016-01-26T23:15:51Z", "labels": { "kubernetes.io/cluster-service": "true", "kubernetes.io/name": "monitoring-grafana" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 3000, "nodePort": 0 } ], "selector": { "name": "influxGrafana" }, "clusterIP": "192.168.182.76", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } However, whenever I try to access it through the proxy API, it always fails with this response. http://10.32.10.44:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ Error: 'dial tcp 192.168.182.132:3000: getsockopt: no route to host' Trying to reach: 'http://192.168.182.132:3000/' It happens on all of my services also, not just the one posted. What could be going wrong? Is something not installed?
Currently we do not have a passthrough to this option in the API, though you may have some success with using a hostpath volume to mount a device file in.
###Instruction: I am very new to Kubernetes and docker. Am trying to find the config equivalent of --device option in docker run. This option in docker is used to add a device on the host to the container. Is there a equivalent in kubernetes which can be added to the yaml file? Thanks ### Response: Currently we do not have a passthrough to this option in the API, though you may have some success with using a hostpath volume to mount a device file in.
I am very new to Kubernetes and docker. Am trying to find the config equivalent of --device option in docker run. This option in docker is used to add a device on the host to the container. Is there a equivalent in kubernetes which can be added to the yaml file? Thanks
Before attempting to access your resources, kubectl first makes requests to the server's /version and /api endpoints to confirm compatibility and negotiate API version. In ABAC, the /version and /api endpoints are considered "nonResourcePaths", but those also require authorization. You can add a rule to your ABAC file allowing all users readonly access to nonResourcePaths as follows: {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}} From there, you can make it more restrictive if you need to.
###Instruction: I want to set multiple accounts to only have access only to owned namespace, we try with authorization mode ABAC but we get when use kubectl "error: couldn't read version from server: the server does not allow access to the requested ressource" and it seems to be a bug. Is theire other way to do it ? ### Response: Before attempting to access your resources, kubectl first makes requests to the server's /version and /api endpoints to confirm compatibility and negotiate API version. In ABAC, the /version and /api endpoints are considered "nonResourcePaths", but those also require authorization. You can add a rule to your ABAC file allowing all users readonly access to nonResourcePaths as follows: {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}} From there, you can make it more restrictive if you need to.
I want to set multiple accounts to only have access only to owned namespace, we try with authorization mode ABAC but we get when use kubectl "error: couldn't read version from server: the server does not allow access to the requested ressource" and it seems to be a bug. Is theire other way to do it ?
It isn't documented anywhere (that I know of). I always end up reading the code to find out what endpoints exist. Also note that unlike the API in the apiserver, there are no guarantees that the kubelet API will be stable between versions. Over time I expect that it will become properly versioned (and probably swaggerfied) and at that point we will provide documentation and a commitment to backward compatibility.
###Instruction: I know that the Kubelet process on each Kubernetes node exposes a simple API server, but I cannot find any documentation for it. Does someone know of a resource that has it? ### Response: It isn't documented anywhere (that I know of). I always end up reading the code to find out what endpoints exist. Also note that unlike the API in the apiserver, there are no guarantees that the kubelet API will be stable between versions. Over time I expect that it will become properly versioned (and probably swaggerfied) and at that point we will provide documentation and a commitment to backward compatibility.
I know that the Kubelet process on each Kubernetes node exposes a simple API server, but I cannot find any documentation for it. Does someone know of a resource that has it?
Your problem with the vagrant setup is currently an open issue https://github.com/kubernetes/kubernetes/issues/20088#issuecomment-174528066 . As a temporary fix you can do this to get it to work: https://stackoverflow.com/a/35015586/5834774
###Instruction: Following the instructions on this page - http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#setup, I'm getting the following error when trying to get Kubernetes up on a Mac running El Capitan, using Vagrant and VirtualBox. Where am I going wrong? OS X El Capitan 10.11.2 (15C50) Vagrant 1.8.1 VirtualBox 5.0.12 r104815 and trying to get a cluster up by executing these steps: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash The output below is a capture of having executed these two commands. I'm following these instructions [link]http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#prerequisites Unpacking kubernetes release v1.1.4 Creating a kubernetes on vagrant... ... Starting cluster using provider: vagrant ... calling verify-prereqs ... calling kube-up Bringing machine 'master' up with 'virtualbox' provider... Bringing machine 'minion-1' up with 'virtualbox' provider... ==> master: VirtualBox VM is already running. ==> minion-1: Importing base box 'kube-fedora21'... ==> minion-1: Matching MAC address for NAT networking... ==> minion-1: Setting the name of the VM: kubernetes_minion-1_1454028157203_24352 ==> minion-1: Fixed port collision for 22 => 2222. Now on port 2200. ==> minion-1: Clearing any previously set network interfaces... ==> minion-1: Preparing network interfaces based on configuration... minion-1: Adapter 1: nat minion-1: Adapter 2: hostonly ==> minion-1: Forwarding ports... minion-1: 22 (guest) => 2200 (host) (adapter 1) ==> minion-1: Running 'pre-boot' VM customizations... ==> minion-1: Booting VM... ==> minion-1: Waiting for machine to boot. This may take a few minutes... minion-1: SSH address: 127.0.0.1:2200 minion-1: SSH username: vagrant minion-1: SSH auth method: private key minion-1: minion-1: Vagrant insecure key detected. Vagrant will automatically replace minion-1: this with a newly generated keypair for better security. minion-1: minion-1: Inserting generated public key within guest... minion-1: Removing insecure key from the guest if it's present... minion-1: Key inserted! Disconnecting and reconnecting using new SSH key... ==> minion-1: Machine booted and ready! ==> minion-1: Checking for guest additions in VM... ==> minion-1: Configuring and enabling network interfaces... ==> minion-1: Mounting shared folders... minion-1: /vagrant => /Users/lee/kubernetes ==> minion-1: Running provisioner: shell... 
minion-1: Running: /var/folders/cb/lpcc0zbs441777bwsl1zrcbh0000gn/T/vagrant-shell20160128-14233-gm7iq9.sh ==> minion-1: Adding kubernetes-master to hosts file ==> minion-1: Provisioning network on minion ==> minion-1: Resolving Dependencies ==> minion-1: --> Running transaction check ==> minion-1: ---> Package flannel.x86_64 0:0.5.0-3.fc21 will be installed ==> minion-1: --> Finished Dependency Resolution ==> minion-1: ==> minion-1: Dependencies Resolved ==> minion-1: ==> minion-1: ================================================================================ ==> minion-1: Package Arch Version Repository Size ==> minion-1: ================================================================================ ==> minion-1: Installing: ==> minion-1: flannel x86_64 0.5.0-3.fc21 updates 1.6 M ==> minion-1: ==> minion-1: Transaction Summary ==> minion-1: ================================================================================ ==> minion-1: Install 1 Package ==> minion-1: Total download size: 1.6 M ==> minion-1: Installed size: 7.0 M ==> minion-1: Downloading packages: ==> minion-1: warning: ==> minion-1: /var/cache/yum/x86_64/21/updates/packages/flannel-0.5.0-3.fc21.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 95a43f54: NOKEY ==> minion-1: Public key for flannel-0.5.0-3.fc21.x86_64.rpm is not installed ==> minion-1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64 ==> minion-1: Importing GPG key 0x95A43F54: ==> minion-1: Userid : "Fedora (21) <fedora@fedoraproject.org>" ==> minion-1: Fingerprint: 6596 b8fb abda 5227 a9c5 b59e 89ad 4e87 95a4 3f54 ==> minion-1: Package : fedora-repos-21-2.noarch (@anaconda) ==> minion-1: From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64 ==> minion-1: Running transaction check ==> minion-1: Running transaction test ==> minion-1: Transaction test succeeded ==> minion-1: Running transaction (shutdown inhibited) ==> minion-1: Installing : flannel-0.5.0-3.fc21.x86_64 1/1 ==> minion-1: ==> minion-1: Verifying : flannel-0.5.0-3.fc21.x86_64 1/1 ==> minion-1: ==> minion-1: ==> minion-1: Installed: ==> minion-1: flannel.x86_64 0:0.5.0-3.fc21 ==> minion-1: Complete! ==> minion-1: Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service. ==> minion-1: Network configuration verified ==> minion-1: Disable swap memory to ensure proper QoS ==> minion-1: * INFO: sh -- Version 2015.11.09 ==> minion-1: ==> minion-1: * INFO: System Information: ==> minion-1: * INFO: CPU: GenuineIntel ==> minion-1: * INFO: CPU Arch: x86_64 ==> minion-1: * INFO: OS Name: Linux ==> minion-1: * INFO: OS Version: 3.17.4-301.fc21.x86_64 ==> minion-1: * INFO: Distribution: Fedora 21 ==> minion-1: * INFO: Installing minion ==> minion-1: * INFO: Found function install_fedora_deps ==> minion-1: * INFO: Found function install_fedora_stable ==> minion-1: * INFO: Found function install_fedora_stable_post ==> minion-1: * INFO: Found function install_fedora_restart_daemons ==> minion-1: * INFO: Found function daemons_running ==> minion-1: * INFO: Found function install_fedora_check_services ==> minion-1: * INFO: Running install_fedora_deps() ==> minion-1: which: no dnf in (/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin) ==> minion-1: * INFO: Adding SaltStack's COPR repository ==> minion-1: ==> minion-1: ==> minion-1: File contains no section headers. 
==> minion-1: file: file:///etc/yum.repos.d/saltstack-salt-fedora-21.repo, line: 1 ==> minion-1: '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n' ==> minion-1: * ERROR: Failed to run install_fedora_deps()!!! The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong. ### Response: Your problem with the vagrant setup is currently an open issue https://github.com/kubernetes/kubernetes/issues/20088#issuecomment-174528066 . As a temporary fix you can do this to get it to work: https://stackoverflow.com/a/35015586/5834774
Following the instructions on this page - http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#setup, I'm getting the following error when trying to get Kubernetes up on a Mac running El Capitan, using Vagrant and VirtualBox. Where am I going wrong? OS X El Capitan 10.11.2 (15C50) Vagrant 1.8.1 VirtualBox 5.0.12 r104815 and trying to get a cluster up by executing these steps: export KUBERNETES_PROVIDER=vagrant curl -sS https://get.k8s.io | bash The output below is a capture of having executed these two commands. I'm following these instructions [link]http://kubernetes.io/v1.1/docs/getting-started-guides/vagrant.html#prerequisites Unpacking kubernetes release v1.1.4 Creating a kubernetes on vagrant... ... Starting cluster using provider: vagrant ... calling verify-prereqs ... calling kube-up Bringing machine 'master' up with 'virtualbox' provider... Bringing machine 'minion-1' up with 'virtualbox' provider... ==> master: VirtualBox VM is already running. ==> minion-1: Importing base box 'kube-fedora21'... ==> minion-1: Matching MAC address for NAT networking... ==> minion-1: Setting the name of the VM: kubernetes_minion-1_1454028157203_24352 ==> minion-1: Fixed port collision for 22 => 2222. Now on port 2200. ==> minion-1: Clearing any previously set network interfaces... ==> minion-1: Preparing network interfaces based on configuration... minion-1: Adapter 1: nat minion-1: Adapter 2: hostonly ==> minion-1: Forwarding ports... minion-1: 22 (guest) => 2200 (host) (adapter 1) ==> minion-1: Running 'pre-boot' VM customizations... ==> minion-1: Booting VM... ==> minion-1: Waiting for machine to boot. This may take a few minutes... minion-1: SSH address: 127.0.0.1:2200 minion-1: SSH username: vagrant minion-1: SSH auth method: private key minion-1: minion-1: Vagrant insecure key detected. Vagrant will automatically replace minion-1: this with a newly generated keypair for better security. minion-1: minion-1: Inserting generated public key within guest... minion-1: Removing insecure key from the guest if it's present... minion-1: Key inserted! Disconnecting and reconnecting using new SSH key... ==> minion-1: Machine booted and ready! ==> minion-1: Checking for guest additions in VM... ==> minion-1: Configuring and enabling network interfaces... ==> minion-1: Mounting shared folders... minion-1: /vagrant => /Users/lee/kubernetes ==> minion-1: Running provisioner: shell... 
minion-1: Running: /var/folders/cb/lpcc0zbs441777bwsl1zrcbh0000gn/T/vagrant-shell20160128-14233-gm7iq9.sh ==> minion-1: Adding kubernetes-master to hosts file ==> minion-1: Provisioning network on minion ==> minion-1: Resolving Dependencies ==> minion-1: --> Running transaction check ==> minion-1: ---> Package flannel.x86_64 0:0.5.0-3.fc21 will be installed ==> minion-1: --> Finished Dependency Resolution ==> minion-1: ==> minion-1: Dependencies Resolved ==> minion-1: ==> minion-1: ================================================================================ ==> minion-1: Package Arch Version Repository Size ==> minion-1: ================================================================================ ==> minion-1: Installing: ==> minion-1: flannel x86_64 0.5.0-3.fc21 updates 1.6 M ==> minion-1: ==> minion-1: Transaction Summary ==> minion-1: ================================================================================ ==> minion-1: Install 1 Package ==> minion-1: Total download size: 1.6 M ==> minion-1: Installed size: 7.0 M ==> minion-1: Downloading packages: ==> minion-1: warning: ==> minion-1: /var/cache/yum/x86_64/21/updates/packages/flannel-0.5.0-3.fc21.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 95a43f54: NOKEY ==> minion-1: Public key for flannel-0.5.0-3.fc21.x86_64.rpm is not installed ==> minion-1: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64 ==> minion-1: Importing GPG key 0x95A43F54: ==> minion-1: Userid : "Fedora (21) <fedora@fedoraproject.org>" ==> minion-1: Fingerprint: 6596 b8fb abda 5227 a9c5 b59e 89ad 4e87 95a4 3f54 ==> minion-1: Package : fedora-repos-21-2.noarch (@anaconda) ==> minion-1: From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-21-x86_64 ==> minion-1: Running transaction check ==> minion-1: Running transaction test ==> minion-1: Transaction test succeeded ==> minion-1: Running transaction (shutdown inhibited) ==> minion-1: Installing : flannel-0.5.0-3.fc21.x86_64 1/1 ==> minion-1: ==> minion-1: Verifying : flannel-0.5.0-3.fc21.x86_64 1/1 ==> minion-1: ==> minion-1: ==> minion-1: Installed: ==> minion-1: flannel.x86_64 0:0.5.0-3.fc21 ==> minion-1: Complete! ==> minion-1: Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service. ==> minion-1: Network configuration verified ==> minion-1: Disable swap memory to ensure proper QoS ==> minion-1: * INFO: sh -- Version 2015.11.09 ==> minion-1: ==> minion-1: * INFO: System Information: ==> minion-1: * INFO: CPU: GenuineIntel ==> minion-1: * INFO: CPU Arch: x86_64 ==> minion-1: * INFO: OS Name: Linux ==> minion-1: * INFO: OS Version: 3.17.4-301.fc21.x86_64 ==> minion-1: * INFO: Distribution: Fedora 21 ==> minion-1: * INFO: Installing minion ==> minion-1: * INFO: Found function install_fedora_deps ==> minion-1: * INFO: Found function install_fedora_stable ==> minion-1: * INFO: Found function install_fedora_stable_post ==> minion-1: * INFO: Found function install_fedora_restart_daemons ==> minion-1: * INFO: Found function daemons_running ==> minion-1: * INFO: Found function install_fedora_check_services ==> minion-1: * INFO: Running install_fedora_deps() ==> minion-1: which: no dnf in (/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin) ==> minion-1: * INFO: Adding SaltStack's COPR repository ==> minion-1: ==> minion-1: ==> minion-1: File contains no section headers. 
==> minion-1: file: file:///etc/yum.repos.d/saltstack-salt-fedora-21.repo, line: 1 ==> minion-1: '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n' ==> minion-1: * ERROR: Failed to run install_fedora_deps()!!! The SSH command responded with a non-zero exit status. Vagrant assumes that this means the command failed. The output for this command should be in the log above. Please read the output to determine what went wrong.
These are trying to talk to the insecure version of the API, which shouldn't work between machines. That will only work on the master. Additionally, the master isn't set up to accept work (register_node=false), so it is not expected to report back its status. The key piece of info we're missing, what machine did that log come from? Did you set the MASTER_HOST= parameter correctly? The address of the master node. In most cases this will be the publicly routable IP of the node. Worker nodes must be able to reach the master node(s) via this address on port 443. Also, note this section of the docs: Note that the kubelet running on a master node may log repeated attempts to post its status to the API server. These warnings are expected behavior and can be ignored. Future Kubernetes releases plan to handle this common deployment consideration more gracefully.
###Instruction: I am trying to setup Kubernetes cluster using the instruction at https://coreos.com/kubernetes/docs/latest/getting-started.html. I am in the step 2 (Deploy master) where when I start the master service, the master service is in active status but it cannot communicate with the API server. Also, there are 6 containers started but the logs are empty. Please find the kubelet log below: Jan 26 07:54:18 kubernetes-1.novalocal systemd[1]: Started kubelet.service. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214551 1115 server.go:585] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214631 1115 server.go:547] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.217269 1115 plugins.go:71] No cloud provider specified. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.219217 1115 manager.go:128] cAdvisor running in container: "/system.slice/kubelet.service" Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.672952 1115 fs.go:108] Filesystem partitions: map[/dev/vda9:{mountpoint:/ major:254 minor:9 fsType: blockSize:0} /dev/vda3:{mountpoint:/usr major:254 minor:3 fsType: blockSize:0} /dev/vda6:{mountpoi Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.856238 1115 manager.go:163] Machine: {NumCores:2 CpuFrequency:1999999 MemoryCapacity:4149022720 MachineID:5a493caa9327449cabd050ac6cd2e065 SystemUUID:5A493CAA-9327-449C-ABD0-50AC6CD2E065 BootID:541d Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.858067 1115 manager.go:169] Version: {KernelVersion:4.3.3-coreos-r2 ContainerOsVersion:CoreOS 899.5.0 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:} Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862564 1115 server.go:798] Adding manifest file: /etc/kubernetes/manifests Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862655 1115 server.go:808] Watching apiserver Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:21.165506 1115 plugins.go:56] Registering credential provider: .dockercfg Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.171563 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.172329 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.173114 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection refused Also, the following are the containers launched. 
2bf275350996 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-se" 26 minutes ago Up 26 minutes k8s_controller-manager-elector.5b0f7cea_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_1f370f8c c64042286744 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-se" 26 minutes ago Up 26 minutes k8s_scheduler-elector.bc3d71be_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_c9ecb387 81bd74d0396a gcr.io/google_containers/hyperkube:v1.1.2 "/hyperkube proxy --m" 26 minutes ago Up 26 minutes k8s_kube-proxy.176f5569_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_840d8228 39494ed8e814 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_36b73b1d 632dc0a2f612 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-apiserver-192.168.111.32_kube-system_86819bf93f678db0ee778b8c8bb658dc_815c6627 361b297b37f9 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_7a6182ed ### Response: These are trying to talk to the insecure version of the API, which shouldn't work between machines. That will only work on the master. Additionally, the master isn't set up to accept work (register_node=false), so it is not expected to report back its status. The key piece of info we're missing, what machine did that log come from? Did you set the MASTER_HOST= parameter correctly? The address of the master node. In most cases this will be the publicly routable IP of the node. Worker nodes must be able to reach the master node(s) via this address on port 443. Also, note this section of the docs: Note that the kubelet running on a master node may log repeated attempts to post its status to the API server. These warnings are expected behavior and can be ignored. Future Kubernetes releases plan to handle this common deployment consideration more gracefully.
I am trying to setup Kubernetes cluster using the instruction at https://coreos.com/kubernetes/docs/latest/getting-started.html. I am in the step 2 (Deploy master) where when I start the master service, the master service is in active status but it cannot communicate with the API server. Also, there are 6 containers started but the logs are empty. Please find the kubelet log below: Jan 26 07:54:18 kubernetes-1.novalocal systemd[1]: Started kubelet.service. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214551 1115 server.go:585] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: W0126 07:54:20.214631 1115 server.go:547] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.217269 1115 plugins.go:71] No cloud provider specified. Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.219217 1115 manager.go:128] cAdvisor running in container: "/system.slice/kubelet.service" Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.672952 1115 fs.go:108] Filesystem partitions: map[/dev/vda9:{mountpoint:/ major:254 minor:9 fsType: blockSize:0} /dev/vda3:{mountpoint:/usr major:254 minor:3 fsType: blockSize:0} /dev/vda6:{mountpoi Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.856238 1115 manager.go:163] Machine: {NumCores:2 CpuFrequency:1999999 MemoryCapacity:4149022720 MachineID:5a493caa9327449cabd050ac6cd2e065 SystemUUID:5A493CAA-9327-449C-ABD0-50AC6CD2E065 BootID:541d Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.858067 1115 manager.go:169] Version: {KernelVersion:4.3.3-coreos-r2 ContainerOsVersion:CoreOS 899.5.0 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:} Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862564 1115 server.go:798] Adding manifest file: /etc/kubernetes/manifests Jan 26 07:54:20 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:20.862655 1115 server.go:808] Watching apiserver Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: I0126 07:54:21.165506 1115 plugins.go:56] Registering credential provider: .dockercfg Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.171563 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.172329 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection r Jan 26 07:54:21 kubernetes-1.novalocal kubelet[1115]: E0126 07:54:21.173114 1115 kubelet.go:2284] Error updating node status, will retry: error getting node "192.168.111.32": Get http://127.0.0.1:8080/api/v1/nodes/192.168.111.32: dial tcp 127.0.0.1:8080: connection refused Also, the following are the containers launched. 
2bf275350996 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-se" 26 minutes ago Up 26 minutes k8s_controller-manager-elector.5b0f7cea_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_1f370f8c c64042286744 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-se" 26 minutes ago Up 26 minutes k8s_scheduler-elector.bc3d71be_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_c9ecb387 81bd74d0396a gcr.io/google_containers/hyperkube:v1.1.2 "/hyperkube proxy --m" 26 minutes ago Up 26 minutes k8s_kube-proxy.176f5569_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_840d8228 39494ed8e814 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-podmaster-192.168.111.32_kube-system_3b8350635fe89ab366063da0be8969fd_36b73b1d 632dc0a2f612 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-apiserver-192.168.111.32_kube-system_86819bf93f678db0ee778b8c8bb658dc_815c6627 361b297b37f9 gcr.io/google_containers/pause:0.8.0 "/pause" 27 minutes ago Up 27 minutes k8s_POD.6d00e006_kube-proxy-192.168.111.32_kube-system_8a987aa8c76c4d76bd80ccff5b65ffea_7a6182ed
It sounds like you don't have the appropriate IAM instance profile set on your master VM. The kube-up script for AWS creates a role and associated policy that is attached to the master VM when it is created. Having the IAM policy attached should give you the credentials necessary to make API calls into AWS.
###Instruction: I'm setting up a Kubernetes cluster on AWS and as part of the configuration for say the API Server, I provide the --cloud-provider=aws setting. Once it starts up, however, I see in the logs that it complains about not having AWS credentials: NoCredentialProviders: no valid providers in chain After some searching, it seems that this issue was resolved for most people by using the "kube-up" script. However, for those who are not using the script to set up their cluster, how do we provide Kubernetes with AWS credentials? ### Response: It sounds like you don't have the appropriate IAM instance profile set on your master VM. The kube-up script for AWS creates a role and associated policy that is attached to the master VM when it is created. Having the IAM policy attached should give you the credentials necessary to make API calls into AWS.
I'm setting up a Kubernetes cluster on AWS and as part of the configuration for say the API Server, I provide the --cloud-provider=aws setting. Once it starts up, however, I see in the logs that it complains about not having AWS credentials: NoCredentialProviders: no valid providers in chain After some searching, it seems that this issue was resolved for most people by using the "kube-up" script. However, for those who are not using the script to set up their cluster, how do we provide Kubernetes with AWS credentials?
It's good practice to first check the docs site before posting a question. Kubernetes does handle container GC; quoting from the docs page: We also allow users to customize garbage collection policy, via the following three flags.

- minimum-container-ttl-duration: minimum age for a finished container before it is garbage collected. Default is 1 minute.
- maximum-dead-containers-per-container: maximum number of old instances to retain per container. Default is 2.
- maximum-dead-containers: maximum number of old instances of containers to retain globally. Default is 100.
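For example, a kubelet started with tighter GC settings might look like this (a sketch; the defaults quoted above apply whenever a flag is omitted):

kubelet --minimum-container-ttl-duration=30s \
        --maximum-dead-containers-per-container=1 \
        --maximum-dead-containers=50 \
        ...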
###Instruction: Is Kubernetes taking care of cleaning up failed/stopped containers at some point? It doesn't seem like it and it can pile up. I know how to clean them up manually but i wonder if there is a setting to automate it. ### Response: It's good practice to first check the docs site before posting a question. Kubernetes does handle container GC, quoting from the docs page: We also allow users to customize garbage collection policy, basically via following three flags. minimum-container-ttl-duration, minimum age for a finished container before it is garbage collected. Default is 1 minute. maximum-dead-containers-per-container, maximum number of old instances to retain per container. Default is 2. maximum-dead-containers, maximum number of old instances of containers to retain globally. Default is 100.
Is Kubernetes taking care of cleaning up failed/stopped containers at some point? It doesn't seem like it and it can pile up. I know how to clean them up manually but i wonder if there is a setting to automate it.
Most probably the docker service failed to start; it's the only dependency of the kubelet service. You can confirm this with the commands below.
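A quick diagnostic sketch using standard systemd commands on the minion:

systemctl status docker
journalctl -u docker -n 50 --no-pager
systemctl list-dependencies kubelet

If docker is dead, fixing it and then restarting kubelet should clear the "A dependency job for kubelet.service failed" error.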
###Instruction: I just want to know if any of you guys have ever followed this tutorial before and successed. I don't know why but whenever i restart and enable services on MINION side, it always failed with error "A Dependency job for kubelet.service failed" and of course it led to kubelet.service started failed. It didn't work no matter how many times i tried. So anyone has any idea? http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services ### Response: Most probably docker service failed to start. It's the only dependency to kubelet service.
I just want to know if any of you guys have ever followed this tutorial before and successed. I don't know why but whenever i restart and enable services on MINION side, it always failed with error "A Dependency job for kubelet.service failed" and of course it led to kubelet.service started failed. It didn't work no matter how many times i tried. So anyone has any idea? http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services
From the comments, it sounds like you got this to work. Traffic to service IPs is forwarded via kube-proxy, either through a user-space proxy or through iptables rules, depending on the Kubernetes release version and configuration. You may want to look at the iptables rules for the node corresponding to that container to see if that could be the source of the problem.
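If you want to inspect that path yourself, something like the following on the node should surface the rules kube-proxy wrote for your service (chain names differ between the userspace and iptables proxy modes, so treat these greps as a starting point, using the service name and cluster IP from your question):

sudo iptables-save | grep lake-service
sudo iptables -t nat -L -n | grep 10.3.0.128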
###Instruction: I am running kubernetes single node on coreos. I have created a pod running a python application and exposing an http endpoint. I have created a kubernetes service that exposes the HTTP endpoint. I have exposed this service using NodePort and can call it successfully from outside the cluster. What I am then trying is to call the HTTP service from a docker container. To test I am just using the same running container and using curl. I can docker exec into the docker container running my service. nslookup for 'my-service' resolves to an IP address. So I am happy that DNS service registration is working correctly. (I have also checked that the ENV variables for the service exist - and they do and are the same as the value returned from nslookup) However, if I then try: curl http://my-service:9042/status It just hangs with no response. (Port 9042 is the correct port and status is a valid resource) I'm assuming that I'm missing something really obvious. Any suggestions warmly received. Update: The response of curl -v is: root@lake-cluster-manager:/# curl -v http://lake-service:9042/status * Hostname was NOT found in DNS cache * Trying 10.3.0.128... The output from nslookup is: root@lake-cluster-manager:/# nslookup lake-service Server: 10.3.0.10 Address: 10.3.0.10#53 Name: lake-service.default.svc.cluster.local Address: 10.3.0.128 ### Response: From the comments, it sounds like you got this to work. Traffic to service IPs is forwarded via kube-proxy, either through a user-space proxy or through iptables rules, depending on the Kubernetes release version and configuration. You may want to look at the iptables rules for the node corresponding to that container to see if that could be the source of the problem.
I am running kubernetes single node on coreos. I have created a pod running a python application and exposing an http endpoint. I have created a kubernetes service that exposes the HTTP endpoint. I have exposed this service using NodePort and can call it successfully from outside the cluster. What I am then trying is to call the HTTP service from a docker container. To test I am just using the same running container and using curl. I can docker exec into the docker container running my service. nslookup for 'my-service' resolves to an IP address. So I am happy that DNS service registration is working correctly. (I have also checked that the ENV variables for the service exist - and they do and are the same as the value returned from nslookup) However, if I then try: curl http://my-service:9042/status It just hangs with no response. (Port 9042 is the correct port and status is a valid resource) I'm assuming that I'm missing something really obvious. Any suggestions warmly received. Update: The response of curl -v is: root@lake-cluster-manager:/# curl -v http://lake-service:9042/status * Hostname was NOT found in DNS cache * Trying 10.3.0.128... The output from nslookup is: root@lake-cluster-manager:/# nslookup lake-service Server: 10.3.0.10 Address: 10.3.0.10#53 Name: lake-service.default.svc.cluster.local Address: 10.3.0.128
I assume that you figured out from the Kubelet logs that it was the pause container that couldn't be pulled. Kubernetes needs to create a container for the pod in order to hold shared resources, such as the network namespace. It uses the pause container for this, which is a very small container that just sleeps forever.
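If the node can't pull from gcr.io, you could pre-pull the infra image by hand, or point the kubelet at a mirror you can reach. The flag below is the kubelet's knob for this; the image tag is the one typical for clusters of this era and may differ on yours:

docker pull gcr.io/google_containers/pause:0.8.0
# or, with a mirrored copy:
kubelet --pod-infra-container-image=<your-registry>/pause:0.8.0 ...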
###Instruction: I just installed a kubernetes local cluster, but when I tried the command cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80 to create and run pods, here is what I got: NAME READY STATUS RESTARTS AGE my-nginx-00t7f 0/1 ContainerCreating 0 23m my-nginx-spy2b 0/1 ContainerCreating 0 23m and I used kubectl logs, I got Pod "my-nginx-00t7f" in namespace "default" : pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending" Seems it got stuck in 'pending' status. Then I used 'kubectl describe' and got Name: my-nginx-00t7f Namespace: default Image(s): nginx Node: 127.0.0.1/127.0.0.1 Start Time: Thu, 17 Dec 2015 22:27:18 +0800 Labels: run=my-nginx Status: Pending Reason: Message: IP: Replication Controllers: my-nginx (2/2 replicas created) Containers: my-nginx: Container ID: Image: nginx Image ID: QoS Tier: cpu: BestEffort memory: BestEffort State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment Variables: Conditions: Type Status Ready False Volumes: default-token-p09p6: Type: Secret (a secret that should populate this volume) SecretName: default-token-p09p6 Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 26m 26m 1 {scheduler } Normal Scheduled Successfully assigned my-nginx-00t7f to 127.0.0.1 22m 1m 79 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ImagePullBackOff 24m 5s 8 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ErrImagePull It seems my docker can not pull images, but actually it can, there is no problem when I docker pull nginx. ### Response: I assume that you figured out that it was the pause container that couldn't be pulled from the Kubelet logs. Kubernetes needs to create a container for the pod in order to hold shared resources, such as the network namespace. It uses the pause container for this, which is a very small container that just sleeps forever.
I just installed a kubernetes local cluster, but when I tried the command cluster/kubectl.sh run my-nginx --image=nginx --replicas=2 --port=80 to create and run pods, here is what I got: NAME READY STATUS RESTARTS AGE my-nginx-00t7f 0/1 ContainerCreating 0 23m my-nginx-spy2b 0/1 ContainerCreating 0 23m and I used kubectl logs, I got Pod "my-nginx-00t7f" in namespace "default" : pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending" Seems it got stuck in 'pending' status. Then I used 'kubectl describe' and got Name: my-nginx-00t7f Namespace: default Image(s): nginx Node: 127.0.0.1/127.0.0.1 Start Time: Thu, 17 Dec 2015 22:27:18 +0800 Labels: run=my-nginx Status: Pending Reason: Message: IP: Replication Controllers: my-nginx (2/2 replicas created) Containers: my-nginx: Container ID: Image: nginx Image ID: QoS Tier: cpu: BestEffort memory: BestEffort State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment Variables: Conditions: Type Status Ready False Volumes: default-token-p09p6: Type: Secret (a secret that should populate this volume) SecretName: default-token-p09p6 Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 26m 26m 1 {scheduler } Normal Scheduled Successfully assigned my-nginx-00t7f to 127.0.0.1 22m 1m 79 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ImagePullBackOff 24m 5s 8 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: ErrImagePull It seems my docker can not pull images, but actually it can, there is no problem when I docker pull nginx.
Ensure your nodes have externally reachable IP addresses. Then create a service of type NodePort: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport And direct traffic to nodes at the allocated port.
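A minimal NodePort service might look like this (all names and ports here are placeholders for your own):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080

Clients outside the cluster can then hit <node-external-ip>:30080 on any node. If nodePort is omitted, Kubernetes picks one from its service node port range (30000-32767 by default).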
###Instruction: What I want to do is run kubernetes within docker and expose the kubernetes services externally. I followed the docs on getting kubernetes running within docker. As long as I connect from the localhost, I can access my services. However, connecting from a different computer doesn't work. If I spin up a docker image directly, then I can access it. Only things running within kubernetes aren't exposed. Is this possible? ### Response: Ensure your nodes have externally reachable IP addresses. Then create a service of type NodePort: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport And direct traffic to nodes at the allocated port.
What I want to do is run kubernetes within docker and expose the kubernetes services externally. I followed the docs on getting kubernetes running within docker. As long as I connect from the localhost, I can access my services. However, connecting from a different computer doesn't work. If I spin up a docker image directly, then I can access it. Only things running within kubernetes aren't exposed. Is this possible?
I filed an issue to document the recommended practice. I put a sketch of the approach in the issue: https://github.com/kubernetes/kubernetes/issues/20473

- ensure the pods have a non-zero terminationGracePeriodSeconds set
- configure a readinessProbe on the main serving container of the pods
- handle SIGTERM in the application: fail the readinessProbe but continue to handle normal requests, and do not exit
- set maxUnavailable and/or maxSurge large enough to ensure enough serving instances in the Deployment API spec (available in 1.2)

Container restarts, especially when they pull images, are fairly expensive for the system. The Kubelet backs off restarts of crashing containers in order to degrade gracefully without DOSing docker, the registry, the apiserver, etc.
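As a hedged sketch of those pieces in a pod template (the image, probe path, and timings are placeholders you would tune; your app must actually serve the health endpoint):

spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: server
    image: my-server:latest        # placeholder image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz             # hypothetical endpoint
        port: 80
      initialDelaySeconds: 2
      timeoutSeconds: 1

On SIGTERM the app would start failing /healthz (so the endpoints controller pulls the pod out of the service) while it finishes in-flight requests within the 30-second grace window.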
###Instruction: I have been experimenting with kubernetes recently, and I have been trying to test the failover in pods, by having a replication controller, in which containers crash as soon as they are used (thus causing a restart). I have adapted the bashttpd project for this: https://github.com/Chronojam/bashttpd (Where in I have set it up so that it serves the hostname of the container, then exits) This works great, except the restart is far to slow for what I am trying to do, as it works for the first couple of requests, then stops for a while - then starts working again when the pods are restarted. (ideally id like to see no interruption at all when accessing the service). I think (but not sure) that the backup delay mentioned here is to blame: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pod-states.md#restartpolicy some output: #] kubectl get pods NAME READY STATUS RESTARTS AGE chronojam-blog-a23ak 1/1 Running 0 6h chronojam-blog-abhh7 1/1 Running 0 6h chronojam-serve-once-1cwmb 1/1 Running 7 4h chronojam-serve-once-46jck 1/1 Running 7 4h chronojam-serve-once-j8uyc 1/1 Running 3 4h chronojam-serve-once-r8pi4 1/1 Running 7 4h chronojam-serve-once-xhbkd 1/1 Running 4 4h chronojam-serve-once-yb9hc 1/1 Running 7 4h chronojam-tactics-is1go 1/1 Running 0 5h chronojam-tactics-tqm8c 1/1 Running 0 5h #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-j8uyc </h3> #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-r8pi4 </h3> #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-yb9hc </h3> #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-46jck </h3> #] curl http://serve-once.chronojam.co.uk #] curl http://serve-once.chronojam.co.uk You'll also note that even though there should be 2 still-healthy pods there, it stops returning after the 4th. So my question is two fold: 1) Can I tweak the backoff delay? 2) Why does my service not send my request to the healthy containers? Observations: I think that it might be the webserver itself not being able to start serving requests that quickly, so kubernetes is reckonizing those pods as healthy, and sending requests there (but coming back with nothing because the process hasnt started?) ### Response: I filed an issue to document the recommended practice. I put a sketch of the approach in the issue: https://github.com/kubernetes/kubernetes/issues/20473 ensure the pods have a non-zero terminationGracePeriodSeconds set configure a readinessProbe on the main serving container of the pods handle SIGTERM in the application: fail the readinessProbe but continue * to handle normal requests and do not exit set maxUnavailable and/or maxSurge large enough to ensure enough serving instances in the Deployment API spec (available in 1.2) Container restarts, especially when they pull images, are fairly expensive for the system. The Kubelet backs off restarts of crashing containers in order to degrade gracefully with DOSing docker, the registry, the apiserver, etc.
I have been experimenting with kubernetes recently, and I have been trying to test the failover in pods, by having a replication controller, in which containers crash as soon as they are used (thus causing a restart). I have adapted the bashttpd project for this: https://github.com/Chronojam/bashttpd (Where in I have set it up so that it serves the hostname of the container, then exits) This works great, except the restart is far to slow for what I am trying to do, as it works for the first couple of requests, then stops for a while - then starts working again when the pods are restarted. (ideally id like to see no interruption at all when accessing the service). I think (but not sure) that the backup delay mentioned here is to blame: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pod-states.md#restartpolicy some output: #] kubectl get pods NAME READY STATUS RESTARTS AGE chronojam-blog-a23ak 1/1 Running 0 6h chronojam-blog-abhh7 1/1 Running 0 6h chronojam-serve-once-1cwmb 1/1 Running 7 4h chronojam-serve-once-46jck 1/1 Running 7 4h chronojam-serve-once-j8uyc 1/1 Running 3 4h chronojam-serve-once-r8pi4 1/1 Running 7 4h chronojam-serve-once-xhbkd 1/1 Running 4 4h chronojam-serve-once-yb9hc 1/1 Running 7 4h chronojam-tactics-is1go 1/1 Running 0 5h chronojam-tactics-tqm8c 1/1 Running 0 5h #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-j8uyc </h3> #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-r8pi4 </h3> #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-yb9hc </h3> #] curl http://serve-once.chronojam.co.uk <h3> chronojam-serve-once-46jck </h3> #] curl http://serve-once.chronojam.co.uk #] curl http://serve-once.chronojam.co.uk You'll also note that even though there should be 2 still-healthy pods there, it stops returning after the 4th. So my question is two fold: 1) Can I tweak the backoff delay? 2) Why does my service not send my request to the healthy containers? Observations: I think that it might be the webserver itself not being able to start serving requests that quickly, so kubernetes is reckonizing those pods as healthy, and sending requests there (but coming back with nothing because the process hasnt started?)
To configure an HA master, you should follow the High Availability Kubernetes Cluster document, in particular making sure you have replicated storage across failure domains and a load balancer in front of your replicated apiservers.
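For the etcd layer specifically, a static three-member cluster using the IPs from your table would be started with flags along these lines (a sketch shown for master1; adjust --name and the URLs on each node — the ports here are etcd's defaults, not the legacy 4001/7001 pair from your member list):

etcd --name master1 \
  --initial-advertise-peer-urls http://172.20.3.150:2380 \
  --listen-peer-urls http://172.20.3.150:2380 \
  --listen-client-urls http://172.20.3.150:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://172.20.3.150:2379 \
  --initial-cluster master1=http://172.20.3.150:2380,master2=http://172.20.3.200:2380,master3=http://172.20.3.250:2380 \
  --initial-cluster-state new

A static --initial-cluster like this also removes the dependency on the public discovery token.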
###Instruction: What I am trying to do: I have setup kubernete cluster using documentation available on Kubernetes website (http_kubernetes.io/v1.1/docs/getting-started-guides/aws.html). Using kube-up.sh, i was able to bring kubernete cluster up with 1 master and 3 minions (as highlighted in blue rectangle in the diagram below). From the documentation as far as i know we can add minions as and when required, So from my point of view k8s master instance is single point of failure when it comes to high availability. Kubernetes Master HA on AWS So I am trying to setup HA k8s master layer with the three master nodes as shown above in the diagram. For accomplishing this I am following kubernetes high availability cluster guide, http_kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer What I have done: Setup k8s cluster using kube-up.sh and provider aws (master1 and minion1, minion2, and minion3) Setup two fresh master instance’s (master2 and master3) I then started configuring etcd cluster on master1, master 2 and master 3 by following below mentioned link: http_kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer So in short i have copied etcd.yaml from the kubernetes website (http_kubernetes.io/v1.1/docs/admin/high-availability/etcd.yaml) and updated Node_IP, Node_Name and Discovery Token on all the three nodes as shown below. NODE_NAME NODE_IP DISCOVERY_TOKEN Master1 172.20.3.150 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed Master2 172.20.3.200 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed Master3 172.20.3.250 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed And on running etcdctl member list on all the three nodes, I am getting: $ docker exec <container-id> etcdctl member list ce2a822cea30bfca: name=default peerURLs=http_localhost:2380,http_localhost:7001 clientURLs=http_127.0.0.1:4001 As per documentation we need to keep etcd.yaml in /etc/kubernete/manifest, this directory already contains etcd.manifest and etcd-event.manifest files. For testing I modified etcd.manifest file with etcd parameters. After making above changes I forcefully terminated docker container, container was existing after few seconds and I was getting below mentioned error on running kubectl get nodes: error: couldn't read version from server: Get httplocalhost:8080/api: dial tcp 127.0.0.1:8080: connection refused So please kindly suggest how can I setup k8s master highly available setup on AWS. ### Response: To configure an HA master, you should follow the High Availability Kubernetes Cluster document, in particular making sure you have replicated storage across failure domains and a load balancer in front of your replicated apiservers.
What I am trying to do: I have setup kubernete cluster using documentation available on Kubernetes website (http_kubernetes.io/v1.1/docs/getting-started-guides/aws.html). Using kube-up.sh, i was able to bring kubernete cluster up with 1 master and 3 minions (as highlighted in blue rectangle in the diagram below). From the documentation as far as i know we can add minions as and when required, So from my point of view k8s master instance is single point of failure when it comes to high availability. Kubernetes Master HA on AWS So I am trying to setup HA k8s master layer with the three master nodes as shown above in the diagram. For accomplishing this I am following kubernetes high availability cluster guide, http_kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer What I have done: Setup k8s cluster using kube-up.sh and provider aws (master1 and minion1, minion2, and minion3) Setup two fresh master instance’s (master2 and master3) I then started configuring etcd cluster on master1, master 2 and master 3 by following below mentioned link: http_kubernetes.io/v1.1/docs/admin/high-availability.html#establishing-a-redundant-reliable-data-storage-layer So in short i have copied etcd.yaml from the kubernetes website (http_kubernetes.io/v1.1/docs/admin/high-availability/etcd.yaml) and updated Node_IP, Node_Name and Discovery Token on all the three nodes as shown below. NODE_NAME NODE_IP DISCOVERY_TOKEN Master1 172.20.3.150 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed Master2 172.20.3.200 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed Master3 172.20.3.250 https_discovery.etcd.io/5d84f4e97f6e47b07bf81be243805bed And on running etcdctl member list on all the three nodes, I am getting: $ docker exec <container-id> etcdctl member list ce2a822cea30bfca: name=default peerURLs=http_localhost:2380,http_localhost:7001 clientURLs=http_127.0.0.1:4001 As per documentation we need to keep etcd.yaml in /etc/kubernete/manifest, this directory already contains etcd.manifest and etcd-event.manifest files. For testing I modified etcd.manifest file with etcd parameters. After making above changes I forcefully terminated docker container, container was existing after few seconds and I was getting below mentioned error on running kubectl get nodes: error: couldn't read version from server: Get httplocalhost:8080/api: dial tcp 127.0.0.1:8080: connection refused So please kindly suggest how can I setup k8s master highly available setup on AWS.
I had this same problem after updating the certificate and key of the api-server. I fixed the DNS service after removing the secret:

# Search for the secret name
kubectl get pod --namespace=kube-system -l k8s-app=kube-dns -o yaml | grep -A1 serviceaccount
# Delete the current secret
kubectl delete secret/<name-of-the-secret> --namespace=kube-system

I also needed to restart the whole pod; I don't know if there is a better approach:

kubectl delete svc/kube-dns rc/kube-dns-v9 --namespace=kube-system
kubectl create -f dns-addon.yaml
###Instruction: I'm trying to install kubernetes on ubutnu (bare-metal installation). I install one master and one node and it looks fine. I installed the ui addon without a problem but when I tried to install the dns addon the pod restart itself continuously. in kube-apiserver logs I get: E1218 12:56:15.298118 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error I1218 12:56:15.298220 5 handlers.go:131] GET /api/v1/services: (534.467Β΅s) 401 [[kube2sky/v0.20.2 (linux/amd64) kubernetes/unknown] 172.27.35.39:33013] E1218 12:56:15.298396 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error I1218 12:56:15.298469 5 handlers.go:131] GET /api/v1/endpoints: (493.5Β΅s) 401 [[kube2sky/v0.20.2 (linux/amd64) kubernetes/unknown] 172.27.35.39:33014] I1218 12:56:16.001321 5 handlers.go:131] GET /healthz: (83.326Β΅s) 0 [[Go 1.1 package http] 127.0.0.1:42096] E1218 12:56:16.303274 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error E1218 12:56:16.303274 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error And in the kube2sky container log I get the following errors: E1218 12:57:51.713651 1 reflector.go:136] Failed to list *api.Service: the server has asked for the client to provide credentials (get services) E1218 12:57:51.713850 1 reflector.go:136] Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints) looks to me like I have a problem with service account and token but I looked in the kube2sky instance and I see that I have a token in /tmp/secrets/kubernetes.io/serviceaccount directory and the ca.crt is the correct ca of the cluster I am using: ubuntu 14.04 kubernetes 1.1.2 add I try to create the dns addon from the addon directory of kubernetes 1.1.2 ### Response: I had this same problem after updating the certificate and key of the api-server. I fixed the DNS service after removing the secret: # Search the secret name kubectl get pod --namespace=kube-system -l k8s-app=kube-dns -o yaml | grep -A1 serviceaccount # Delete the current secret kubectl delete secret/<name-of-the-secret> --namespace=kube-system I also need to restart the whole POD, don't know if there is a better approach: kubectl delete svc/kube-dns rc/kube-dns-v9 --namespace=kube-system kubectl create -f dns-addon.yaml
I'm trying to install kubernetes on ubutnu (bare-metal installation). I install one master and one node and it looks fine. I installed the ui addon without a problem but when I tried to install the dns addon the pod restart itself continuously. in kube-apiserver logs I get: E1218 12:56:15.298118 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error I1218 12:56:15.298220 5 handlers.go:131] GET /api/v1/services: (534.467Β΅s) 401 [[kube2sky/v0.20.2 (linux/amd64) kubernetes/unknown] 172.27.35.39:33013] E1218 12:56:15.298396 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error I1218 12:56:15.298469 5 handlers.go:131] GET /api/v1/endpoints: (493.5Β΅s) 401 [[kube2sky/v0.20.2 (linux/amd64) kubernetes/unknown] 172.27.35.39:33014] I1218 12:56:16.001321 5 handlers.go:131] GET /healthz: (83.326Β΅s) 0 [[Go 1.1 package http] 127.0.0.1:42096] E1218 12:56:16.303274 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error E1218 12:56:16.303274 5 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error And in the kube2sky container log I get the following errors: E1218 12:57:51.713651 1 reflector.go:136] Failed to list *api.Service: the server has asked for the client to provide credentials (get services) E1218 12:57:51.713850 1 reflector.go:136] Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints) looks to me like I have a problem with service account and token but I looked in the kube2sky instance and I see that I have a token in /tmp/secrets/kubernetes.io/serviceaccount directory and the ca.crt is the correct ca of the cluster I am using: ubuntu 14.04 kubernetes 1.1.2 add I try to create the dns addon from the addon directory of kubernetes 1.1.2
No, the point of Kubernetes is that your app doesn't have to be 'aware' of it. (Mesos has more of an "apps need to know about us" philosophy.) In Kubernetes, each pod just starts up and listens on a port. The app doesn't register its presence, or even tell what version it is. When it needs to talk to another service, it uses DNS (or even a fixed service IP) to find the load balancer for that downstream service.

Typically application developers don't prioritize infrastructure-related concerns when designing an application

In general, there are only two things to worry about (plus an optional third):

1) Make your service stateless, so you push the state out to the edges (databases and/or clients). This allows Kubernetes to 'scale' your app by running more copies. Stateful services are nearly impossible to scale.

2) Break up your app into multiple "microservices" so you can scale the "product view" function without scaling the "customer login" function.

3) Optional: Head towards 12-factor apps.
###Instruction: Typically application developers, take J2EE for instance, don't prioritize infrastructure-related concerns when designing an application. It's hard to interface with the traditional non-programmable infrastructure. A traditional approach is to build a .war file that could then be run in an app server such as JBoss. Traditional frameworks such as Spring (except the new flavor of Spring Cloud) take this as a premise. Now, if there is a fault-tolerant, elastic deployment runtime available as it is provided by Kubernetes for instance, it seems as if writing the business application the same way would ignore capabilities such as scheduling, that are provided by the runtime. A concrete question: is it typical for applications to talk (and benefit) from the runtime (i.e. Kubernetes, Mesos, etc) API? If so, could you point to a good example. Most of the resources I have found are focused on the Ops side more than the Dev. ### Response: No, the point of Kubernetes is that your app doesn't have to be 'aware' of it. (Mesos has more of a "apps need to know about us" philosophy.) In Kubernetes, each pod just starts up and listens on a port. The app doesn't register it's presence, or even tell what version it is. When it needs to talk to another service, it uses DNS (or even a fixed service IP) to find LB for that downstream service. Typically application developers don't prioritize infrastructure-related concerns when designing an application In general, there are only two things to worry about: 1) Make your service stateless, so you push the state out to the edges (Databases and/or clients). This allows Kubernetes to 'scale' your app by running more copies. Stateful services are nearly impossible to scale. 2) Break up your app into multiple "microservices" so you can scale the "product view" function without scaling the "customer login" function. 3) Optional: Head towards 12factor apps.
Typically application developers, take J2EE for instance, don't prioritize infrastructure-related concerns when designing an application. It's hard to interface with the traditional non-programmable infrastructure. A traditional approach is to build a .war file that could then be run in an app server such as JBoss. Traditional frameworks such as Spring (except the new flavor of Spring Cloud) take this as a premise. Now, if there is a fault-tolerant, elastic deployment runtime available as it is provided by Kubernetes for instance, it seems as if writing the business application the same way would ignore capabilities such as scheduling, that are provided by the runtime. A concrete question: is it typical for applications to talk (and benefit) from the runtime (i.e. Kubernetes, Mesos, etc) API? If so, could you point to a good example. Most of the resources I have found are focused on the Ops side more than the Dev.
This isn't an option in Google Compute Engine at the moment, but the feature that you're looking for will at least be worked on in Kubernetes soon.
###Instruction: I'm one question about networking on Google Compute Engine. Is there any possibility to set default "Source tag / IP range" on your account to get this value automatically set when new rules is created? For example, if I create this rule: gcloud beta compute firewall-rules create "test-net1-ssh-in" \ --allow tcp:22 \ --network "test-net1" \ --source-ranges "0.0.0.0/0" \ --target-tags "ssh-servers" Will replaced (automatically of course) by: gcloud beta compute firewall-rules create "test-net1-ssh-in" \ --allow tcp:22 \ --network "test-net1" \ --source-ranges "8.8.8.8/32" \ --target-tags "ssh-servers" And why this should be useful? Because we want that every rule created on the platform, custom rules or auto-generated rules (from kubernetes cluster) will allow only or testing computer IP Any help will be much appreciated. Thanks :) ### Response: This isn't an option in Google Compute Engine at the moment, but the feature that you're looking for will at least be worked on in Kubernetes soon.
I'm one question about networking on Google Compute Engine. Is there any possibility to set default "Source tag / IP range" on your account to get this value automatically set when new rules is created? For example, if I create this rule: gcloud beta compute firewall-rules create "test-net1-ssh-in" \ --allow tcp:22 \ --network "test-net1" \ --source-ranges "0.0.0.0/0" \ --target-tags "ssh-servers" Will replaced (automatically of course) by: gcloud beta compute firewall-rules create "test-net1-ssh-in" \ --allow tcp:22 \ --network "test-net1" \ --source-ranges "8.8.8.8/32" \ --target-tags "ssh-servers" And why this should be useful? Because we want that every rule created on the platform, custom rules or auto-generated rules (from kubernetes cluster) will allow only or testing computer IP Any help will be much appreciated. Thanks :)
Figured it out:

It's not the number of GCE instances: I increased the number of GCE instances with no change in the number of streaming clients.

It's the number of pods: each pod apparently can handle 8 connections. I simply scaled my replication controller with kubectl scale rc <rc-name> --replicas=3 to support 24 clients. I'll be looking into autoscaling (with a horizontal pod scaler?) the number of pods based on incoming HTTP requests.

Update 1: Kubernetes doesn't currently support horizontal pod scaling based on HTTP.

Update 2: Apparently there are other things at play here, like the size of the thread pool available to the server. With N threads and P pods, I'm able to maintain P*N open channels. This works particularly well for me because my clients only need to poll the server once every few seconds, and they sleep when inactive.
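If you would rather not pick the replica count by hand, newer releases can scale the rc on CPU utilization instead (CPU only — as noted above, HTTP-based scaling isn't supported):

kubectl autoscale rc <rc-name> --min=2 --max=10 --cpu-percent=80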
###Instruction: I've included more detail below, but the question I'm trying to answer is in the title. I'm currently trying to figure this out, but thought I'd ask here first in case anyone knows the answer off-hand. About my setup I have a Kubernetes service running on a Google Compute Engine cluster (started via Google Container Engine). It consists of a service (for the front-end stable IP), a replication controller, and pods running a Python server. The server is a Python gRPC server sleep-listening on a port. There are 2 pods (2 replicas specified in the replication controller), one rc, one service, and 4 GCE instances (set to autoscale up to 5 based on CPU). I'd like the service to be able to handle an arbitrary number of clients that want to stream information. However, I'm currently seeing that the service only talks to 16 of the clients. I'm hypothesizing that the number of connections is either limited by the number of GCE instances I have, or by the number of pods. I'll be doing experiments to see how changing these numbers affects things. ### Response: Figured it out: It's not the number of GCE instances: I increased the number of GCE instances with no change in the number of streaming clients. It's the number of pods: each pod apparently can handle 8 connections. I simply scaled my replication controller with kubectl scale rc <rc-name> --replicas=3 to support 24 clients. I'll be looking into autoscaling (with a horizontal pod scaler?) the number of pods based on incoming HTTP requests. Update 1: Kubernetes doesn't currently support horizontal pod scaling based on HTTP. Update 2: Apparently there are other things at play here, like the size of the thread pool available to the server. With N threads and P pods, I'm able to maintain P*N open channels. This works particularly well for me because my clients only need to poll the server once every few seconds, and they sleep when inactive.
I've included more detail below, but the question I'm trying to answer is in the title. I'm currently trying to figure this out, but thought I'd ask here first in case anyone knows the answer off-hand. About my setup I have a Kubernetes service running on a Google Compute Engine cluster (started via Google Container Engine). It consists of a service (for the front-end stable IP), a replication controller, and pods running a Python server. The server is a Python gRPC server sleep-listening on a port. There are 2 pods (2 replicas specified in the replication controller), one rc, one service, and 4 GCE instances (set to autoscale up to 5 based on CPU). I'd like the service to be able to handle an arbitrary number of clients that want to stream information. However, I'm currently seeing that the service only talks to 16 of the clients. I'm hypothesizing that the number of connections is either limited by the number of GCE instances I have, or by the number of pods. I'll be doing experiments to see how changing these numbers affects things.
Kubernetes monitoring is detailed in the documentation here, but that mostly covers tools using heapster. Node-specific information is exposed through the cAdvisor UI which can be accessed on port 4194 (see the commands below to access this through the proxy API). Heapster queries the kubelet for stats served at <kubelet address>:10255/stats/ (other endpoints can be found in the code here). Try this:

$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl -X "POST" -d '{"containerName":"/","subcontainers":true,"num_stats":1}' localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/container
...

Note that these endpoints are not documented as they are intended for internal use (and debugging), and may change in the future (we eventually want to offer a more stable versioned endpoint).

Update: As of Kubernetes version 1.2, the Kubelet exports a "summary" API that aggregates stats from all Pods:

$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/summary
...
###Instruction: Without using Heapster is there any way to collect like CPU or Disk metrics about a node within a Kubernetes cluster? How does Heapster even collect those metrics in the first place? ### Response: Kubernetes monitoring is detailed in the documentation here, but that mostly covers tools using heapster. Node-specific information is exposed through the cAdvisor UI which can be accessed on port 4194 (see the commands below to access this through the proxy API). Heapster queries the kubelet for stats served at <kubelet address>:10255/stats/ (other endpoints can be found in the code here). Try this: $ kubectl proxy & Starting to serve on 127.0.0.1:8001 $ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}") $ curl -X "POST" -d '{"containerName":"/","subcontainers":true,"num_stats":1}' localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/container ... Note that these endpoints are not documented as they are intended for internal use (and debugging), and may change in the future (we eventually want to offer a more stable versioned endpoint). Update: As of Kubernetes version 1.2, the Kubelet exports a "summary" API that aggregates stats from all Pods: $ kubectl proxy & Starting to serve on 127.0.0.1:8001 $ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}") $ curl localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/summary ...
Without using Heapster is there any way to collect like CPU or Disk metrics about a node within a Kubernetes cluster? How does Heapster even collect those metrics in the first place?
You can use the FSGroup field of the pod's security context to make GCE PDs writable by non-root users. In this example, the gce volume will be owned by group 1234 and the container process will have 1234 in its list of supplemental groups:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  securityContext:
    fsGroup: 1234
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
###Instruction: I'm playing with kubernetes and google container engine (GKE). I deployed a container from this image jupyter/all-spark-notebook This is my replication controller : { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "datalab-notebook" }, "spec": { "replicas": 1, "selector": { "app": "datalab-notebook" }, "template": { "metadata": { "name": "datalab-notebook", "labels": { "environment": "TEST", "app": "datalab-notebook" } }, "spec": { "containers": [{ "name": "datalab-notebook-container", "image": "jupyter/all-spark-notebook", "env": [], "ports": [{ "containerPort": 8888, "name": "datalab-port" }], "volumeMounts": [{ "name": "datalab-notebook-persistent-storage", "mountPath": "/home/jovyan/work" }] }], "volumes": [{ "name": "datalab-notebook-persistent-storage", "gcePersistentDisk": { "pdName": "datalab-notebook-disk", "fsType": "ext4" } }] } } } } As you can see I mounted a Google Compute Engine Persistent Disk. My issue is that the container uses a non-root user and the mounted disk is owned by root. so my container can not write to the disk. Is there a way to mount GCE persistent disks and make them read/write for containers without non-root users? Another general question : is it safe to run container with root user in Google Container Engine? Thank you in advance for your inputs ### Response: You can use the FSGroup field of the pod's security context to make GCE PDs writable by non-root users. In this example, the gce volume will be owned by group 1234 and the container process will have 1234 in its list of supplemental groups: apiVersion: v1 kind: Pod metadata: name: test-pd spec: securityContext: fsGroup: 1234 containers: - image: gcr.io/google_containers/test-webserver name: test-container volumeMounts: - mountPath: /test-pd name: test-volume volumes: - name: test-volume # This GCE PD must already exist. gcePersistentDisk: pdName: my-data-disk fsType: ext4
I'm playing with kubernetes and google container engine (GKE). I deployed a container from this image jupyter/all-spark-notebook This is my replication controller : { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "datalab-notebook" }, "spec": { "replicas": 1, "selector": { "app": "datalab-notebook" }, "template": { "metadata": { "name": "datalab-notebook", "labels": { "environment": "TEST", "app": "datalab-notebook" } }, "spec": { "containers": [{ "name": "datalab-notebook-container", "image": "jupyter/all-spark-notebook", "env": [], "ports": [{ "containerPort": 8888, "name": "datalab-port" }], "volumeMounts": [{ "name": "datalab-notebook-persistent-storage", "mountPath": "/home/jovyan/work" }] }], "volumes": [{ "name": "datalab-notebook-persistent-storage", "gcePersistentDisk": { "pdName": "datalab-notebook-disk", "fsType": "ext4" } }] } } } } As you can see I mounted a Google Compute Engine Persistent Disk. My issue is that the container uses a non-root user and the mounted disk is owned by root. so my container can not write to the disk. Is there a way to mount GCE persistent disks and make them read/write for containers without non-root users? Another general question : is it safe to run container with root user in Google Container Engine? Thank you in advance for your inputs
On Unix, MySQL programs treat the host name localhost specially, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file. This occurs even if a --port or -P option is given to specify a port number. To ensure that the client makes a TCP/IP connection to the local server, use --host or -h to specify a host name value of 127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by using the --protocol=TCP option. For example:

shell> mysql --host=127.0.0.1
shell> mysql --protocol=TCP

The --protocol option enables you to establish a particular type of connection even when the other options would normally default to some other protocol.

The other solution is using the Kubernetes Volume abstraction to share a path between containers. Edit /etc/mysql/my.cnf in both containers' images and change the socket location for both the MySQL server and client to point to the shared directory or disk.
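A sketch of that shared-path approach in the pod spec (the volume name is arbitrary and the php image is a placeholder for your php/laravel/nginx image; both containers' my.cnf would point the socket at /var/run/mysqld in this layout):

spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - name: mysql-socket
      mountPath: /var/run/mysqld
  - name: php
    image: my-php-laravel        # placeholder
    volumeMounts:
    - name: mysql-socket
      mountPath: /var/run/mysqld
  volumes:
  - name: mysql-socket
    emptyDir: {}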
###Instruction: Good day, I am using Kubernetes to run containers on the google container engine. The idea is to run two containers in a pod. One container uses the docker mysql image, the other runs php, laravel, nginx and composer. Locally, this works. The idea is that the php can connect to the database on localhost, and this should work if both containers are in the same pod. However, when the pod is launched, we see the following message in the log: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) The only difference is that when testing locally, I change localhost to the internal docker ip. Thanks and good day ### Response: On Unix, MySQL programs treat the host name localhost specially, in a way that is likely different from what you expect compared to other network-based programs. For connections to localhost, MySQL programs attempt to connect to the local server by using a Unix socket file. This occurs even if a --port or -P option is given to specify a port number. To ensure that the client makes a TCP/IP connection to the local server, use --host or -h to specify a host name value of 127.0.0.1, or the IP address or name of the local server. You can also specify the connection protocol explicitly, even for localhost, by using the --protocol=TCP option. For example: shell> mysql --host=127.0.0.1 shell> mysql --protocol=TCP The --protocol option enables you to establish a particular type of connection even when the other options would normally default to some other protocol. The other solution is using Kubernetes Volume abstraction to share a path between containers. Edit /etc/mysql/my.cnf on both containers' images and change socket location for both MySQL server and client to point to the shared directory or disk.
Good day, I am using Kubernetes to run containers on the google container engine. The idea is to run two containers in a pod. One container uses the docker mysql image, the other runs php, laravel, nginx and composer. Locally, this works. The idea is that the php can connect to the database on localhost, and this should work if both containers are in the same pod. However, when the pod is launched, we see the following message in the log: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) The only difference is that when testing locally, I change localhost to the internal docker ip. Thanks and good day
It's definitely confusing that some of the logs are showing up. Given that error message, I'd expect none of your logs to be showing up in the viewer, since it sounds like the logging API hasn't been enabled for your project yet. You can do so from the Developers Console, here. Try going there, clicking the Enable API button, and seeing whether the errors keep coming.
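If you prefer the command line, current gcloud releases can enable the API as well (the exact invocation has changed across gcloud versions, so fall back to the console link above if this form isn't available to you):

gcloud services enable logging.googleapis.com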
###Instruction: I have a rather small (1-2 node) kubernetes cluster running in GKE with Β±40 Pods running. The problem at hand is that it's not logging to the GCE Console properly. I see lots of messages from the fluentd container(s) in the following format: $ kubectl logs fluentd-cloud-logging-gke-xxxxxxxx-node-xxxx 2016-02-02 23:30:09 +0000 [warn]: Dropping 10 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 1 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 3 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 41 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 5 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." ...and so on. I'm seeing ~5 of these messages per second, so I know things are producing logs. However, in the compute engine console I see something like the following: So somewhere in between I'm obviously loosing lots of messages. Strange though, that I'm not loosing all these messages! The cluster is configured with Logging.write and Monitoring.all privileges as suggested in GH issue #15727 ### Response: It's definitely confusing that some of the logs are showing up. Given that error message, I'd expect none of your logs to be showing up in the viewer, since it sounds like the logging API hasn't been enabled for your project yet. You can do so from the Developers Console, here. Try going there, clicking the Enable API button, and seeing whether the errors keep coming.
I have a rather small (1-2 node) kubernetes cluster running in GKE with Β±40 Pods running. The problem at hand is that it's not logging to the GCE Console properly. I see lots of messages from the fluentd container(s) in the following format: $ kubectl logs fluentd-cloud-logging-gke-xxxxxxxx-node-xxxx 2016-02-02 23:30:09 +0000 [warn]: Dropping 10 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 1 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 3 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 41 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." 2016-02-02 23:30:09 +0000 [warn]: Dropping 5 log message(s) error_class="Google::APIClient::ClientError" error="Project has not enabled the API. Please use Google Developers Console to activate the 'logging' API for your project." ...and so on. I'm seeing ~5 of these messages per second, so I know things are producing logs. However, in the compute engine console I see something like the following: So somewhere in between I'm obviously loosing lots of messages. Strange though, that I'm not loosing all these messages! The cluster is configured with Logging.write and Monitoring.all privileges as suggested in GH issue #15727
Looks like Google Container Engine requires readOnly on both the volumeMount and the volume:

containers:
- volumeMounts:
  - readOnly: true
volumes:
- gcePersistentDisk:
    readOnly: true
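Putting that together with the volumes from your spec, a full pod sketch mounting both partitions read-only might look like this (the pod name, container image, and mount paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: frontend
    image: my-frontend           # placeholder
    volumeMounts:
    - name: database
      mountPath: /data/db
      readOnly: true
    - name: media
      mountPath: /data/media
      readOnly: true
  volumes:
  - name: database
    gcePersistentDisk:
      pdName: disk-1
      fsType: ext4
      partition: 1
      readOnly: true
  - name: media
    gcePersistentDisk:
      pdName: disk-1
      fsType: ext4
      partition: 2
      readOnly: true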
###Instruction: I'd like use a single google compute engine disk and mount it to multiple places a Kubernetes Google Container Engine. I'm wondering if using persistentVolumeClaim would work, but what I did expect to work is being able to mount two partitions of the same disk: ... volumes: - name: database gcePersistentDisk: pdName: dist-1 fsType: ext4 partition: 1 readOnly: true - name: media gcePersistentDisk: pdName: disk-1 fsType: ext4 partition: 2 readOnly: true Mounting either of them separately works, but trying to mount both gives an error: FailedMount Unable to mount volumes for pod "frontend-ni7uf_foo": Could not attach GCE PD "disk-1". Timeout waiting for mount paths to be created. Technically I can manually mount both partitions on a single instance. I'm wondering if it's a bug and if there is another solution for that (outside of having multiple disks)? ### Response: Looks like it Google Container Engine requires readOnly on both: containers: - volumeMounts: - readOnly: true volumes: - gcePersistentDisk: readOnly: true
I'd like use a single google compute engine disk and mount it to multiple places a Kubernetes Google Container Engine. I'm wondering if using persistentVolumeClaim would work, but what I did expect to work is being able to mount two partitions of the same disk: ... volumes: - name: database gcePersistentDisk: pdName: dist-1 fsType: ext4 partition: 1 readOnly: true - name: media gcePersistentDisk: pdName: disk-1 fsType: ext4 partition: 2 readOnly: true Mounting either of them separately works, but trying to mount both gives an error: FailedMount Unable to mount volumes for pod "frontend-ni7uf_foo": Could not attach GCE PD "disk-1". Timeout waiting for mount paths to be created. Technically I can manually mount both partitions on a single instance. I'm wondering if it's a bug and if there is another solution for that (outside of having multiple disks)?
What about the -o wide flag of kubectl get?
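For example:

kubectl get pods -o wide
kubectl get pods -o wide --all-namespaces

The wide output adds a NODE column, so you can see at a glance which cluster member each pod landed on.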
###Instruction: Is there any command I can run to easily see which pods are assigned to which node? As I mess around with resource requests and limits, it would be very helpful to visualize how pods have been assigned to cluster members. ### Response: What about the -o wide flag of kubectl get?
Is there any command I can run to easily see which pods are assigned to which node? As I mess around with resource requests and limits, it would be very helpful to visualize how pods have been assigned to cluster members.
If you use the kubectl expose command:

--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.

Or, if you create the service from a json or yaml file, use the spec/externalIPs array.
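The YAML route looks like this (the address below is a placeholder for the routable IP of your master's NIC):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
  externalIPs:
  - 1.2.3.4

With that set, kube-proxy on each node will accept traffic addressed to 1.2.3.4:80 and forward it to the service's pods.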
###Instruction: I am running a kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet. I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP). I am also able to reach the kubernetes guest book frontend service with some hack. What I did was check all the listening ports on the master and try each port with the public IP. I was able to hit the kubernetes service and get response. Based on my understanding this is directly going to the pod that is running on the master (which is also a node) rather than going through the service IP (which would have load balanced across any of the pods). My question is how do I map the external IP to the service IP? I know kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this? ### Response: If you use the kubectl expose command: --external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP. Or if you create from a json or yaml file, use the spec/externalIPs array.
I am running a kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet. I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP). I am also able to reach the kubernetes guest book frontend service with some hack. What I did was check all the listening ports on the master and try each port with the public IP. I was able to hit the kubernetes service and get response. Based on my understanding this is directly going to the pod that is running on the master (which is also a node) rather than going through the service IP (which would have load balanced across any of the pods). My question is how do I map the external IP to the service IP? I know kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this?
I think I was likely running into https://github.com/kubernetes/kubernetes/issues/15073 . (If I create a new EBS volume, I first get a different failure, and then after the pod has been killed if I try to re-create the rc I get the failure I mentioned in my question.) Also, for anyone else wondering where to look for logs, /var/log/syslog and /var/log/containers/XXX on the kubelet was where I ended up having to look.
###Instruction: I'm attempting to follow the instructions at https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry to add a private docker registry to Kubernetes, but the pod created by the rc isn't able to mount the persistent volume claim. First I'm creating a volume on EBS like so: aws ec2 create-volume --region us-west-1 --availability-zone us-west-1a --size 32 --volume-type gp2 (us-west-1a is also the availability zone that all of my kube minions are running in.) Then I create a persistent volume like so: kind: PersistentVolume apiVersion: v1 metadata: name: kube-system-kube-registry-pv labels: kubernetes.io/cluster-service: "true" spec: capacity: storage: 30Gi accessModes: - ReadWriteOnce awsElasticBlockStore: volumeID: vol-XXXXXXXX fsType: ext4 And a claim on the persistent volume like so: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: kube-registry-pvc namespace: kube-system labels: kubernetes.io/cluster-service: "true" spec: accessModes: - ReadWriteOnce resources: requests: storage: 30Gi The replication controller is specified like so: apiVersion: v1 kind: ReplicationController metadata: name: kube-registry-v0 namespace: kube-system labels: k8s-app: kube-registry version: v0 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-registry version: v0 template: metadata: labels: k8s-app: kube-registry version: v0 kubernetes.io/cluster-service: "true" spec: containers: - name: registry image: registry:2 resources: limits: cpu: 100m memory: 100Mi env: - name: REGISTRY_HTTP_ADDR value: :5000 - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY value: /var/lib/registry volumeMounts: - name: image-store mountPath: /var/lib/registry ports: - containerPort: 5000 name: registry protocol: TCP volumes: - name: image-store persistentVolumeClaim: claimName: kube-registry-pvc When I create the rc, It successfully starts a pod, but the pod is unable to mount the volume: $ kubectl describe po kube-registry --namespace=kube-system ... Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 1m 1m 1 {scheduler } Scheduled Successfully assigned kube-registry-v0-3jobf to XXXXXXXXXXXXXXX.us-west-1.compute.internal 22s 22s 1 {kubelet XXXXXXXXXXXXXXX.us-west-1.compute.internal} FailedMount Unable to mount volumes for pod "kube-registry-v0-3jobf_kube-system": Timeout waiting for volume state 22s 22s 1 {kubelet XXXXXXXXXXXXXXX.us-west-1.compute.internal} FailedSync Error syncing pod, skipping: Timeout waiting for volume state I'm able to successfully mount EBS volumes if I don't use persistent volumes and persistent volume claims. The following works without error, for example: apiVersion: v1 kind: Pod metadata: name: test-ebs spec: containers: - image: gcr.io/google_containers/test-webserver name: test-container volumeMounts: - mountPath: /test-ebs name: test-volume volumes: - name: test-volume awsElasticBlockStore: volumeID: vol-XXXXXXXX fsType: ext4 My two questions are: Does anyone know what might be going wrong and how to fix it? In general, where can I look for more details on errors like these? I haven't been able to find more detailed log messages anywhere, and "Unable to mount volumes...Timeout waiting for volume state" isn't terribly helpful. ### Response: I think I was likely running into https://github.com/kubernetes/kubernetes/issues/15073 . 
(If I create a new EBS volume, I first get a different failure, and then after the pod has been killed if I try to re-create the rc I get the failure I mentioned in my question.) Also, for anyone else wondering where to look for logs, /var/log/syslog and /var/log/containers/XXX on the kubelet was where I ended up having to look.
I'm attempting to follow the instructions at https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry to add a private docker registry to Kubernetes, but the pod created by the rc isn't able to mount the persistent volume claim. First I'm creating a volume on EBS like so: aws ec2 create-volume --region us-west-1 --availability-zone us-west-1a --size 32 --volume-type gp2 (us-west-1a is also the availability zone that all of my kube minions are running in.) Then I create a persistent volume like so: kind: PersistentVolume apiVersion: v1 metadata: name: kube-system-kube-registry-pv labels: kubernetes.io/cluster-service: "true" spec: capacity: storage: 30Gi accessModes: - ReadWriteOnce awsElasticBlockStore: volumeID: vol-XXXXXXXX fsType: ext4 And a claim on the persistent volume like so: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: kube-registry-pvc namespace: kube-system labels: kubernetes.io/cluster-service: "true" spec: accessModes: - ReadWriteOnce resources: requests: storage: 30Gi The replication controller is specified like so: apiVersion: v1 kind: ReplicationController metadata: name: kube-registry-v0 namespace: kube-system labels: k8s-app: kube-registry version: v0 kubernetes.io/cluster-service: "true" spec: replicas: 1 selector: k8s-app: kube-registry version: v0 template: metadata: labels: k8s-app: kube-registry version: v0 kubernetes.io/cluster-service: "true" spec: containers: - name: registry image: registry:2 resources: limits: cpu: 100m memory: 100Mi env: - name: REGISTRY_HTTP_ADDR value: :5000 - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY value: /var/lib/registry volumeMounts: - name: image-store mountPath: /var/lib/registry ports: - containerPort: 5000 name: registry protocol: TCP volumes: - name: image-store persistentVolumeClaim: claimName: kube-registry-pvc When I create the rc, It successfully starts a pod, but the pod is unable to mount the volume: $ kubectl describe po kube-registry --namespace=kube-system ... Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 1m 1m 1 {scheduler } Scheduled Successfully assigned kube-registry-v0-3jobf to XXXXXXXXXXXXXXX.us-west-1.compute.internal 22s 22s 1 {kubelet XXXXXXXXXXXXXXX.us-west-1.compute.internal} FailedMount Unable to mount volumes for pod "kube-registry-v0-3jobf_kube-system": Timeout waiting for volume state 22s 22s 1 {kubelet XXXXXXXXXXXXXXX.us-west-1.compute.internal} FailedSync Error syncing pod, skipping: Timeout waiting for volume state I'm able to successfully mount EBS volumes if I don't use persistent volumes and persistent volume claims. The following works without error, for example: apiVersion: v1 kind: Pod metadata: name: test-ebs spec: containers: - image: gcr.io/google_containers/test-webserver name: test-container volumeMounts: - mountPath: /test-ebs name: test-volume volumes: - name: test-volume awsElasticBlockStore: volumeID: vol-XXXXXXXX fsType: ext4 My two questions are: Does anyone know what might be going wrong and how to fix it? In general, where can I look for more details on errors like these? I haven't been able to find more detailed log messages anywhere, and "Unable to mount volumes...Timeout waiting for volume state" isn't terribly helpful.
This answer is out of date. I wrote a detailed step-by-step tutorial here using more up to date methods. I highly recommend reading it all. In a nutshell, you run a sidecar app to configure the replica set for you, and either use a service per instance or ping the K8s API for the pod IP addresses. Example: This will only work in Google Cloud. You will need to make modifications for other platforms, particularly around the volumes: Follow the example in https://github.com/leportlabs/mongo-k8s-sidecar.git git clone https://github.com/leportlabs/mongo-k8s-sidecar.git cd mongo-k8s-sidecar/example/ make add-replica ENV=GoogleCloudPlatform (do this three times) Connect to the replica set via services. mongodb://mongo-1,mongo-2,mongo-3:27017/dbname_? You can also use the raw pod IP addresses instead of creating a service per pod Use this https://github.com/thesandlord/kubernetes-pod-ip-finder.git
###Instruction: I'd like to set-up a Mongo replica set on Kubernetes. I'd like to have three replicas. This means I'd need to start 3 instances. Should I start three pods, with Mongo in each one, and use the service the point to the primary? Or should I used a replication controller somehow? ### Response: This answer is out of date. I wrote a detailed step-by-step tutorial here using more up to date methods. I highly recommend reading it all. In a nutshell, you run a sidecar app to configure the replica set for you, and either use a service per instance or ping the K8s API for the pod IP addresses. Example: This will only work in Google Cloud. You will need to make modifications for other platforms, particularly around the volumes: Follow the example in https://github.com/leportlabs/mongo-k8s-sidecar.git git clone https://github.com/leportlabs/mongo-k8s-sidecar.git cd mongo-k8s-sidecar/example/ make add-replica ENV=GoogleCloudPlatform (do this three times) Connect to the replica set via services. mongodb://mongo-1,mongo-2,mongo-3:27017/dbname_? You can also use the raw pod IP addresses instead of creating a service per pod Use this https://github.com/thesandlord/kubernetes-pod-ip-finder.git
I'd like to set-up a Mongo replica set on Kubernetes. I'd like to have three replicas. This means I'd need to start 3 instances. Should I start three pods, with Mongo in each one, and use the service the point to the primary? Or should I used a replication controller somehow?
I think Mesos Chronos is still ideal solution.
###Instruction: distributed CRON in Kubernetes is still a work in progress (https://github.com/kubernetes/kubernetes/issues/2156). What do you use for CRON jobs in Kubernetes today? Do you recommend any solution that works well with Spring/JVM-based services? Spring/JVM startup time is quite high and if CRON scheduler started a new JVM for each job, startup time might be much higher than time of actual work - is there any solution that could run the job in existing JVM? Thank you, Jakub ### Response: I think Mesos Chronos is still ideal solution.
distributed CRON in Kubernetes is still a work in progress (https://github.com/kubernetes/kubernetes/issues/2156). What do you use for CRON jobs in Kubernetes today? Do you recommend any solution that works well with Spring/JVM-based services? Spring/JVM startup time is quite high and if CRON scheduler started a new JVM for each job, startup time might be much higher than time of actual work - is there any solution that could run the job in existing JVM? Thank you, Jakub
Johan- You can watch the pods resource to get add/update/remove events for them. http://kubernetes.io/v1.1/docs/api-reference/v1/operations.html, search for 'list or watch objects of kind Pod'. Hope that helps. Note, events above does not refer to the Event API resource, but rather watch events.
###Instruction: I'm trying to get the internal IP of a pod that is being killed. What I do now is to subscribe to events with reason "Killing" that have an "involvedObject" that is of type "Pod". If so I try to retrieve the pod to get its IP. The problem (I think) is that the Pod is already killed by the time I make the request to get it so I cannot its IP this way (this approach works fine when reason is "Created"). Is there a way to get the internal IP of a pod that is being stopped/killed before it's actually removed by Kubernetes? Update What I try to do is to implement a Kubernetes service discovery module (A) for Turbine to detect all of the pods that expose Hystrix streams. I use fabric8 k8s library to watch for Kubernetes events. When an event is discovered by A to be a pod running with label "hystrix-stream" I load the pod details (using the fabric8 library) in order to the get internal IP to the pod. This IP is then sent to Turbine so that it subscribes to the hystrix stream. This works great! The problem is that when A discovers that a pod is being killed I cannot load the pod details since by that time the pod has already been removed. Thus I cannot get the IP of the pod and I cannot tell Turbine to unsubscribe from this pod. My workaround is to maintain state in A that maps the pod name to its IP address. So what I'm after is a way to avoid maintaining this state in A and instead ask Kuberentes for the IP address of a pod that is being killed. ### Response: Johan- You can watch the pods resource to get add/update/remove events for them. http://kubernetes.io/v1.1/docs/api-reference/v1/operations.html, search for 'list or watch objects of kind Pod'. Hope that helps. Note, events above does not refer to the Event API resource, but rather watch events.
I'm trying to get the internal IP of a pod that is being killed. What I do now is to subscribe to events with reason "Killing" that have an "involvedObject" that is of type "Pod". If so I try to retrieve the pod to get its IP. The problem (I think) is that the Pod is already killed by the time I make the request to get it so I cannot its IP this way (this approach works fine when reason is "Created"). Is there a way to get the internal IP of a pod that is being stopped/killed before it's actually removed by Kubernetes? Update What I try to do is to implement a Kubernetes service discovery module (A) for Turbine to detect all of the pods that expose Hystrix streams. I use fabric8 k8s library to watch for Kubernetes events. When an event is discovered by A to be a pod running with label "hystrix-stream" I load the pod details (using the fabric8 library) in order to the get internal IP to the pod. This IP is then sent to Turbine so that it subscribes to the hystrix stream. This works great! The problem is that when A discovers that a pod is being killed I cannot load the pod details since by that time the pod has already been removed. Thus I cannot get the IP of the pod and I cannot tell Turbine to unsubscribe from this pod. My workaround is to maintain state in A that maps the pod name to its IP address. So what I'm after is a way to avoid maintaining this state in A and instead ask Kuberentes for the IP address of a pod that is being killed.
I wrote a small Go app that functions like cron but writes log info to stdout (no email!) and can be built into a static binary for easy containerization. I built kubectl from source as a static binary and included it in the image (it may be a static binary in the most recent releases). Kubectl will automatically look for the service account token/certs in /var/run/secrets/kubernetes.io/serviceaccount/ so you should be good to go unless you're not using the default service account. I then set up a crontab to run kubectl to create a job at the period that I wanted. The crontab and yaml files for the jobs can be mounted as a secret. You can either use conf2kube or some other way of generating the secrets. I wrote a simple python script. It's totally a workaround until there is proper support but I hope that helps.
###Instruction: distributed CRON in Kubernetes is still a work in progress (https://github.com/kubernetes/kubernetes/issues/2156). What do you use for CRON jobs in Kubernetes today? Do you recommend any solution that works well with Spring/JVM-based services? Spring/JVM startup time is quite high and if CRON scheduler started a new JVM for each job, startup time might be much higher than time of actual work - is there any solution that could run the job in existing JVM? Thank you, Jakub ### Response: I wrote a small Go app that functions like cron but writes log info to stdout (no email!) and can be built into a static binary for easy containerization. I built kubectl from source as a static binary and included it in the image (it may be a static binary in the most recent releases). Kubectl will automatically look for the service account token/certs in /var/run/secrets/kubernetes.io/serviceaccount/ so you should be good to go unless you're not using the default service account. I then set up a crontab to run kubectl to create a job at the period that I wanted. The crontab and yaml files for the jobs can be mounted as a secret. You can either use conf2kube or some other way of generating the secrets. I wrote a simple python script. It's totally a workaround until there is proper support but I hope that helps.
distributed CRON in Kubernetes is still a work in progress (https://github.com/kubernetes/kubernetes/issues/2156). What do you use for CRON jobs in Kubernetes today? Do you recommend any solution that works well with Spring/JVM-based services? Spring/JVM startup time is quite high and if CRON scheduler started a new JVM for each job, startup time might be much higher than time of actual work - is there any solution that could run the job in existing JVM? Thank you, Jakub
I'm an advocate of running all Kubernetes services directly on the OS so forgive me if my answer is very opinionionated. You have to ask yourself if running everything in a container makes sense at such a low level, considering that you have to mount so many libs from your host and can't benefit from systemd's journal while your services run in containers. In my case the benefit was not obvious. On top of that, as you mentioned, running kubelet inside a container is not 100% supported yet. Running Kubernetes using systemd services is also a totally valid pattern technically speaking, so you shouldn't avoid updates invoking the reason that you can't run everything inside a container. However you should not mix versions (1.0 and 1.1)
###Instruction: On CoreOS, Kuberenetes master processes (apiserver, kube-proxy, controller-manager and podmaster) run in Docker, while the kubelet process runs as a systemd process outside Docker. Would it be recommended to run the master processes V1.1+ and kubelet V1.0.3 together on the master host? The reason I am asking is that CentOS Atomic Host ships with Kubernetes V1.0.3, but we would like to upgrade the master processes to V1.1.+ by running it in Docker instead of as system services directly on the opsys (CentOS intends to run all components as systemd services). Thanks, Andrej ### Response: I'm an advocate of running all Kubernetes services directly on the OS so forgive me if my answer is very opinionionated. You have to ask yourself if running everything in a container makes sense at such a low level, considering that you have to mount so many libs from your host and can't benefit from systemd's journal while your services run in containers. In my case the benefit was not obvious. On top of that, as you mentioned, running kubelet inside a container is not 100% supported yet. Running Kubernetes using systemd services is also a totally valid pattern technically speaking, so you shouldn't avoid updates invoking the reason that you can't run everything inside a container. However you should not mix versions (1.0 and 1.1)
On CoreOS, Kuberenetes master processes (apiserver, kube-proxy, controller-manager and podmaster) run in Docker, while the kubelet process runs as a systemd process outside Docker. Would it be recommended to run the master processes V1.1+ and kubelet V1.0.3 together on the master host? The reason I am asking is that CentOS Atomic Host ships with Kubernetes V1.0.3, but we would like to upgrade the master processes to V1.1.+ by running it in Docker instead of as system services directly on the opsys (CentOS intends to run all components as systemd services). Thanks, Andrej
It looks to me like the selector for your service is wrong. It's looking for a label of name: nginx, but your pods actually have app: nginx. Try changing your service file to: apiVersion: v1 kind: Service metadata: labels: name: nginx name: nginx spec: ports: - port: 80 selector: app: nginx ... or change your replication controller template to use name: nginx instead of app: nginx as the label. Basically, the labels have to match so that the service knows how to present a unified facade over your pods.
###Instruction: i've a simple kubernetes cluster with a master and 3 minions. In this scenario, if i run a simple pod of a nginx or a mysql it works properly but, if i change type of KIND into yaml file and i try to run a replicated service, pods will start but i can't access to the service. this is my yaml file for nginx with 3 replicas: apiVersion: v1 kind: ReplicationController metadata: name: nginx spec: replicas: 3 selector: app: nginx template: metadata: name: nginx labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 this is service yaml config file: apiVersion: v1 kind: Service metadata: labels: name: nginx name: nginx spec: ports: - port: 80 selector: name: nginx i run it with: # kubectl create -f nginx-rc.yaml # kubectl create -f nginx-rc-service.yaml if i run: # kubectl get pod,svc,rc -o wide i see: NAME READY STATUS RESTARTS AGE NODE nginx-kgq1s 1/1 Running 0 1m node01 nginx-pomx3 1/1 Running 0 1m node02 nginx-xi54i 1/1 Running 0 1m node03 NAME LABELS SELECTOR IP(S) PORT(S) kubernetes component=apiserver,provider=kubernetes <none> 10.254.0.1 443/TCP nginx name=nginx name=nginx 10.254.47.150 80/TCP CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS nginx nginx nginx app=nginx 3 i can see description for pod: Name: nginx-kgq1s Namespace: default Image(s): nginx Node: node01/node01 Labels: app=nginx Status: Running Reason: Message: IP: 172.17.52.3 Replication Controllers: nginx (3/3 replicas created) Containers: nginx: Image: nginx State: Running Started: Thu, 11 Feb 2016 16:28:08 +0100 Ready: True Restart Count: 0 Conditions: Type Status Ready True Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {scheduler } scheduled Successfully assigned nginx-kgq1s to node01 Thu, 11 Feb 2016 16:27:57 +0100 Thu, 11 Feb 2016 16:27:57 +0100 1 {kubelet node01} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine Thu, 11 Feb 2016 16:28:02 +0100 Thu, 11 Feb 2016 16:28:02 +0100 1 {kubelet node01} implicitly required container POD created Created with docker id bed30a90c6eb Thu, 11 Feb 2016 16:28:02 +0100 Thu, 11 Feb 2016 16:28:02 +0100 1 {kubelet node01} implicitly required container POD started Started with docker id bed30a90c6eb Thu, 11 Feb 2016 16:28:07 +0100 Thu, 11 Feb 2016 16:28:07 +0100 1 {kubelet node01} spec.containers{nginx} created Created with docker id 0a5c69cd0481 Thu, 11 Feb 2016 16:28:08 +0100 Thu, 11 Feb 2016 16:28:08 +0100 1 {kubelet node01} spec.containers{nginx} started Started with docker id 0a5c69cd0481 this is what i see if i get description for rc: Name: nginx Namespace: default Image(s): nginx Selector: app=nginx Labels: app=nginx Replicas: 3 current / 3 desired Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-kgq1s Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-pomx3 Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-xi54i and this is what i see if i get description of service: Name: nginx Namespace: default Labels: name=nginx Selector: name=nginx Type: ClusterIP IP: 10.254.47.150 Port: <unnamed> 80/TCP Endpoints: <none> 
Session Affinity: None No events. as i can see, the problem may be that i don't have an ENDPOINT but i don't have any idea how i could solve. ### Response: It looks to me like the selector for your service is wrong. It's looking for a label of name: nginx, but your pods actually have app: nginx. Try changing your service file to: apiVersion: v1 kind: Service metadata: labels: name: nginx name: nginx spec: ports: - port: 80 selector: app: nginx ... or change your replication controller template to use name: nginx instead of app: nginx as the label. Basically, the labels have to match so that the service knows how to present a unified facade over your pods.
i've a simple kubernetes cluster with a master and 3 minions. In this scenario, if i run a simple pod of a nginx or a mysql it works properly but, if i change type of KIND into yaml file and i try to run a replicated service, pods will start but i can't access to the service. this is my yaml file for nginx with 3 replicas: apiVersion: v1 kind: ReplicationController metadata: name: nginx spec: replicas: 3 selector: app: nginx template: metadata: name: nginx labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 this is service yaml config file: apiVersion: v1 kind: Service metadata: labels: name: nginx name: nginx spec: ports: - port: 80 selector: name: nginx i run it with: # kubectl create -f nginx-rc.yaml # kubectl create -f nginx-rc-service.yaml if i run: # kubectl get pod,svc,rc -o wide i see: NAME READY STATUS RESTARTS AGE NODE nginx-kgq1s 1/1 Running 0 1m node01 nginx-pomx3 1/1 Running 0 1m node02 nginx-xi54i 1/1 Running 0 1m node03 NAME LABELS SELECTOR IP(S) PORT(S) kubernetes component=apiserver,provider=kubernetes <none> 10.254.0.1 443/TCP nginx name=nginx name=nginx 10.254.47.150 80/TCP CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS nginx nginx nginx app=nginx 3 i can see description for pod: Name: nginx-kgq1s Namespace: default Image(s): nginx Node: node01/node01 Labels: app=nginx Status: Running Reason: Message: IP: 172.17.52.3 Replication Controllers: nginx (3/3 replicas created) Containers: nginx: Image: nginx State: Running Started: Thu, 11 Feb 2016 16:28:08 +0100 Ready: True Restart Count: 0 Conditions: Type Status Ready True Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {scheduler } scheduled Successfully assigned nginx-kgq1s to node01 Thu, 11 Feb 2016 16:27:57 +0100 Thu, 11 Feb 2016 16:27:57 +0100 1 {kubelet node01} implicitly required container POD pulled Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine Thu, 11 Feb 2016 16:28:02 +0100 Thu, 11 Feb 2016 16:28:02 +0100 1 {kubelet node01} implicitly required container POD created Created with docker id bed30a90c6eb Thu, 11 Feb 2016 16:28:02 +0100 Thu, 11 Feb 2016 16:28:02 +0100 1 {kubelet node01} implicitly required container POD started Started with docker id bed30a90c6eb Thu, 11 Feb 2016 16:28:07 +0100 Thu, 11 Feb 2016 16:28:07 +0100 1 {kubelet node01} spec.containers{nginx} created Created with docker id 0a5c69cd0481 Thu, 11 Feb 2016 16:28:08 +0100 Thu, 11 Feb 2016 16:28:08 +0100 1 {kubelet node01} spec.containers{nginx} started Started with docker id 0a5c69cd0481 this is what i see if i get description for rc: Name: nginx Namespace: default Image(s): nginx Selector: app=nginx Labels: app=nginx Replicas: 3 current / 3 desired Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-kgq1s Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-pomx3 Thu, 11 Feb 2016 16:27:47 +0100 Thu, 11 Feb 2016 16:27:47 +0100 1 {replication-controller } successfulCreate Created pod: nginx-xi54i and this is what i see if i get description of service: Name: nginx Namespace: default Labels: name=nginx Selector: name=nginx Type: ClusterIP IP: 10.254.47.150 Port: <unnamed> 80/TCP Endpoints: <none> Session Affinity: 
None No events. as i can see, the problem may be that i don't have an ENDPOINT but i don't have any idea how i could solve.
Summary of slack discussion: No, there is no retry limit. However, you can set a deadline on the job as of v1.2 with activeDeadlineSeconds. The system should back off restarts and then terminate the job when it hits the deadline.
###Instruction: I have batch jobs that I want to run on Kubernetes. The way I understand Jobs: If I choose restartPolicy: Never it means that if the Job fails, it will destroy the Pod and reschedule onto (potentially) another node. If restartPolicy: OnFailure, it will restart the container in the existing Pod. I'd consider a certain number of failures unrecoverable. Is there a way I can prevent it from rescheduling or restarting after a certain period of time and cleanup the unrecoverable Jobs? My current thought for a workaround to this is to have some watchdog process that looks at retryTimes and cleans up Jobs after a specified number of retries. ### Response: Summary of slack discussion: No, there is no retry limit. However, you can set a deadline on the job as of v1.2 with activeDeadlineSeconds. The system should back off restarts and then terminate the job when it hits the deadline.
I have batch jobs that I want to run on Kubernetes. The way I understand Jobs: If I choose restartPolicy: Never it means that if the Job fails, it will destroy the Pod and reschedule onto (potentially) another node. If restartPolicy: OnFailure, it will restart the container in the existing Pod. I'd consider a certain number of failures unrecoverable. Is there a way I can prevent it from rescheduling or restarting after a certain period of time and cleanup the unrecoverable Jobs? My current thought for a workaround to this is to have some watchdog process that looks at retryTimes and cleans up Jobs after a specified number of retries.
TL;DR You can share PV and PVC within the same project/namespace for shared volumes (nfs, gluster, etc...), you can also access your shared volume from multiple project/namespaces but it will require project dedicated PV and PVCs, as a PV is bound to single project/namespace and PVC is project/namespace scoped. Below I've tried to illustrate the current behavior and how PV and PVCs are scoped within OpenShift. These are simple examples using NFS as the persistent storage layer. the accessModes at this point are just labels, they have no real functionality in terms of controlling access to PV. Below are some examples to show this the PV is global in the sense that it can be seen/accessed by any project/namespace, HOWEVER once it is bound to a project, it can then only be accessed by containers from the same project/namespace the PVC is project/namespace specific (so if you have multple projects you would need to have a new PV and PVC for each project to connect to the shared NFS volume - can not reuse the PV from first project) Example 1: I have 2 distinct pods running in "default" project/namespace, both accessing the same PV and NFS exported share. Both mount and run fine. [root@k8dev nfs_error]# oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv-nfs <none> 1Gi RWO Bound default/nfs-claim 3m [root@k8dev nfs_error]# oc get pods <--- running from DEFAULT project, no issues connecting to PV NAME READY STATUS RESTARTS AGE nfs-bb-pod2-pvc 1/1 Running 0 11m nfs-bb-pod3-pvc 1/1 Running 0 10m Example 2: I have 2 distinct pods running in "default" project/namespace and attempt to create another pod using the same PV but from a new project called testproject to access the same NFS export. The third pod from the new testproject will not be able to bind to the PV as it is already bound by default project. [root@k8dev nfs_error]# oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv-nfs <none> 1Gi RWO Bound default/nfs-claim 3m [root@k8dev nfs_error]# oc get pods <--- running from DEFAULT project, no issues connecting to PV NAME READY STATUS RESTARTS AGE nfs-bb-pod2-pvc 1/1 Running 0 11m nfs-bb-pod3-pvc 1/1 Running 0 10m ** Create a new claim against the existing PV from another project (testproject) and the PVC will fail [root@k8dev nfs_error]# oc get pvc NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE nfs-claim <none> Pending 2s ** nfs-claim will never bind to the pv-nfs PV because it can not see it from it's current project scope Example 3: I have 2 distinct pods running in the "default" project and then create another PV and PVC and Pod from testproject. Both projects will be able to access the same NFS exported share but I need a PV and PVC in each of the projects. 
[root@k8dev nfs_error]# oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv-nfs <none> 1Gi RWX Bound default/nfs-claim 14m pv-nfs2 <none> 1Gi RWX Bound testproject/nfs-claim2 9m [root@k8dev nfs_error]# oc get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default nfs-bb-pod2-pvc 1/1 Running 0 11m default nfs-bb-pod3-pvc 1/1 Running 0 11m testproject nfs-bb-pod4-pvc 1/1 Running 0 15s ** notice, I now have three pods running to the same NFS shared volume across two projects, but I needed two PV's as they are bound to a single project, and 2 PVC's, one for each project and the NFS PV I am trying to access Example 4: If I by-pass PV and PVC, I can connect to the shared NFS volumes directly from any project using the nfs plugin directly volumes: - name: nfsvol nfs: path: /opt/data5 server: nfs1.rhs Now, the volume security is another layer on top of this, using supplementalGroups (for shared storage, i.e. nfs, gluster, etc...), admins and devs should further be able to manage and control access to the shared NFS system. Hope that helps
###Instruction: This may be a dumb question but I haven't found much online and want to clarify this. Given two deployments A and B, both with different container images: They're deployed in two different pods(different rc, svc etc.) in a K8/OpenShift cluster. They both need to access the same volume to read files (let's leave locking out of this for now) or at least the same directory structure in that volume. Mounting this volume using a PVC (Persistent Volume Claim) backed by a PV (Persistent Volume) configured against a NFS share. Can I confirm that the above would actually be possible? I.e. two different pods connected to the same volume with the same PVC. So they both are reading from the same volume. Hope that makes sense... ### Response: TL;DR You can share PV and PVC within the same project/namespace for shared volumes (nfs, gluster, etc...), you can also access your shared volume from multiple project/namespaces but it will require project dedicated PV and PVCs, as a PV is bound to single project/namespace and PVC is project/namespace scoped. Below I've tried to illustrate the current behavior and how PV and PVCs are scoped within OpenShift. These are simple examples using NFS as the persistent storage layer. the accessModes at this point are just labels, they have no real functionality in terms of controlling access to PV. Below are some examples to show this the PV is global in the sense that it can be seen/accessed by any project/namespace, HOWEVER once it is bound to a project, it can then only be accessed by containers from the same project/namespace the PVC is project/namespace specific (so if you have multple projects you would need to have a new PV and PVC for each project to connect to the shared NFS volume - can not reuse the PV from first project) Example 1: I have 2 distinct pods running in "default" project/namespace, both accessing the same PV and NFS exported share. Both mount and run fine. [root@k8dev nfs_error]# oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv-nfs <none> 1Gi RWO Bound default/nfs-claim 3m [root@k8dev nfs_error]# oc get pods <--- running from DEFAULT project, no issues connecting to PV NAME READY STATUS RESTARTS AGE nfs-bb-pod2-pvc 1/1 Running 0 11m nfs-bb-pod3-pvc 1/1 Running 0 10m Example 2: I have 2 distinct pods running in "default" project/namespace and attempt to create another pod using the same PV but from a new project called testproject to access the same NFS export. The third pod from the new testproject will not be able to bind to the PV as it is already bound by default project. [root@k8dev nfs_error]# oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv-nfs <none> 1Gi RWO Bound default/nfs-claim 3m [root@k8dev nfs_error]# oc get pods <--- running from DEFAULT project, no issues connecting to PV NAME READY STATUS RESTARTS AGE nfs-bb-pod2-pvc 1/1 Running 0 11m nfs-bb-pod3-pvc 1/1 Running 0 10m ** Create a new claim against the existing PV from another project (testproject) and the PVC will fail [root@k8dev nfs_error]# oc get pvc NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE nfs-claim <none> Pending 2s ** nfs-claim will never bind to the pv-nfs PV because it can not see it from it's current project scope Example 3: I have 2 distinct pods running in the "default" project and then create another PV and PVC and Pod from testproject. Both projects will be able to access the same NFS exported share but I need a PV and PVC in each of the projects. 
[root@k8dev nfs_error]# oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv-nfs <none> 1Gi RWX Bound default/nfs-claim 14m pv-nfs2 <none> 1Gi RWX Bound testproject/nfs-claim2 9m [root@k8dev nfs_error]# oc get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default nfs-bb-pod2-pvc 1/1 Running 0 11m default nfs-bb-pod3-pvc 1/1 Running 0 11m testproject nfs-bb-pod4-pvc 1/1 Running 0 15s ** notice, I now have three pods running to the same NFS shared volume across two projects, but I needed two PV's as they are bound to a single project, and 2 PVC's, one for each project and the NFS PV I am trying to access Example 4: If I by-pass PV and PVC, I can connect to the shared NFS volumes directly from any project using the nfs plugin directly volumes: - name: nfsvol nfs: path: /opt/data5 server: nfs1.rhs Now, the volume security is another layer on top of this, using supplementalGroups (for shared storage, i.e. nfs, gluster, etc...), admins and devs should further be able to manage and control access to the shared NFS system. Hope that helps
This may be a dumb question but I haven't found much online and want to clarify this. Given two deployments A and B, both with different container images: They're deployed in two different pods(different rc, svc etc.) in a K8/OpenShift cluster. They both need to access the same volume to read files (let's leave locking out of this for now) or at least the same directory structure in that volume. Mounting this volume using a PVC (Persistent Volume Claim) backed by a PV (Persistent Volume) configured against a NFS share. Can I confirm that the above would actually be possible? I.e. two different pods connected to the same volume with the same PVC. So they both are reading from the same volume. Hope that makes sense...
This limitation is fixed in the next (v1.2) release of Kubernetes, for those clouds that support UDP load-balancing.
###Instruction: I'm having some issues with gcloud setting up a container and trying to use a loadbalancer The problem is that I need it by udp and not tcp I use this: kubectl run mserver --image=gcr.io/${PROJECT_ID}/mserver:v1 --port=28000 kubectl expose rc mserver --protocol=UDP --type="LoadBalancer" using --protocol=UDP I receive this: spec.ports[0].protocol: invalid value 'UDP', Details: cannot create an external load balancer with non-TCP ports Any idea about if it's possible to create a load balancer by udp? ### Response: This limitation is fixed in the next (v1.2) release of Kubernetes, for those clouds that support UDP load-balancing.
I'm having some issues with gcloud setting up a container and trying to use a loadbalancer The problem is that I need it by udp and not tcp I use this: kubectl run mserver --image=gcr.io/${PROJECT_ID}/mserver:v1 --port=28000 kubectl expose rc mserver --protocol=UDP --type="LoadBalancer" using --protocol=UDP I receive this: spec.ports[0].protocol: invalid value 'UDP', Details: cannot create an external load balancer with non-TCP ports Any idea about if it's possible to create a load balancer by udp?
As kubernetes reserve some space if more cpu or memory is needed you should check the capacity allocated by the cluster instead of the utilization. kubectl describe nodes You can find a deeper description about the capacity of the nodes in: http://kubernetes.io/docs/user-guide/compute-resources/
###Instruction: I'm running a small node in gcloud with 2 pods running. Google cloud console shows all resources utilization <40% cpu utilization about 8k n\w bytes about 64 disk bytes. When adding the next pod, it fails with below error. FailedScheduling:Failed for reason PodExceedsFreeCPU and possibly others Based on the numbers I see in google console, ~60% CPU is available. is there anyway to get more logs? Am I missing something obvious here? Thanks in advance ! ### Response: As kubernetes reserve some space if more cpu or memory is needed you should check the capacity allocated by the cluster instead of the utilization. kubectl describe nodes You can find a deeper description about the capacity of the nodes in: http://kubernetes.io/docs/user-guide/compute-resources/
I'm running a small node in gcloud with 2 pods running. Google cloud console shows all resources utilization <40% cpu utilization about 8k n\w bytes about 64 disk bytes. When adding the next pod, it fails with below error. FailedScheduling:Failed for reason PodExceedsFreeCPU and possibly others Based on the numbers I see in google console, ~60% CPU is available. is there anyway to get more logs? Am I missing something obvious here? Thanks in advance !
You can only have one active context at a time. But kubectl accepts a --context argument (see https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts) with every command so that you can explicitly pass in the context for each command.
###Instruction: kubectl switches the context the moment we create a new cluster. since I'm trying to mimic a client server architecture, I need to view rolling logs (kubectl logs pod-id) from both sides (server cluster and agents cluster). Right now, I'm switching context using kubectl config use-context between them, is that right method? is there a way to keep more than one context active at the same time? ### Response: You can only have one active context at a time. But kubectl accepts a --context argument (see https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts) with every command so that you can explicitly pass in the context for each command.
kubectl switches the context the moment we create a new cluster. since I'm trying to mimic a client server architecture, I need to view rolling logs (kubectl logs pod-id) from both sides (server cluster and agents cluster). Right now, I'm switching context using kubectl config use-context between them, is that right method? is there a way to keep more than one context active at the same time?
There are two ways to solve that: The first option is to use a cloudsql proxy using the instructions available in: https://cloud.google.com/sql/docs/sql-proxy In your docker image you need to ensure that fuse is available in your installation, in wasn't my case (using a ubuntu:trusty-20160119 as base image). If you need to able that, then use the following steps in your Dockerfile: # install fusermount # RUN apt-get install build-essential -y # RUN wget https://github.com/libfuse/libfuse/releases/download/fuse_2_9_5/fuse-2.9.5.tar.gz # RUN tar -xzvf fuse-2.9.5.tar.gz # RUN cd fuse-2.9.5 && ./configure && make -j8 && make install Then at the startup of your container you must create a script that open the socket as described in https://cloud.google.com/sql/docs/sql-proxy#example_proxy_invocations_and_connection_strings. The second way is just to allow the ips from the nodes that support the kubernetes cluster in the whitelist for the cloudsql. I prefer the first option, because it works in any machine I deploy the image and I don't need to care about to add or remove ips if I need to deliver more nodes in the kubernetes cluster.
###Instruction: I'm creating a container with a connection to a cloudsql database, when I run the image with kubernetes It does not have an external IP that I can use to allow the new image to connect to the database. But as this is part of the init configuration I can't wait to know what is the public IP to add to the whitelist databases. I know that are ways to connect a database through services in the same cluster, but I can't figure out how to connect with the cloudsql provided by google. ### Response: There are two ways to solve that: The first option is to use a cloudsql proxy using the instructions available in: https://cloud.google.com/sql/docs/sql-proxy In your docker image you need to ensure that fuse is available in your installation, in wasn't my case (using a ubuntu:trusty-20160119 as base image). If you need to able that, then use the following steps in your Dockerfile: # install fusermount # RUN apt-get install build-essential -y # RUN wget https://github.com/libfuse/libfuse/releases/download/fuse_2_9_5/fuse-2.9.5.tar.gz # RUN tar -xzvf fuse-2.9.5.tar.gz # RUN cd fuse-2.9.5 && ./configure && make -j8 && make install Then at the startup of your container you must create a script that open the socket as described in https://cloud.google.com/sql/docs/sql-proxy#example_proxy_invocations_and_connection_strings. The second way is just to allow the ips from the nodes that support the kubernetes cluster in the whitelist for the cloudsql. I prefer the first option, because it works in any machine I deploy the image and I don't need to care about to add or remove ips if I need to deliver more nodes in the kubernetes cluster.
I'm creating a container with a connection to a cloudsql database, when I run the image with kubernetes It does not have an external IP that I can use to allow the new image to connect to the database. But as this is part of the init configuration I can't wait to know what is the public IP to add to the whitelist databases. I know that are ways to connect a database through services in the same cluster, but I can't figure out how to connect with the cloudsql provided by google.
I eventually settled on using helm (https://helm.sh/) and installed redis-cluster using helm install redis-cluster. I can connect to the redis cluster using the cluster_ip (not external ip) in kubernetes and this satisfies my security requirement. redis sentinel works out of the box with this approach.
###Instruction: I've successfully set up a password-protected redis cluster using the guide here: http://kubernetes.io/v1.1/examples/redis/ I can connect to the sentinel just fine using redis-cli, but I cannot connect to the redis master/slave even though I have exposed the sentinels. I'm using ruby and the following connection string.. Am I doing this wrong? SENTINELS = [{host: "104.122.24.897", port: 26379}] redis = Redis.new(url: "redis://mymaster", sentinels: SENTINELS, :role => :master, password: "longasspassword", timeout: 16) The error I get is: Error connecting to Redis on 10.64.7.33:6379 (Redis::TimeoutError) (Redis::CannotConnectError)` ### Response: I eventually settled on using helm (https://helm.sh/) and installed redis-cluster using helm install redis-cluster. I can connect to the redis cluster using the cluster_ip (not external ip) in kubernetes and this satisfies my security requirement. redis sentinel works out of the box with this approach.
I've successfully set up a password-protected redis cluster using the guide here: http://kubernetes.io/v1.1/examples/redis/ I can connect to the sentinel just fine using redis-cli, but I cannot connect to the redis master/slave even though I have exposed the sentinels. I'm using ruby and the following connection string.. Am I doing this wrong? SENTINELS = [{host: "104.122.24.897", port: 26379}] redis = Redis.new(url: "redis://mymaster", sentinels: SENTINELS, :role => :master, password: "longasspassword", timeout: 16) The error I get is: Error connecting to Redis on 10.64.7.33:6379 (Redis::TimeoutError) (Redis::CannotConnectError)`
It seems like your api-server is not working for some reason. Check for api-server logs at /tmp/kube-apiserver.log
###Instruction: I have been trying to get my local cluster up with kubernetes. I have created the binaries and I am running the script by the name of ./hack/local-up-cluster.sh. When I run the script, I am getting this error: root@ubuntu1604:~/kubernetes# ./hack/local-up-cluster.sh +++ [0215 00:22:00] Building go targets for linux/ppc64le: cmd/kube-apiserver cmd/kube-controller-manager cmd/kube-proxy cmd/kubectl cmd/kubelet plugin/cmd/kube-scheduler +++ [0215 00:22:03] Placing binaries curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused API SERVER port is free, proceeding... Detected host and ready to start services. Doing some housekeeping first... Using GO_OUT /root/kubernetes/_output/local/bin/linux/ppc64le Starting services now! Starting etcd etcd -data-dir /tmp/tmp.5C5e2WqxLN --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null Waiting for etcd to come up. +++ [0215 00:22:03] On try 1, etcd: : {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}} Waiting for apiserver to come up !!! [0215 00:22:13] Timed out waiting for apiserver: to answer at http://127.0.0.1:8080/api/v1/pods; tried 10 waiting 1 between each Cleaning up... Any idea how should I debug this? ### Response: It seems like your api-server is not working for some reason. Check for api-server logs at /tmp/kube-apiserver.log
I have been trying to get my local cluster up with kubernetes. I have created the binaries and I am running the script by the name of ./hack/local-up-cluster.sh. When I run the script, I am getting this error: root@ubuntu1604:~/kubernetes# ./hack/local-up-cluster.sh +++ [0215 00:22:00] Building go targets for linux/ppc64le: cmd/kube-apiserver cmd/kube-controller-manager cmd/kube-proxy cmd/kubectl cmd/kubelet plugin/cmd/kube-scheduler +++ [0215 00:22:03] Placing binaries curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused API SERVER port is free, proceeding... Detected host and ready to start services. Doing some housekeeping first... Using GO_OUT /root/kubernetes/_output/local/bin/linux/ppc64le Starting services now! Starting etcd etcd -data-dir /tmp/tmp.5C5e2WqxLN --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null Waiting for etcd to come up. +++ [0215 00:22:03] On try 1, etcd: : {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}} Waiting for apiserver to come up !!! [0215 00:22:13] Timed out waiting for apiserver: to answer at http://127.0.0.1:8080/api/v1/pods; tried 10 waiting 1 between each Cleaning up... Any idea how should I debug this?
You're not trying to "ssh", you're forwarding your standard input and receiving a standard output over HTTP through the Kubernetes API. That said, you're using Docker 1.10 whereas Kubernetes doesn't support it yet. Check this out https://github.com/kubernetes/kubernetes/issues/19720 edit: Kubernetes supports Docker 1.10+ since the 1.3.0 release.
###Instruction: I'm trying to ssh into my pod with this command kubectl --namespace=default exec -ti pod-name /bin/bash I get this error: Content-Type specified (plain/text) must be 'application/json' The process gets stuck and I have to close the terminal. I was able to ssh into my pods before I re install kubernetes in my machine. Is this an issue with latest kubernetes releases? ### Response: You're not trying to "ssh", you're forwarding your standard input and receiving a standard output over HTTP through the Kubernetes API. That said, you're using Docker 1.10 whereas Kubernetes doesn't support it yet. Check this out https://github.com/kubernetes/kubernetes/issues/19720 edit: Kubernetes supports Docker 1.10+ since the 1.3.0 release.
I'm trying to ssh into my pod with this command kubectl --namespace=default exec -ti pod-name /bin/bash I get this error: Content-Type specified (plain/text) must be 'application/json' The process gets stuck and I have to close the terminal. I was able to ssh into my pods before I re install kubernetes in my machine. Is this an issue with latest kubernetes releases?
You can't use Docker hostnames and links with Kubernetes. It works only when you have one machine, but Kubernetes is designated to be clustered solution. Enable DNS in your cluster. Create Master service (rabbitmqsvc - already done) and then use DNS name of your Master service in you slave configuration: - name: CLUSTER_WITH value: "rabbitmqsvc.svc.cluster.local" You can use also environment variables, but IMHO DNS is better option.
###Instruction: I successfully setup rabbitmq cluster using docker compose to understand the clustering concept. It worked fine below is docker compose file: rabbit1: image: ipf-queue-node hostname: rabbit1 cap_add: - ALL - NET_ADMIN - SYS_ADMIN ports: - "5671:5671" - "5672:5672" - "15672:15672" rabbit2: image: ipf-queue-node hostname: rabbit2 cap_add: - ALL - NET_ADMIN - SYS_ADMIN links: - rabbit1 environment: - CLUSTERED=true - CLUSTER_WITH=rabbit1 - RAM_NODE=true ports: - "5675:5671" - "5673:5672" - "15673:15672" Docker file content: FROM queue-base # Create directories RUN mkdir /opt/rabbit RUN mkdir /opt/simulator RUN mkdir /opt/simulator/tools # Add the files from the local repository into the container ADD rabbitmq.config /etc/rabbitmq/ ADD rabbitmq-env.conf /etc/rabbitmq/ ADD erlang.cookie /var/lib/rabbitmq/.erlang.cookie ADD startclusternode.sh /opt/rabbit/ ADD debugnodes.sh /opt/rabbit/ ADD tl /bin/tl ADD rl /bin/rl ADD rst /bin/rst # Add the simulator tooling ADD simulator_tools/ /opt/simulator/tools/ ADD ./testca /tmp/ssl ADD ./server /tmp/ssl # Set the file permissions in the container RUN chmod 644 /etc/rabbitmq/rabbitmq.config RUN chmod 644 /etc/rabbitmq/rabbitmq-env.conf RUN chmod 400 /var/lib/rabbitmq/.erlang.cookie RUN chmod 777 /opt/rabbit/startclusternode.sh RUN chmod 777 /opt/rabbit/debugnodes.sh RUN chmod 777 /bin/tl RUN chmod 777 /bin/rl RUN chmod 777 /bin/rst RUN chmod -R 777 /opt/simulator # Set ownership permissions on files in the container RUN chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie # Expose ports inside the container to the host EXPOSE 5672 EXPOSE 5671 EXPOSE 15672 EXPOSE 25672 # Run this to debug the cluster nodes by allowing ssh login #CMD /opt/rabbit/debugnodes.sh # Run this to autostart the cluster nodes CMD /opt/rabbit/startclusternode.sh startclusternode.sh is the script to setup the cluster: #!/bin/bash logfile="/tmp/rabbitnode.log" firsttimefile="/tmp/firsttimerunning" curhostname=`hostname` username="<<username>>" password="<<password>>" echo "" > $logfile echo "New Start Date:" >> $logfile date >> $logfile echo "" >> $logfile ( sleep 40 ; \ rabbitmqctl add_user $username $password ; \ rabbitmqctl set_user_tags $username administrator ; \ rabbitmqctl add_vhost $curhostname ; \ rabbitmqctl add_vhost localhost; \ rabbitmqctl set_permissions -p $curhostname $username ".*" ".*" ".*" ; \ rabbitmqctl set_permissions -p localhost $username ".*" ".*" ".*" ; \ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}' ) & sleep 5 # For version 3.5.6 the first time running the cluster needs to enable the plugins if [ -f $firsttimefile ]; then echo "First Time Running Enabling Plugins" >> $logfile /usr/sbin/rabbitmq-server -d & echo "Waiting for RabbitMQ Server to start" >> $logfile sleep 3 echo "Enabling Plugins" >> $logfile /usr/sbin/rabbitmq-plugins enable rabbitmq_stomp rabbitmq_management rabbitmq_management_agent rabbitmq_management_visualiser rabbitmq_federation rabbitmq_federation_management sockjs >> $logfile echo "Waiting for Plugins to finish" >> $logfile sleep 1 echo "Stopping the RabbitMQ using stop_app" >> $logfile /usr/sbin/rabbitmqctl stop_app echo "Stopping the RabbitMQ using stop" >> $logfile /usr/sbin/rabbitmqctl stop echo "Stopping the RabbitMQ Server" >> $logfile kill -9 `ps auwwx | grep rabbitmq-server | awk '{print $2}'` sleep 1 echo "Done First Time Running Enabling Plugins" >> $logfile rm -f $firsttimefile >> $logfile echo "Done Cleanup First Time File" >> $logfile # Allow the cluster nodes to wait 
for the master to start the first time if [ -z "$CLUSTERED" ]; then echo "Ignoring as this is the server node" >> $logfile else if [ -z "$CLUSTER_WITH" ]; then echo "Ignoring as this is the cluster master node" >> $logfile else echo "Waiting for the master node to start up" >> $logfile sleep 5 echo "Done waiting for the master node to start up" >> $logfile fi fi fi if [ -z "$CLUSTERED" ]; then echo "Starting non-Clustered Server Instance" >> $logfile # if not clustered then start it normally as if it is a single server /usr/sbin/rabbitmq-server >> $logfile echo "Done Starting non-Clustered Server Instance" >> $logfile # Tail to keep the foreground process active. tail -f /var/log/rabbitmq/* else if [ -z "$CLUSTER_WITH" ]; then # If clustered, but cluster is not specified then start normally as this could be the first server in the cluster echo "Starting Single Server Instance" >> $logfile /usr/sbin/rabbitmq-server >> $logfile echo "Done Starting Single Server Instance" >> $logfile else echo "Starting Clustered Server Instance as a DETACHED single instance" >> $logfile /usr/sbin/rabbitmq-server -detached >> $logfile echo "Stopping App with /usr/sbin/rabbitmqctl stop_app" >> $logfile /usr/sbin/rabbitmqctl stop_app >> $logfile # This should attempt to join a cluster master node from the yaml file if [ -z "$RAM_NODE" ]; then echo "Attempting to join as DISC node: /usr/sbin/rabbitmqctl join_cluster rabbit@$CLUSTER_WITH" >> $logfile /usr/sbin/rabbitmqctl join_cluster rabbit@$CLUSTER_WITH >> $logfile else echo "Attempting to join as RAM node: /usr/sbin/rabbitmqctl join_cluster --ram rabbit@$CLUSTER_WITH" >> $logfile /usr/sbin/rabbitmqctl join_cluster --ram rabbit@$CLUSTER_WITH >> $logfile fi echo "Starting App" >> $logfile /usr/sbin/rabbitmqctl start_app >> $logfile echo "Done Starting Cluster Node" >> $logfile fi # Tail to keep the foreground process active. tail -f /var/log/rabbitmq/* fi Problem is when I tried to do the same setup using kubernetes I am unable to connect to master from slave node. Approach that I took is, I created a pod for master node and another for slave node, passed hostname of the master (currently hard-coded) through environment variable. I also checked the log file at /tmp/rabbitmq.log, it is correctly taking all the environment variables. However it is unable to register with the master. I tried doing it manually also using rabbitmqctl command. But it did't work says host unreachable. Tried changing /etc/hosts file too. As per my understanding pods in kubernetes communicate through services, I guess because of this, passing directly container hostname doesn't work and rabbitmq clusiering work based on hostnames. Have anybody tried any workaround? I want to run master and slaves on different nodes. 
Below are the content of master and slave pods: apiVersion: v1 kind: Service metadata: name: rabbitmqsvc labels: app: queue-master spec: ports: - port: 5672 name: queue-rw-port - port: 15672 name: queue-mgt-port nodePort: 31606 - port: 5671 name: queue-ssl nodePort: 32718 selector: app: queue-master type: NodePort clusterIP: 10.16.0.121 --- apiVersion: v1 kind: ReplicationController metadata: name: rabbitmq-controller labels: app: queue-master spec: replicas: 1 selector: app: queue-master template: metadata: name: rabbitmq-pod labels: app: queue-master spec: nodeSelector: nodesize: small1 containers: - name: rabbitmq-master image: 172.17.0.1:5000/queue-node ports: - containerPort: 5672 name: queue-rw-port - containerPort: 15672 name: queue-mgt-port - containerPort: 5671 name: queue-ssl SLAVE: apiVersion: v1 kind: Service metadata: name: rabbitmqsvc-slave labels: app: queue-slave spec: ports: - port: 5672 name: queue-rw-port - port: 15672 name: queue-mgt-port nodePort: 31607 - port: 5671 name: queue-ssl nodePort: 32719 selector: app: queue-slave type: NodePort clusterIP: 10.16.0.122 --- apiVersion: v1 kind: ReplicationController metadata: name: rabbitmq-controller-slave labels: app: queue-slave spec: replicas: 1 selector: app: queue-slave template: metadata: name: rabbitmq-pod labels: app: queue-slave spec: nodeSelector: nodesize: small2 containers: - name: rabbitmq-slave image: 172.17.0.1:5000/queue-node env: - name: CLUSTERED value: "true" - name: CLUSTER_WITH value: "rabbitmq-controller-2ll1s" - name: RAM_NODE value: "true" - name: HOST_NAME value: "rabbit2" ports: - containerPort: 5672 name: queue-rw-port - containerPort: 15672 name: queue-mgt-port - containerPort: 5671 name: queue-ssl ### Response: You can't use Docker hostnames and links with Kubernetes. It works only when you have one machine, but Kubernetes is designated to be clustered solution. Enable DNS in your cluster. Create Master service (rabbitmqsvc - already done) and then use DNS name of your Master service in you slave configuration: - name: CLUSTER_WITH value: "rabbitmqsvc.svc.cluster.local" You can use also environment variables, but IMHO DNS is better option.
I successfully setup rabbitmq cluster using docker compose to understand the clustering concept. It worked fine below is docker compose file: rabbit1: image: ipf-queue-node hostname: rabbit1 cap_add: - ALL - NET_ADMIN - SYS_ADMIN ports: - "5671:5671" - "5672:5672" - "15672:15672" rabbit2: image: ipf-queue-node hostname: rabbit2 cap_add: - ALL - NET_ADMIN - SYS_ADMIN links: - rabbit1 environment: - CLUSTERED=true - CLUSTER_WITH=rabbit1 - RAM_NODE=true ports: - "5675:5671" - "5673:5672" - "15673:15672" Docker file content: FROM queue-base # Create directories RUN mkdir /opt/rabbit RUN mkdir /opt/simulator RUN mkdir /opt/simulator/tools # Add the files from the local repository into the container ADD rabbitmq.config /etc/rabbitmq/ ADD rabbitmq-env.conf /etc/rabbitmq/ ADD erlang.cookie /var/lib/rabbitmq/.erlang.cookie ADD startclusternode.sh /opt/rabbit/ ADD debugnodes.sh /opt/rabbit/ ADD tl /bin/tl ADD rl /bin/rl ADD rst /bin/rst # Add the simulator tooling ADD simulator_tools/ /opt/simulator/tools/ ADD ./testca /tmp/ssl ADD ./server /tmp/ssl # Set the file permissions in the container RUN chmod 644 /etc/rabbitmq/rabbitmq.config RUN chmod 644 /etc/rabbitmq/rabbitmq-env.conf RUN chmod 400 /var/lib/rabbitmq/.erlang.cookie RUN chmod 777 /opt/rabbit/startclusternode.sh RUN chmod 777 /opt/rabbit/debugnodes.sh RUN chmod 777 /bin/tl RUN chmod 777 /bin/rl RUN chmod 777 /bin/rst RUN chmod -R 777 /opt/simulator # Set ownership permissions on files in the container RUN chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie # Expose ports inside the container to the host EXPOSE 5672 EXPOSE 5671 EXPOSE 15672 EXPOSE 25672 # Run this to debug the cluster nodes by allowing ssh login #CMD /opt/rabbit/debugnodes.sh # Run this to autostart the cluster nodes CMD /opt/rabbit/startclusternode.sh startclusternode.sh is the script to setup the cluster: #!/bin/bash logfile="/tmp/rabbitnode.log" firsttimefile="/tmp/firsttimerunning" curhostname=`hostname` username="<<username>>" password="<<password>>" echo "" > $logfile echo "New Start Date:" >> $logfile date >> $logfile echo "" >> $logfile ( sleep 40 ; \ rabbitmqctl add_user $username $password ; \ rabbitmqctl set_user_tags $username administrator ; \ rabbitmqctl add_vhost $curhostname ; \ rabbitmqctl add_vhost localhost; \ rabbitmqctl set_permissions -p $curhostname $username ".*" ".*" ".*" ; \ rabbitmqctl set_permissions -p localhost $username ".*" ".*" ".*" ; \ rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}' ) & sleep 5 # For version 3.5.6 the first time running the cluster needs to enable the plugins if [ -f $firsttimefile ]; then echo "First Time Running Enabling Plugins" >> $logfile /usr/sbin/rabbitmq-server -d & echo "Waiting for RabbitMQ Server to start" >> $logfile sleep 3 echo "Enabling Plugins" >> $logfile /usr/sbin/rabbitmq-plugins enable rabbitmq_stomp rabbitmq_management rabbitmq_management_agent rabbitmq_management_visualiser rabbitmq_federation rabbitmq_federation_management sockjs >> $logfile echo "Waiting for Plugins to finish" >> $logfile sleep 1 echo "Stopping the RabbitMQ using stop_app" >> $logfile /usr/sbin/rabbitmqctl stop_app echo "Stopping the RabbitMQ using stop" >> $logfile /usr/sbin/rabbitmqctl stop echo "Stopping the RabbitMQ Server" >> $logfile kill -9 `ps auwwx | grep rabbitmq-server | awk '{print $2}'` sleep 1 echo "Done First Time Running Enabling Plugins" >> $logfile rm -f $firsttimefile >> $logfile echo "Done Cleanup First Time File" >> $logfile # Allow the cluster nodes to wait for the master to 
start the first time if [ -z "$CLUSTERED" ]; then echo "Ignoring as this is the server node" >> $logfile else if [ -z "$CLUSTER_WITH" ]; then echo "Ignoring as this is the cluster master node" >> $logfile else echo "Waiting for the master node to start up" >> $logfile sleep 5 echo "Done waiting for the master node to start up" >> $logfile fi fi fi if [ -z "$CLUSTERED" ]; then echo "Starting non-Clustered Server Instance" >> $logfile # if not clustered then start it normally as if it is a single server /usr/sbin/rabbitmq-server >> $logfile echo "Done Starting non-Clustered Server Instance" >> $logfile # Tail to keep the foreground process active. tail -f /var/log/rabbitmq/* else if [ -z "$CLUSTER_WITH" ]; then # If clustered, but cluster is not specified then start normally as this could be the first server in the cluster echo "Starting Single Server Instance" >> $logfile /usr/sbin/rabbitmq-server >> $logfile echo "Done Starting Single Server Instance" >> $logfile else echo "Starting Clustered Server Instance as a DETACHED single instance" >> $logfile /usr/sbin/rabbitmq-server -detached >> $logfile echo "Stopping App with /usr/sbin/rabbitmqctl stop_app" >> $logfile /usr/sbin/rabbitmqctl stop_app >> $logfile # This should attempt to join a cluster master node from the yaml file if [ -z "$RAM_NODE" ]; then echo "Attempting to join as DISC node: /usr/sbin/rabbitmqctl join_cluster rabbit@$CLUSTER_WITH" >> $logfile /usr/sbin/rabbitmqctl join_cluster rabbit@$CLUSTER_WITH >> $logfile else echo "Attempting to join as RAM node: /usr/sbin/rabbitmqctl join_cluster --ram rabbit@$CLUSTER_WITH" >> $logfile /usr/sbin/rabbitmqctl join_cluster --ram rabbit@$CLUSTER_WITH >> $logfile fi echo "Starting App" >> $logfile /usr/sbin/rabbitmqctl start_app >> $logfile echo "Done Starting Cluster Node" >> $logfile fi # Tail to keep the foreground process active. tail -f /var/log/rabbitmq/* fi Problem is when I tried to do the same setup using kubernetes I am unable to connect to master from slave node. Approach that I took is, I created a pod for master node and another for slave node, passed hostname of the master (currently hard-coded) through environment variable. I also checked the log file at /tmp/rabbitmq.log, it is correctly taking all the environment variables. However it is unable to register with the master. I tried doing it manually also using rabbitmqctl command. But it did't work says host unreachable. Tried changing /etc/hosts file too. As per my understanding pods in kubernetes communicate through services, I guess because of this, passing directly container hostname doesn't work and rabbitmq clusiering work based on hostnames. Have anybody tried any workaround? I want to run master and slaves on different nodes. 
Below are the content of master and slave pods: apiVersion: v1 kind: Service metadata: name: rabbitmqsvc labels: app: queue-master spec: ports: - port: 5672 name: queue-rw-port - port: 15672 name: queue-mgt-port nodePort: 31606 - port: 5671 name: queue-ssl nodePort: 32718 selector: app: queue-master type: NodePort clusterIP: 10.16.0.121 --- apiVersion: v1 kind: ReplicationController metadata: name: rabbitmq-controller labels: app: queue-master spec: replicas: 1 selector: app: queue-master template: metadata: name: rabbitmq-pod labels: app: queue-master spec: nodeSelector: nodesize: small1 containers: - name: rabbitmq-master image: 172.17.0.1:5000/queue-node ports: - containerPort: 5672 name: queue-rw-port - containerPort: 15672 name: queue-mgt-port - containerPort: 5671 name: queue-ssl SLAVE: apiVersion: v1 kind: Service metadata: name: rabbitmqsvc-slave labels: app: queue-slave spec: ports: - port: 5672 name: queue-rw-port - port: 15672 name: queue-mgt-port nodePort: 31607 - port: 5671 name: queue-ssl nodePort: 32719 selector: app: queue-slave type: NodePort clusterIP: 10.16.0.122 --- apiVersion: v1 kind: ReplicationController metadata: name: rabbitmq-controller-slave labels: app: queue-slave spec: replicas: 1 selector: app: queue-slave template: metadata: name: rabbitmq-pod labels: app: queue-slave spec: nodeSelector: nodesize: small2 containers: - name: rabbitmq-slave image: 172.17.0.1:5000/queue-node env: - name: CLUSTERED value: "true" - name: CLUSTER_WITH value: "rabbitmq-controller-2ll1s" - name: RAM_NODE value: "true" - name: HOST_NAME value: "rabbit2" ports: - containerPort: 5672 name: queue-rw-port - containerPort: 15672 name: queue-mgt-port - containerPort: 5671 name: queue-ssl
If the image was built on a host where Kubernetes is running, then yes, Kubernetes may use it under certain conditions: your pod is scheduled on a host where your image exists (the pre-built / pre-pulled image scenario) your pod's pull policy is set to IfNotPresent (see Updating Images) Using a private registry would make your life easier, though. On top of that, it's very easy to set up a local Docker registry with Kubernetes.
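A minimal sketch of a pod that uses a locally built image - the image tag and node name are placeholders, and kubernetes.io/hostname is a label every node carries by default:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  nodeSelector:
    kubernetes.io/hostname: node-with-image   # the node where the image was built
  containers:
  - name: myapp
    image: myapp:dev
    imagePullPolicy: IfNotPresent             # use the local image, don't pull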
###Instruction: I have been trying Kubernetes, I have setup a master and minions in AWS. The documentation on docker + Kubernetes seems to be lacking, or maybe I was looking for it in the wrong place. Lets say I build my docker containers on a local box (not in AWS). I donot have a docker registry. Is it possible for me to tell Kubernetes to use my local containers when spinning up pods? Or do my containers need to be put in a docker registry? ### Response: If the image was built on a host where Kubernetes is running, then yes, Kubernetes may use it on some conditions: your pod is scheduled on a host where your image exists (pre-built / pre-pulled image scenario) your pod pull policy is set to IfNotPresent (see Updating Images) Using a private registry would make your life easier though, on top of that it's very easy to set up a local Docker registry with Kubernetes.
I have been trying Kubernetes; I have set up a master and minions in AWS. The documentation on Docker + Kubernetes seems to be lacking, or maybe I was looking for it in the wrong place. Let's say I build my Docker containers on a local box (not in AWS). I do not have a Docker registry. Is it possible for me to tell Kubernetes to use my local containers when spinning up pods? Or do my containers need to be put in a Docker registry?
You can't. From the v1 API specs: "ports": [ { "name": "string", "hostPort": 0, "containerPort": 0, "protocol": "string", "hostIP": "string" } ] Each port is uniquely identified and exposing host ports would be an anti-pattern in Kubernetes.
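Since ranges aren't supported, each port has to be listed individually in the manifest. If the range is large, one workaround is to generate the entries; a hedged sketch (the indentation must match where the ports list sits in your spec):

for p in $(seq 65000 65050); do
  printf -- '        - containerPort: %s\n          hostPort: %s\n' "$p" "$p"
done

Keep in mind that requesting a hostPort also constrains scheduling, since only one pod per node can claim a given host port.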
###Instruction: In docker, I can expose a range of ports using "-p 65000-65050:65000-65050". How do I achieve this for kubernetes in a pod.yml or replication-controller.yml? ### Response: You can't. From the v1 API specs: "ports": [ { "name": "string", "hostPort": 0, "containerPort": 0, "protocol": "string", "hostIP": "string" } ] Each port is uniquely identified and exposing host ports would be an anti-pattern in Kubernetes.
In docker, I can expose a range of ports using "-p 65000-65050:65000-65050". How do I achieve this for kubernetes in a pod.yml or replication-controller.yml?
Flocker is not required. The functionality you are seeking is what Kubernetes Volume Plugins provide. The way to think about a Kubernetes Persistent Volume (PV) is that it is a configuration object that stores information about a specific network storage asset. When a user submits a claim, assuming it finds a match, it will bind to one of the Persistent Volumes in the pool of available Persistent Volumes. This means your claim is bound to an object that contains information about a specific network storage asset. When a claim is specified in a Pod or RC, the runtime is able to ascertain the PV bound to the claim and then determine which Kubernetes Volume Plugin to use and what parameters to pass it, based on the properties of the PV. As such, wherever your Pods run in the cluster, they will be able to perform a network mount of the storage asset described in the PV. None of this data will be local. The pod can die and be restarted on any node in the cluster and it will reconnect to the same network storage asset specified in the PV. Any Kubernetes Volume Plugin, with the exception of EmptyDir and HostPath, can be specified in a Persistent Volume definition. So you could create a PV that uses the Ceph RBD volume plugin and you would have the functionality that you seek.
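A sketch of what such a PV and a matching claim might look like - the monitor address, pool, RBD image, and secret name are all placeholders for your Ceph setup:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  rbd:
    monitors:
    - 10.16.154.78:6789        # your Ceph monitor address(es)
    pool: rbd
    image: data-volume         # an existing RBD image
    user: admin
    secretRef:
      name: ceph-secret        # secret holding the Ceph key
    fsType: ext4
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Pods then reference ceph-claim by name in a persistentVolumeClaim volume, and the RBD image follows them to whichever node they land on.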
###Instruction: We need to made volume to be managed easily. We ned to use PV volume, but we want to be able to start volume on any node, and data not stored on node (if node crash no problems in this way) so we think about flocker with Ceph backend. What's the best solution for production ? ### Response: Flocker is not required. The functionality you are seeking is what Kubernetes Volume Plugins provide. The way to think about a Kubernetes Persistent Volume (PV) is that it is a configuration object that stores information about a specific network storage asset. When a user submits a claim, assuming it finds a match, it will bind to one of the Persistent Volumes in the pool of available Persistent Volumes. This means your claim is bound to an object that contains information about a specific network storage asset. When a claim is specified in a Pod or RC, the runtime is able to ascertain the PV bound to the claim and then ascertain which Kubernetes Volume Plugin to use and what parameters to pass itm based on the properties of the PV. As such, wherever your Pods run in the cluster, they will be able to perform a network mount of the storage asset described in the PV. None of this data will be local. The pod can die and be restarted on any node in the cluster and it will reconnect to the same network storage asset specified in the PV. Any Kubernetes Volume Plugin, with the exception of EmptyDir and HostPath, can be specified in a Persistent Volume Definition. So you could create a PV that uses the Ceph RBD volume plugin and you would have the functionality that you seek.
We need volumes to be managed easily. We need to use PV volumes, but we want to be able to start a volume on any node, with data not stored on the node (if a node crashes there are no problems this way), so we are thinking about Flocker with a Ceph backend. What's the best solution for production?
Unfortunately right now the answer is no. The easiest thing to do is to create three replication controllers, each with the desired number of replicas, and each with a podTemplate that uses nodeSelector to pick one node. This has the downside that you are specifying an exact node name, so if that node fails, the pods will go pending. As @bitchecker said, in theory you could use one ReplicationController and use the --max-pods option on each node. This is tricky, though, because system pods like Heapster will be included in the count, and in any event it doesn't work if you want to share the node with pods from other ReplicationControllers. (And it has the same problem from above, where it doesn't really work when the node fails and you want those pods to be rescheduled onto another node). I don't think writing your own scheduler would be the right approach, but you could write your own controller or modify ReplicationController, if you are ambitious. :-) Good luck!
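A sketch of one of the three controllers, using nodeSelector with the node's built-in hostname label - the names and counts are placeholders (e.g. 2 of 10 total replicas for the 20% node):

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-minion1
spec:
  replicas: 2
  selector:
    app: myapp
    shard: minion1
  template:
    metadata:
      labels:
        app: myapp
        shard: minion1
    spec:
      nodeSelector:
        kubernetes.io/hostname: minion1   # pin this controller's pods to one node
      containers:
      - name: myapp
        image: myapp:latest

The extra shard label keeps the three controllers' selectors from overlapping each other.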
###Instruction: If I have 3 minions in my kubernetes cluster, is there a way I can tell replication controller to create 20% of pods on minion1, 50% on minion2 and 30% on minion 3? ### Response: Unfortunately right now the answer is no. The easiest thing to do is to create three replication controllers, each with the desired number of replicas, and each with a podTemplate that uses nodeSelector to pick one node. This has the downside that you are specifying an exact node name, so if that node fails, the pods will go pending. As @bitchecker said, in theory you could use one ReplicationController and use the --max-pods option on each node. This is tricky, though, because system pods like Heapster will be included in the count, and in any event it doesn't work if you want to share the node with pods from other ReplicationControllers. (And it has the same problem from above, where it doesn't really work when the node fails and you want those pods to be rescheduled onto another node). I don't think writing your own scheduler would be the right approach, but you could write your own controller or modify ReplicationController, if you are ambitious. :-) Good luck!
If I have 3 minions in my kubernetes cluster, is there a way I can tell replication controller to create 20% of pods on minion1, 50% on minion2 and 30% on minion 3?
For anyone wondering what the cause of the issue was: we added more VMs to the cluster and set up resource requests/limits on each pod to prevent the whole cluster from running out of resources. This seems to have solved it. Alex, thanks again for your help.
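For reference, a sketch of what such requests/limits look like in a container spec - the numbers are placeholders and should be sized for your workload:

containers:
- name: app
  image: myapp:latest
  resources:
    requests:          # what the scheduler reserves for the pod
      cpu: 100m
      memory: 128Mi
    limits:            # the hard cap the pod may not exceed
      cpu: 500m
      memory: 256Mi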
###Instruction: Yesterday I recreated a cluster so that it has version 1.1.7 for both master and nodes. After deploying a first service there it's no longer operational as it should be. I can't ssh into the nodes. Deployments fail with a FailedScheduling error. The Kube UI fails with the following response. { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "no endpoints available for service \"kube-ui\"", "reason": "ServiceUnavailable", "code": 503 } Resetting the nodes doesn't help here. Any ideas to what could cause this? ### Response: For anyone wondering what was the cause of the issue, we added more VMs to the cluster and setup resource request/limit on each pod to prevent the whole cluster to run out of resources. This seems to solve it. Alex, thanks again for your help.
Yesterday I recreated a cluster so that it has version 1.1.7 for both master and nodes. After deploying a first service there it's no longer operational as it should be. I can't ssh into the nodes. Deployments fail with a FailedScheduling error. The Kube UI fails with the following response. { "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "no endpoints available for service \"kube-ui\"", "reason": "ServiceUnavailable", "code": 503 } Resetting the nodes doesn't help here. Any ideas to what could cause this?
What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically? https://github.com/kubernetes/contrib/tree/master/Ingress/controllers/gce#overview (I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which I need.) Ingress will support HTTPS in 1.2. This is what the resource will look like: https://github.com/kubernetes/kubernetes/issues/19497#issuecomment-174112834. In the meantime you can set up HTTP load balancing with the Ingress and hand-modify it to support HTTPS. Apologies beforehand that this is convoluted; it will get better soon. First create an HTTP Ingress: Create Services of Type=NodePort Make sure you have BackendService quota Create a HTTP Ingress Expose the node port(s) of the service in the firewall (also as mentioned in https://cloud.google.com/container-engine/docs/tutorials/http-balancer) Wait till kubectl describe ing shows HEALTHY for your backends. At this point you should be able to curl your Ingress load balancer IP and hit the nginx service (or whatever service you created in step 1). Then do the following, manually through the GCE console: Change the IP of the Ingress resource from "Ephemeral" to "Static" (look for the IP from kubectl get ing in the "External IP addresses" tab) Create your SSL cert. If you just want a self-signed cert you can do: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc" Create a new target HTTPS proxy and forwarding rule for the HTTPS load balancer and assign it to the same (static) IP of the HTTP load balancer. At this point you should be able to curl https://loadbalancer-ip -k
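For the "Create a HTTP Ingress" step, a minimal path-based sketch - the service names and paths are placeholders, and both services are assumed to be Type=NodePort:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: content-ingress
spec:
  rules:
  - http:
      paths:
      - path: /svc-a
        backend:
          serviceName: svc-a
          servicePort: 80
      - path: /svc-b
        backend:
          serviceName: svc-b
          servicePort: 80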
###Instruction: I am setting a couple of services running on Google Container Engine, with traffic coming in through a Google HTTP Load Balancer, using path mapping. There is a good Google tutorial on setting up content-based load-balancing here, but it is all in terms of plain Google Compute objects like instance groups and backend services. I, however, have Kubernetes services, pods and clusters. What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically? (I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which need.) ### Response: What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically? https://github.com/kubernetes/contrib/tree/master/Ingress/controllers/gce#overview (I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which need.) Ingress will support HTTPS in 1.2. This is what the resource will look like: https://github.com/kubernetes/kubernetes/issues/19497#issuecomment-174112834. In the meanwhile you can setup HTTP loadbalancing with the Ingress and hand modify it to support https. Apologies beforehand that this is convoluted, it will get better soon. First create an HTTP Ingress: Create Services of Type=NodePort Make sure you have BackendService quota Create a HTTP Ingress Expose the node port (s) of the service in the Firewall (also as mentioned in https://cloud.google.com/container-engine/docs/tutorials/http-balancer) Wait till kubect describe ing shows HEALTHY for you backends. At this point you should be able to curl your Ingress loadbalancer IP and hit the nginx service (or whatever service you created in step 1). Then do the following, manually through the GCE console: Change the IP of the Ingress resource from "Ephmermal" to "Static" (look for the IP in kubectl get ing in the "External IP addresses" tab) Create your ssl cert. If you just want a self signed cert you can do: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=nginxsvc/O=nginxsvc" Create a new target HTTPS proxy and forwarding rule for the HTTPS load balancer and assign it to the same (static) IP of the http load balancer. At this point you should be able to curl https://loadbalancer-ip -k
I am setting up a couple of services running on Google Container Engine, with traffic coming in through a Google HTTP Load Balancer, using path mapping. There is a good Google tutorial on setting up content-based load-balancing here, but it is all in terms of plain Google Compute objects like instance groups and backend services. I, however, have Kubernetes services, pods and clusters. What is the relationship between the Kubernetes objects and the Google Compute resources? How do I map between the two programmatically? (I am aware that I could be using a Kubernetes web ingress object to do the balancing, as explained here, but it looks like Kubernetes Ingress does not yet support HTTPS, which I need.)
Liveness and readiness probes are the designated pod health checks in the Kubernetes world.
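A sketch of both probes on a container - the image, paths, ports, and timings are placeholders for whatever health endpoints your Node app exposes:

containers:
- name: app
  image: my-node-app:latest
  livenessProbe:              # failing this makes the kubelet restart the container
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    timeoutSeconds: 1
  readinessProbe:             # failing this removes the pod from service endpoints
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
    timeoutSeconds: 1

With a liveness probe in place the kubelet restarts the container when the master process crashes or hangs (and kubectl describe pod records the restarts), while the readiness probe keeps traffic away from the pod until it can actually serve - which is what prevents clients from seeing 500s from a crashed container.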
###Instruction: I simply do CMD ["node", "/dist/app.js"] from my dockerfile to run my master process inside the container. Container is hosted in Kubernetes. When something goes wrong with the pod, kubernetes rc will spin up another one. However, if my master process crashes inside the pod-->container, will Kubernetes know about it? Wondering whats the best way to monitor the process running inside the container, log crashes and restart the process as appropriate so that my client doesn't get 500 when calls gets routed to the crashed container. In the VM world, I used forever, where forever will give me logs + allows me to configure restarts etc.. what is equivalent in container/kubernetes world? Thanks ! ### Response: Liveness and readiness probes are designated to be pod health checks in Kubernetes world.
I simply do CMD ["node", "/dist/app.js"] from my dockerfile to run my master process inside the container. Container is hosted in Kubernetes. When something goes wrong with the pod, kubernetes rc will spin up another one. However, if my master process crashes inside the pod-->container, will Kubernetes know about it? Wondering whats the best way to monitor the process running inside the container, log crashes and restart the process as appropriate so that my client doesn't get 500 when calls gets routed to the crashed container. In the VM world, I used forever, where forever will give me logs + allows me to configure restarts etc.. what is equivalent in container/kubernetes world? Thanks !
Ingress objects help configure HTTP(S) load balancing for a single cluster. They don't have a concept of multiple clusters, so they aren't going to have a configuration language for what you are trying to accomplish (maybe they will with Ubernetes, but they certainly don't today). The upshot is that you can bypass the Ingress configuration and configure the routing manually (after all, Ingress is really just an ease-of-use shortcut for a typical L7 configuration). You can create your own L7 configuration in GCP and set up the path based forwarding to route to different backend groups. You can then assign the backend groups to a NodePort service that you configure in each of your clusters. The rough steps are: Create a NodePort service in each cluster Create an HTTP health check for each service Add a firewall rule to allow http health checks to hit your backends Add a service to the instance group for your cluster (e.g. gcloud compute instance-groups managed set-named-ports ...) Add backend services for the load balancer (e.g. gcloud compute backend-services create ...) Add a backend for your cluster to this backend service (e.g. gcloud compute backend-services add-backend ...) Map that URL to your backend service (e.g. gcloud compute url-maps create ...) Create a load balancing proxy for that backend service (e.g. gcloud compute target-http-proxies create ...) Create a forwarding rule for that proxy (e.g. gcloud compute forwarding-rules create ...)
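For step 1, a sketch of the NodePort service to create in each cluster - the name, selector, and ports are placeholders, and the nodePort must fall within the cluster's configured node port range:

apiVersion: v1
kind: Service
metadata:
  name: sa
spec:
  type: NodePort
  selector:
    app: sa
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080   # the port the L7 backend service will target on each node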
###Instruction: I have the following setup: k8s cluster A, containing service SA k8s cluster B, containing service SB, and an HTTP ingress that routes traffic to SB Is it possible to add service SA as the backend service for one of the path of the ingress? If so, how do I refer to it in the ingress configuration file? (using selectors in the usual way doesn't work, presumably because we are in different clusters) ### Response: Ingress objects help configure HTTP(S) load balancing for a single cluster. They don't have a concept of multiple clusters, so they aren't going to have a configuration language for what you are trying to accomplish (maybe they will with Ubernetes, but they certainly don't today). The upshot is that you can bypass the Ingress configuration and configure the routing manually (after all, Ingress is really just an ease-of-use shortcut for a typical L7 configuration). You can create your own L7 configuration in GCP and set up the path based forwarding to route to different backend groups. You can then assign the backend groups to a NodePort service that you configure in each of your clusters. The rough steps are: Create a NodePort service in each cluster Create an HTTP health check for each service Add a firewall rule to allow http health checks to hit your backends Add a service to the instance group for your cluster (e.g. gcloud compute instance-groups managed set-named-ports ...) Add backend services for the load balancer (e.g. gcloud compute backend-services create ...) Add a backend for your cluster to this backend service (e.g. gcloud compute backend-services add-backend ...) Map that URL to your backend service (e.g. gcloud compute url-maps create ...) Create a load balancing proxy for that backend service (e.g. gcloud compute target-http-proxies create ...) Create a forwarding rule for that proxy (e.g. gcloud compute forwarding-rules create ...)
I have the following setup: k8s cluster A, containing service SA k8s cluster B, containing service SB, and an HTTP ingress that routes traffic to SB Is it possible to add service SA as the backend service for one of the path of the ingress? If so, how do I refer to it in the ingress configuration file? (using selectors in the usual way doesn't work, presumably because we are in different clusters)
You should use the kubectl rolling-update command. For a bit of background, imagine that you have an application that is running in 10 pods. Now you have a new version of your application. You don't want to stop the current version and then start the new version, because you will have a period of time where you aren't serving any user traffic. And if there is an issue with the new version, that period of time may be quite long as you push the new version, detect an issue, remove the new version, and restart the old version. A rolling update will replace your pods one at a time with a new pod running the updated version of your application. This allows you to gradually shift incoming requests to the new version without any downtime. It also allows you to catch issues with your new version while it is serving only a fraction of incoming requests.
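In practice it looks something like this - the controller and image names are placeholders, and you should check kubectl help rolling-update for the flags available in your version:

# replace the pods of my-rc one at a time with a new image
kubectl rolling-update my-rc --image=myregistry/myapp:v2

# or supply a complete new replication controller definition
kubectl rolling-update my-rc -f my-rc-v2.yaml

# abort and revert an update that has gone wrong
kubectl rolling-update my-rc --rollback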
###Instruction: Went through to tutorial without any issues but I'm confused on the best way to push updates to the application. The tutorial mentions a bit about kubectl rolling-update but not really following that. Any feedback on exact steps to use after deploying the app? ### Response: You should use the kubectl rolling update command. For a bit of background, imagine that you have an application that is running in 10 pods. Now you have a new version of your application. You don't want to stop the current version and then start the new version, because you will have a period of time where you aren't serving any user traffic. And if there is an issue with the new version, that period of time may be quite long as you push the new version, detect an issue, remove the new version, and restart the old version. A rolling update will replace your pods one at a time with a new pod running the updated version of your application. This allows you to gradually shift incoming requests to the new version without any downtime. It also allows you to catch issues with your new version while it serving a fraction of incoming requests.
Went through the tutorial without any issues, but I'm confused on the best way to push updates to the application. The tutorial mentions a bit about kubectl rolling-update, but I'm not really following that. Any feedback on exact steps to use after deploying the app?
According to the Docker Hub documentation for the postgres image, you should be using the environment variable POSTGRES_PASSWORD instead of DB_PASS.
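So the env block in the replication controller would become:

env:
- name: POSTGRES_PASSWORD
  value: password
- name: PGDATA
  value: /var/lib/postgresql/data/pgdata

Note that the official image only applies POSTGRES_PASSWORD when it initializes an empty data directory, so if the persistent disk already holds data you may need to wipe the PGDATA directory (or change the password inside the database) for it to take effect.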
###Instruction: For some reason, the postgres instance isn't being locked down with a password using the following kubernetes script. apiVersion: v1 kind: ReplicationController metadata: name: postgres labels: name: postgres spec: replicas: 1 template: metadata: labels: name: postgres spec: containers: - resources: image: postgres:9.4 name: postgres env: - name: DB_PASS value: password - name: PGDATA value: /var/lib/postgresql/data/pgdata ports: - containerPort: 5432 name: postgres volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres-persistent-storage volumes: - name: postgres-persistent-storage gcePersistentDisk: pdName: postgres-disk fsType: ext4 Any ideas? ### Response: According to the docker hub documentation for the postgres image you should be using the environment variable POSTGRES_PASSWORD instead of DB_PASSWORD.
For some reason, the postgres instance isn't being locked down with a password using the following kubernetes script. apiVersion: v1 kind: ReplicationController metadata: name: postgres labels: name: postgres spec: replicas: 1 template: metadata: labels: name: postgres spec: containers: - resources: image: postgres:9.4 name: postgres env: - name: DB_PASS value: password - name: PGDATA value: /var/lib/postgresql/data/pgdata ports: - containerPort: 5432 name: postgres volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres-persistent-storage volumes: - name: postgres-persistent-storage gcePersistentDisk: pdName: postgres-disk fsType: ext4 Any ideas?
If you only want to run the pod on a single node, you actually don't want to use a DaemonSet. DaemonSets are designed for running a pod on every node, not a single specific node. To run a pod on a specific node, you can use a nodeSelector in the pod specification, as documented in the Node Selection example in the docs. edit: But for anyone reading this that does want to run something on every node in GKE, there are two things I can say: First, DaemonSet will be enabled in GKE in version 1.2, which is planned for March. It isn't enabled in GKE in version 1.1 because it wasn't considered stable enough at the time 1.1 was cut. Second, if you want to run something on every node before 1.2 is out, we recommend creating a replication controller with a number of replicas greater than your number of nodes and asking for a hostPort in the container spec. The hostPort will ensure that no more than one pod from the RC will be run per node.
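Concretely, that means labeling the node that holds the static IP and selecting it from the pod spec - the label key/value and node name are placeholders:

kubectl label nodes gke-mycluster-node-1 static-egress=true

and in the pod (or the RC's pod template):

spec:
  nodeSelector:
    static-egress: "true"
  containers:
  - name: myapp
    image: myapp:latest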
###Instruction: I have a Google Container Engine cluster with 21 nodes, there is one pod in particular that I need to always be running on a node with a static IP address (for outbound purposes). Kubernetes supports DaemonSets This is a way to have a pod be deployed to a specific node (or in a set of nodes) by giving the node a label that matches the nodeSelector in the DaemonSet. You can then assign a static IP to the VM instance that the labeled node is on. However, GKE doesn't appear to support the DaemonSet kind. $ kubectl create -f go-daemonset.json error validating "go-daemonset.json": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false $ kubectl create -f go-daemonset.json --validate=false unable to recognize "go-daemonset.json": no kind named "DaemonSet" is registered in versions ["" "v1"] When will this functionality be supported and what are the workarounds? ### Response: If you only want to run the pod on a single node, you actually don't want to use a DaemonSet. DaemonSets are designed for running a pod on every node, not a single specific node. To run a pod on a specific node, you can use a nodeSelector in the pod specification, as documented in the Node Selection example in the docs. edit: But for anyone reading this that does want to run something on every node in GKE, there are two things I can say: First, DaemonSet will be enabled in GKE in version 1.2, which is planned for March. It isn't enabled in GKE in version 1.1 because it wasn't considered stable enough at the time 1.1 was cut. Second, if you want to run something on every node before 1.2 is out, we recommend creating a replication controller with a number of replicas greater than your number of nodes and asking for a hostPort in the container spec. The hostPort will ensure that no more than one pod from the RC will be run per node.
I have a Google Container Engine cluster with 21 nodes, there is one pod in particular that I need to always be running on a node with a static IP address (for outbound purposes). Kubernetes supports DaemonSets This is a way to have a pod be deployed to a specific node (or in a set of nodes) by giving the node a label that matches the nodeSelector in the DaemonSet. You can then assign a static IP to the VM instance that the labeled node is on. However, GKE doesn't appear to support the DaemonSet kind. $ kubectl create -f go-daemonset.json error validating "go-daemonset.json": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false $ kubectl create -f go-daemonset.json --validate=false unable to recognize "go-daemonset.json": no kind named "DaemonSet" is registered in versions ["" "v1"] When will this functionality be supported and what are the workarounds?
In my circumstance, it was because kube-proxy (v1.1.4) was missing the --proxy-mode=iptables flag. Evidently in 1.1.4, the default is something other than iptables, and specifying that flag made the logs immediately stop spewing those messages.
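For reference, this is roughly what the fixed invocation looks like - the master address is a placeholder, and any other flags you already pass stay as they are:

kube-proxy --master=https://<master-ip> --proxy-mode=iptables

On CoreOS that typically means adding the flag to whatever launches kube-proxy (the systemd unit or static manifest) and then restarting it.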
###Instruction: I got errors in my kube-proxy: E0107 21:48:57.738867 1 proxysocket.go:160] I/O error: read tcp 10.2.11.253:37568: connection reset by peer How can I trace quickly which pod has IP 10.2.11.253? And how can I know which request that was, from which pod to which pod? Or can we change the kube-proxy log level to verbose or debug? I got another errors, same connection reset error, but the IP is a node's IP E0107 21:52:53.483363 1 proxysocket.go:160] I/O error: read tcp 192.168.166.180:11732: connection reset by peer 192.168.166.x is my kubernetes node subnet, but how can kube-proxy forwards request to a node IP? I'm using kubelet 1.0.1 and CoreOS v773.1.0 (docker 1.7.1, kernel 4.1.5) as my cluster nodes. Thanks for any help! ### Response: In my circumstance, it was because kube-proxy (v1.1.4) was missing the --proxy-mode=iptables flag. Evidently in 1.1.4, the default is something other than iptables, and specifying that flag made the logs immediately stop spewing those messages.
I got errors in my kube-proxy: E0107 21:48:57.738867 1 proxysocket.go:160] I/O error: read tcp 10.2.11.253:37568: connection reset by peer How can I trace quickly which pod has IP 10.2.11.253? And how can I know which request that was, from which pod to which pod? Or can we change the kube-proxy log level to verbose or debug? I got another errors, same connection reset error, but the IP is a node's IP E0107 21:52:53.483363 1 proxysocket.go:160] I/O error: read tcp 192.168.166.180:11732: connection reset by peer 192.168.166.x is my kubernetes node subnet, but how can kube-proxy forwards request to a node IP? I'm using kubelet 1.0.1 and CoreOS v773.1.0 (docker 1.7.1, kernel 4.1.5) as my cluster nodes. Thanks for any help!
Unfortunately not all of the various components' command line flags are modifiable when starting a GKE cluster. If you're just trying to run a one-off load test, you could manually modify the flags passed to the Kubelet on each node, but since that flag isn't even controllable by Kubernetes's Salt templates, there isn't even an option to control it with an environment variable. The value was chosen due to performance limitations and will be drastically bumped up (to 100) in version 1.2 of Kubernetes, which is scheduled for release in March.
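For a manual experiment, the flag itself looks like this - where the kubelet's startup arguments live varies by node image, so treat that as something to find on your nodes:

kubelet <existing flags> --max-pods=100

After editing the kubelet's arguments on a node, restart the kubelet for the new limit to take effect; note GKE may reset such hand edits when nodes are recreated.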
###Instruction: I am trying to run a load test on GKE, which means that I want to run a high number of tasks, most of which require very little resources. However, most of them fail to schedule saying "Failed for reason PodExceedsMaxPodNumber and possibly others". The number of pods per node is governed by the max-pods flag in Kubelet, and the default is 40. This is clearly too low for what I am trying to do – my CPU utilization doesn't go over 30%. Is there any way I can change this setting when creating a Kubernetes cluster using GKE? ### Response: Unfortunately not all of the various components' command line flags are modifiable when starting a GKE cluster. If you're just trying to run a one-off load test, you could manually modify the flags passed to the Kubelet on each node, but since that flag isn't even controllable by Kubernetes's Salt templates, there isn't even an option to control it with an environment variable. The value was chosen due to performance limitations and will be drastically bumped up (to 100) in version 1.2 of Kubernetes, which is scheduled for release in March.
I am trying to run a load test on GKE, which means that I want to run a high number of tasks, most of which require very little resources. However, most of them fail to schedule saying "Failed for reason PodExceedsMaxPodNumber and possibly others". The number of pods per node is governed by the max-pods flag in Kubelet, and the default is 40. This is clearly too low for what I am trying to do – my CPU utilization doesn't go over 30%. Is there any way I can change this setting when creating a Kubernetes cluster using GKE?
I've seen people use a mix of strategies, but the former (carving into tiers of sizes) is a good place to start. You can use an annotation to target a PV to a PVC if you are planning the app and PV at the same time, but if you aren't using dynamic provisioning the tiered sizes will at least give you a chance to split those up. Mostly this comes down to planning what users need - NFS at least has the benefit that you can lie about size up front and manage quota on the backend. I know of a few people who have done very very small NFS volumes (100M) and simply scaled those up as necessary.
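A sketch of one PV in the small tier - the server and path are placeholders, and each PV points at its own exported subdirectory of the share:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvsmall01
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /storage/pvsmall01   # one subdirectory of the 100GB share per PV

Repeat for pvsmall02-06 and a couple of 20Gi pvlarge volumes, adjusting metadata.name, spec.capacity.storage, and the path per volume.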
###Instruction: This question follows on from this one regarding PV, PVC's and shared volumes. I'm trying to get my head around what the best practices are regarding the provisioning of PV's for a NFS shared volume in a K8/OpenShift cluster. As an example, let's say I have a 100GB NFS volume shared at /storage. What is the recommended way to carve that volume up as PV's? Create a group of say 2 x 20GB (pvlarge01 - 02) and 6 x 10GB (pvsmall01 - 06) PV's which can support disparate large and small claims or, create one big PV of 100GB capacity and then just have one PVC that get's shared among pods (this feels wrong...) or ? The fact that you cannot target PVC's to specific PV's (well, until PersistentVolumeSelector is available) freaks me out a bit, in that my natural inclination is to want to target PVC's to specific PV's based on application need. Would love to hear how the community is handling this in production environments? ### Response: I've seen people use a mix of strategies, but the former (carving into tiers of sizes) is a good place to start. You can use an annotation to target a PV to a PVC if you are planning app and PV at the same time, but if you aren't using dynamic provisioning the tiered sizes will at least give you a chance to split those up. Mostly this comes down to planning what users need - NFS at least benefits that you can lie about size up front ant manage quota on the backend. I know of a few people who have done very very small NFS volumes (100M) and simply scaled those up as necessary.
This question follows on from this one regarding PV, PVC's and shared volumes. I'm trying to get my head around what the best practices are regarding the provisioning of PV's for a NFS shared volume in a K8/OpenShift cluster. As an example, let's say I have a 100GB NFS volume shared at /storage. What is the recommended way to carve that volume up as PV's? Create a group of say 2 x 20GB (pvlarge01 - 02) and 6 x 10GB (pvsmall01 - 06) PV's which can support disparate large and small claims or, create one big PV of 100GB capacity and then just have one PVC that get's shared among pods (this feels wrong...) or ? The fact that you cannot target PVC's to specific PV's (well, until PersistentVolumeSelector is available) freaks me out a bit, in that my natural inclination is to want to target PVC's to specific PV's based on application need. Would love to hear how the community is handling this in production environments?
You'll have to connect the new master to the current etcd cluster, or create a new etcd cluster from a snapshot of the old one, in order to preserve state of the cluster. Aside from that, you have to ensure that the nodes point to the new master's IP address. I also suggest looking at HA masters if you are running a version greater than 1.0.X http://kubernetes.io/v1.1/docs/admin/high-availability.html.
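A hedged sketch of the snapshot route for the etcd2 era - the paths are placeholders, and the exact procedure depends on your etcd version:

etcdctl backup --data-dir /var/lib/etcd --backup-dir /var/lib/etcd-backup
# copy /var/lib/etcd-backup to the new master, then start etcd there
# with --force-new-cluster so it forms a fresh single-member cluster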
###Instruction: How do I replace a downed master node? In particular, how do you replace a node on AWS if you are using the kubernetes tools? If I restart the node, it doesn't start correctly, if I clone the node, it definitely does not. ### Response: You'll have to connect the new master to the current etcd cluster, or create a new etcd cluster from a snapshot of the old one, in order to preserve state of the cluster. Aside from that, you have to ensure that the nodes point to the new master's IP address. I also suggest looking at HA masters if you are running a version greater than 1.0.X http://kubernetes.io/v1.1/docs/admin/high-availability.html.
How do I replace a downed master node? In particular, how do you replace a node on AWS if you are using the kubernetes tools? If I restart the node, it doesn't start correctly, if I clone the node, it definitely does not.
You can always force delete a pod by providing kubectl delete pod NAME --grace-period=0. You can also delete the node (if you don't plan on bringing it back) with kubectl delete node NAME and the pods should get cleaned up.
###Instruction: I have created a Kubernetes cluster in AWS platform and using DaemonSets to create the Pod. After an instance is terminated, the corresponding pod is still in Terminating states. Anyone have idea of cleaning up such Terminating pod in kubernetes master? ### Response: You can always force delete a pod by providing kubectl delete pod NAME --grace-period=0. You can also delete the node (if you don't plan on bringing it back) with kubectl delete node NAME and the pods should get cleaned up.
I have created a Kubernetes cluster in AWS platform and using DaemonSets to create the Pod. After an instance is terminated, the corresponding pod is still in Terminating states. Anyone have idea of cleaning up such Terminating pod in kubernetes master?
I've got an endpoint that is configured with the GlusterFS IP addresses. I know the posted link; I've followed it, but the result is in my first post!
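For reference, the shape that endpoints object usually takes - the IPs are placeholders for your Gluster servers, and the port value is required by the API but not actually used for the mount:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.240.106.152
  ports:
  - port: 1
- addresses:
  - ip: 10.240.79.157
  ports:
  - port: 1

If the endpoints look right, the next things to check are that the path in the pod spec matches gluster volume info exactly and what the kubelet log (or /var/log/glusterfs on the node) says about why the mount returned exit status 1.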
###Instruction: i've a kubernetes cluster with a master node and 3 minions, i've already a glusterfs cluster, every node of kubernetes cluster have glusterfs-client installed and working. i'm trying to run a pod ( a simple mysql ) mounting /var/lib/mysql on glusterfs but i see: Image: mysql:5.6 is ready, container is creating i run: kubectl get event i see: Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql-9ym10 Pod scheduled {scheduler } Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql ReplicationController successfulCreate {replication-controller } Created pod: mysql-9ym10 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedMount {kubelet nodeXX} Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedSync {kubelet nodeXX} Error syncing pod, skipping: exit status 1 if i run kubectl describe pod mysql-9ym10 i see: Name: mysql-9ym10 Namespace: default Image(s): mysql:5.6 Node: nodeXX/nodeXX Labels: app=mysql Status: Pending Reason: Message: IP: Replication Controllers: mysql (1/1 replicas created) Containers: mysql: Image: mysql:5.6 State: Waiting Reason: Image: mysql:5.6 is ready, container is creating Ready: False Restart Count: 0 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 {scheduler } scheduled Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedMount Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedSync Error syncing pod, skipping: exit status 1 this is the yaml file for container: apiVersion: v1 kind: ReplicationController metadata: name: mysql spec: replicas: 1 selector: app: mysql template: metadata: name: mysql labels: app: mysql spec: containers: - name: mysql image: mysql:5.6 ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD value: password volumeMounts: - mountPath: /var/lib/mysql name: glusterfsvol volumes: - glusterfs: endpoints: glusterfs-cluster path: glustervolume readOnly: false name: glusterfsvol ### Response: i've got and endpoint that is configured with glusterfs ip addresses. i know the posted link, i've followed it but the result is on my first post!
i've a kubernetes cluster with a master node and 3 minions, i've already a glusterfs cluster, every node of kubernetes cluster have glusterfs-client installed and working. i'm trying to run a pod ( a simple mysql ) mounting /var/lib/mysql on glusterfs but i see: Image: mysql:5.6 is ready, container is creating i run: kubectl get event i see: Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql-9ym10 Pod scheduled {scheduler } Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql ReplicationController successfulCreate {replication-controller } Created pod: mysql-9ym10 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedMount {kubelet nodeXX} Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedSync {kubelet nodeXX} Error syncing pod, skipping: exit status 1 if i run kubectl describe pod mysql-9ym10 i see: Name: mysql-9ym10 Namespace: default Image(s): mysql:5.6 Node: nodeXX/nodeXX Labels: app=mysql Status: Pending Reason: Message: IP: Replication Controllers: mysql (1/1 replicas created) Containers: mysql: Image: mysql:5.6 State: Waiting Reason: Image: mysql:5.6 is ready, container is creating Ready: False Restart Count: 0 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 {scheduler } scheduled Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedMount Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedSync Error syncing pod, skipping: exit status 1 this is the yaml file for container: apiVersion: v1 kind: ReplicationController metadata: name: mysql spec: replicas: 1 selector: app: mysql template: metadata: name: mysql labels: app: mysql spec: containers: - name: mysql image: mysql:5.6 ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD value: password volumeMounts: - mountPath: /var/lib/mysql name: glusterfsvol volumes: - glusterfs: endpoints: glusterfs-cluster path: glustervolume readOnly: false name: glusterfsvol
Is there some way I can signal that a pod being regularly restarted is not a crash loop, but is happening by design? Not that I know of. Or is the only way to solve this to turn the sidecar container into a daemon that never exits? That would be my suggested solution.
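Concretely, the sidecar script just needs to loop instead of exiting - a sketch, where the backup command and interval are placeholders:

#!/bin/bash
# Run the backup on a fixed interval and never exit, so the container
# is never restarted and the pod never enters CrashLoopBackOff.
while true; do
  /usr/local/bin/run-backup.sh   # your existing backup command
  sleep 900                      # 15 minutes between runs
done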
###Instruction: I was running a cluster in Kubernetes 1.0, and I had a few containers I wanted to run periodically as a sidecar container in a pod--usually things like pushing or pulling backups. I did this by building a pod with the container that had data I wanted to back up, and the sidecar container for backing it up. The sidecar container was a basic bash script that would execute the backup command, then sleep for however long (say 15 minutes) I wanted to wait between backups, and finally exit with a 0 status code. In 1.0, this worked like a charm. My backup containers were simple and not tied to being run as a daemon; they could be executed almost as a standalone command and work as expected, but the monitor kept them alive and so kept them in a loop. After upgrading to 1.1, I noticed these pods all kept getting put into a CrashLoopBackOff state, which meant that their restarts got delayed. This would have been fine for the sidecar container, but the container producing data was also unavailable during this time, which surprised me. Is there some way I can signal that a pod being regularly restarted is not a crash loop, but is happening by design? Or is the only way to solve this to turn the sidecar container into a daemon that never exits? ### Response: Is there some way I can signal that a pod being regularly restarted is not a crash loop, but is happening by design? Not that I know of. Or is the only way to solve this to turn the sidecar container into a daemon that never exits? This would be the my suggested solution.
I was running a cluster in Kubernetes 1.0, and I had a few containers I wanted to run periodically as a sidecar container in a pod--usually things like pushing or pulling backups. I did this by building a pod with the container that had data I wanted to back up, and the sidecar container for backing it up. The sidecar container was a basic bash script that would execute the backup command, then sleep for however long (say 15 minutes) I wanted to wait between backups, and finally exit with a 0 status code. In 1.0, this worked like a charm. My backup containers were simple and not tied to being run as a daemon; they could be executed almost as a standalone command and work as expected, but the monitor kept them alive and so kept them in a loop. After upgrading to 1.1, I noticed these pods all kept getting put into a CrashLoopBackOff state, which meant that their restarts got delayed. This would have been fine for the sidecar container, but the container producing data was also unavailable during this time, which surprised me. Is there some way I can signal that a pod being regularly restarted is not a crash loop, but is happening by design? Or is the only way to solve this to turn the sidecar container into a daemon that never exits?
I tried to do this using a cluster built from HEAD on GCE and didn't have the same experience. On the master instance, kubectl works. But on the nodes, it isn't configured to communicate with the master: $ kubectl version Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.8.82+c9d33ec1b4044e", GitCommit:"c9d33 ec1b4044e2a330a9b8b7a9204a99b6c6eec", GitTreeState:"clean"} The connection to the server localhost:8080 was refused - did you specify the right host or port? The reason that it works out of the box on the master is that by default kubectl tries to connect to port 8080 on localhost, which is also the insecure port used on the master (until kubernetes#13598 is resolved).
###Instruction: I created a Kube cluster using the kube-up script. If I ssh into the intances, kubectl is configured for the local cluster. My question, how is kubectl detecting the kubeconfig when a cluster is created using kube-up script? ### Response: I tried to do this using a cluster built from HEAD on GCE and didn't have the same experience. On the master instance, kubectl works. But on the nodes, it isn't configured to communicate with the master: $ kubectl version Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.8.82+c9d33ec1b4044e", GitCommit:"c9d33 ec1b4044e2a330a9b8b7a9204a99b6c6eec", GitTreeState:"clean"} The connection to the server localhost:8080 was refused - did you specify the right host or port? The reason that it works out of the box on the master is that by default kubectl tries to connect to port 8080 on localhost, which is also the insecure port used on the master (until kubernetes#13598 is resolved).
I created a Kube cluster using the kube-up script. If I ssh into the intances, kubectl is configured for the local cluster. My question, how is kubectl detecting the kubeconfig when a cluster is created using kube-up script?
Pods are automatically injected with environment variables containing information about services in their namespace. As an example, say your pod is in a namespace that also contains a service named 'foo'. The containers in your pod will be injected with: FOO_SERVICE_HOST=<host service foo is reachable at> FOO_SERVICE_PORT=<port service foo is reachable at> as well as docker-links style environment variables. Check out the docs here.
###Instruction: I have created and deployed successfully one service in kubernetes named rest_api_service. I have another project which needs to call a resource from rest_api_service. So, in order to get out the IP of my service, I tried to inject: @Inject @ServiceName("luz-person-service") String serivceUrl; But my Eclipse shows me this warn message: > No bean is eligible for injection to the injection point [JSR-346 > Β§5.2.2] Therefore I can not build it in wildfly. Currently, I am working only on pure Java EE 7 and I'm using the following dependency: <dependency> <groupId>io.fabric8</groupId> <artifactId>fabric8-cdi</artifactId> <version>2.1.11</version> </dependency> Do you guys have any idea? ### Response: Pods are automatically injected with environment variables containing information about services in their namespace. As an example, say your pod is in a namespace that also contains a service named 'foo'. The containers in your pod will be injected with: FOO_SERVICE_HOST=<host service foo is reachable at> FOO_SERVICE_PORT=<port service foo is reachable at> as well as docker-links style environment variables. Check out the docs here.
I have created and deployed successfully one service in kubernetes named rest_api_service. I have another project which needs to call a resource from rest_api_service. So, in order to get out the IP of my service, I tried to inject: @Inject @ServiceName("luz-person-service") String serivceUrl; But my Eclipse shows me this warn message: > No bean is eligible for injection to the injection point [JSR-346 > Β§5.2.2] Therefore I can not build it in wildfly. Currently, I am working only on pure Java EE 7 and I'm using the following dependency: <dependency> <groupId>io.fabric8</groupId> <artifactId>fabric8-cdi</artifactId> <version>2.1.11</version> </dependency> Do you guys have any idea?
One thing you can try, is to provide OpsCenter with a configuration file for your cluster. If you installed OpsCenter in /etc, that file should be in the /etc/opscenter/clusters directory, and it's filename should match the name of your cluster (with underscores for special characters and spaces). The contents of that file should look something like this: $ cat clusters/DEV_LOAD_TEST.conf [jmx] username = password = port = 7199 [kerberos_client_principals] [kerberos] [agents] [kerberos_hostnames] [kerberos_services] [cassandra] username = devadmin seed_hosts = 10.0.0.4, 10.0.0.9, 10.0.0.6 api_port = 9160 password = reindeerflotilla See Dennis' link above for more detailed config information. Note, that link is for OpsCenter 5.1, so be sure to double-check your version when looking up DataStax documentation.
###Instruction: I'm using Opscenter to manage my Cassandra cluster. Opscenter is started after my Cassandra cluster is already running. When opening Opscenter on my browser at port 8888 I have to manually give the Cassandra master's IP. Due to my deployment (Kubernetes and Docker) it's a schlep to get the IP of the Cassandra master. Is there a config I can edit to automatically connect to the master at startup? I am able to find the master IP using a bash command - I just want to specify it in a config file. ### Response: One thing you can try, is to provide OpsCenter with a configuration file for your cluster. If you installed OpsCenter in /etc, that file should be in the /etc/opscenter/clusters directory, and it's filename should match the name of your cluster (with underscores for special characters and spaces). The contents of that file should look something like this: $ cat clusters/DEV_LOAD_TEST.conf [jmx] username = password = port = 7199 [kerberos_client_principals] [kerberos] [agents] [kerberos_hostnames] [kerberos_services] [cassandra] username = devadmin seed_hosts = 10.0.0.4, 10.0.0.9, 10.0.0.6 api_port = 9160 password = reindeerflotilla See Dennis' link above for more detailed config information. Note, that link is for OpsCenter 5.1, so be sure to double-check your version when looking up DataStax documentation.
I'm using Opscenter to manage my Cassandra cluster. Opscenter is started after my Cassandra cluster is already running. When opening Opscenter on my browser at port 8888 I have to manually give the Cassandra master's IP. Due to my deployment (Kubernetes and Docker) it's a schlep to get the IP of the Cassandra master. Is there a config I can edit to automatically connect to the master at startup? I am able to find the master IP using a bash command - I just want to specify it in a config file.
It's tough to say exactly without knowing more about your app, but the two containers definitely can't use the same port if they're part of the same pod. In kubernetes, each pod gets its own IP address, but each container in the pod shares that same IP address. That's why you can't have more than one of them using the same port unless you split them into separate pods. To get more info, I'd recommend using the kubectl logs [pod] [optional container name] command, which can be used to get the stdout/stderr from a container. The -p flag can be used to get the logs from the most recently failed container.
###Instruction: In GKE, I have a pod with two containers. They use the same image, and the only difference is that I am passing them slightly different flags. One runs fine, the other goes in a crash loop. How can I debug the reason for the failure? My pod definition is apiVersion: v1 kind: ReplicationController metadata: name: doorman-client spec: replicas: 10 selector: app: doorman-client template: metadata: name: doorman-client labels: app: doorman-client spec: containers: - name: doorman-client-proportional resources: limits: cpu: 10m image: gcr.io/google.com/doorman/doorman-client:v0.1.1 command: - client - -port=80 - -count=50 - -initial_capacity=15 - -min_capacity=5 - -max_capacity=2000 - -increase_chance=0.1 - -decrease_chance=0.05 - -step=5 - -resource=proportional - -addr=$(DOORMAN_SERVICE_HOST):$(DOORMAN_SERVICE_PORT_GRPC) - -vmodule=doorman_client=2 - --logtostderr ports: - containerPort: 80 name: http - name: doorman-client-fair resources: limits: cpu: 10m image: gcr.io/google.com/doorman/doorman-client:v0.1.1 command: - client - -port=80 - -count=50 - -initial_capacity=15 - -min_capacity=5 - -max_capacity=2000 - -increase_chance=0.1 - -decrease_chance=0.05 - -step=5 - -resource=fair - -addr=$(DOORMAN_SERVICE_HOST):$(DOORMAN_SERVICE_PORT_GRPC) - -vmodule=doorman_client=2 - --logtostderr ports: - containerPort: 80 name: http kubectl describe gives me the following: 6:06 [0] (szopa szopa-macbookpro):~/GOPATH/src/github.com/youtube/doorman$ kubectl describe pod doorman-client-tylba Name: doorman-client-tylba Namespace: default Image(s): gcr.io/google.com/doorman/doorman-client:v0.1.1,gcr.io/google.com/doorman/doorman-client:v0.1.1 Node: gke-doorman-loadtest-d75f7d0f-node-k9g6/10.240.0.4 Start Time: Sun, 21 Feb 2016 16:05:42 +0100 Labels: app=doorman-client Status: Running Reason: Message: IP: 10.128.4.182 Replication Controllers: doorman-client (10/10 replicas created) Containers: doorman-client-proportional: Container ID: docker://0bdcb8269c5d15a4f99ccc0b0ee04bf3e9fd0db9fd23e9c0661e06564e9105f7 Image: gcr.io/google.com/doorman/doorman-client:v0.1.1 Image ID: docker://a603248608898591c84216dd3172aaa7c335af66a57fe50fd37a42394d5631dc QoS Tier: cpu: Guaranteed Limits: cpu: 10m Requests: cpu: 10m State: Running Started: Sun, 21 Feb 2016 16:05:42 +0100 Ready: True Restart Count: 0 Environment Variables: doorman-client-fair: Container ID: docker://92fea92f1307b943d0ea714441417d4186c5ac6a17798650952ea726d18dba68 Image: gcr.io/google.com/doorman/doorman-client:v0.1.1 Image ID: docker://a603248608898591c84216dd3172aaa7c335af66a57fe50fd37a42394d5631dc QoS Tier: cpu: Guaranteed Limits: cpu: 10m Requests: cpu: 10m State: Running Started: Sun, 21 Feb 2016 16:06:03 +0100 Last Termination State: Terminated Reason: Error Exit Code: 0 Started: Sun, 21 Feb 2016 16:05:43 +0100 Finished: Sun, 21 Feb 2016 16:05:44 +0100 Ready: False Restart Count: 2 Environment Variables: Conditions: Type Status Ready False Volumes: default-token-ihani: Type: Secret (a secret that should populate this volume) SecretName: default-token-ihani Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 29s 29s 1 {scheduler } Scheduled Successfully assigned doorman-client-tylba to gke-doorman-loadtest-d75f7d0f-node-k9g6 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine 29s 29s 1 {kubelet 
gke-doorman-loadtest-d75f7d0f-node-k9g6} implicitly required container POD Created Created with docker id 5013851c67d9 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} implicitly required container POD Started Started with docker id 5013851c67d9 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-proportional} Created Created with docker id 0bdcb8269c5d 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-proportional} Started Started with docker id 0bdcb8269c5d 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Created Created with docker id ed0928176958 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Started Started with docker id ed0928176958 28s 28s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Created Created with docker id 0a73290085b6 28s 28s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Started Started with docker id 0a73290085b6 18s 18s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Backoff Back-off restarting failed docker container 8s 8s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Started Started with docker id 92fea92f1307 29s 8s 4 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Pulled Container image "gcr.io/google.com/doorman/doorman-client:v0.1.1" already present on machine 8s 8s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Created Created with docker id 92fea92f1307 As you can see, the exit code is zero, with the message being "Error", which is not super helpful. I tried: changing the order of the definitions (firs one always runs, second one always fails). changing the used ports to be different (no effect) changing the name of the ports to be different (no effect). ### Response: It's tough to say exactly without knowing more about your app, but the two containers definitely can't use the same port if they're part of the same pod. In kubernetes, each pod gets its own IP address, but each container in the pod shares that same IP address. That's why you can't have more than one of them using the same port unless you split them into separate pods. To get more info, I'd recommend using the kubectl logs [pod] [optional container name] command, which can be used to get the stdout/stderr from a container. The -p flag can be used to get the logs from the most recently failed container.
In GKE, I have a pod with two containers. They use the same image, and the only difference is that I am passing them slightly different flags. One runs fine, the other goes in a crash loop. How can I debug the reason for the failure? My pod definition is apiVersion: v1 kind: ReplicationController metadata: name: doorman-client spec: replicas: 10 selector: app: doorman-client template: metadata: name: doorman-client labels: app: doorman-client spec: containers: - name: doorman-client-proportional resources: limits: cpu: 10m image: gcr.io/google.com/doorman/doorman-client:v0.1.1 command: - client - -port=80 - -count=50 - -initial_capacity=15 - -min_capacity=5 - -max_capacity=2000 - -increase_chance=0.1 - -decrease_chance=0.05 - -step=5 - -resource=proportional - -addr=$(DOORMAN_SERVICE_HOST):$(DOORMAN_SERVICE_PORT_GRPC) - -vmodule=doorman_client=2 - --logtostderr ports: - containerPort: 80 name: http - name: doorman-client-fair resources: limits: cpu: 10m image: gcr.io/google.com/doorman/doorman-client:v0.1.1 command: - client - -port=80 - -count=50 - -initial_capacity=15 - -min_capacity=5 - -max_capacity=2000 - -increase_chance=0.1 - -decrease_chance=0.05 - -step=5 - -resource=fair - -addr=$(DOORMAN_SERVICE_HOST):$(DOORMAN_SERVICE_PORT_GRPC) - -vmodule=doorman_client=2 - --logtostderr ports: - containerPort: 80 name: http kubectl describe gives me the following: 6:06 [0] (szopa szopa-macbookpro):~/GOPATH/src/github.com/youtube/doorman$ kubectl describe pod doorman-client-tylba Name: doorman-client-tylba Namespace: default Image(s): gcr.io/google.com/doorman/doorman-client:v0.1.1,gcr.io/google.com/doorman/doorman-client:v0.1.1 Node: gke-doorman-loadtest-d75f7d0f-node-k9g6/10.240.0.4 Start Time: Sun, 21 Feb 2016 16:05:42 +0100 Labels: app=doorman-client Status: Running Reason: Message: IP: 10.128.4.182 Replication Controllers: doorman-client (10/10 replicas created) Containers: doorman-client-proportional: Container ID: docker://0bdcb8269c5d15a4f99ccc0b0ee04bf3e9fd0db9fd23e9c0661e06564e9105f7 Image: gcr.io/google.com/doorman/doorman-client:v0.1.1 Image ID: docker://a603248608898591c84216dd3172aaa7c335af66a57fe50fd37a42394d5631dc QoS Tier: cpu: Guaranteed Limits: cpu: 10m Requests: cpu: 10m State: Running Started: Sun, 21 Feb 2016 16:05:42 +0100 Ready: True Restart Count: 0 Environment Variables: doorman-client-fair: Container ID: docker://92fea92f1307b943d0ea714441417d4186c5ac6a17798650952ea726d18dba68 Image: gcr.io/google.com/doorman/doorman-client:v0.1.1 Image ID: docker://a603248608898591c84216dd3172aaa7c335af66a57fe50fd37a42394d5631dc QoS Tier: cpu: Guaranteed Limits: cpu: 10m Requests: cpu: 10m State: Running Started: Sun, 21 Feb 2016 16:06:03 +0100 Last Termination State: Terminated Reason: Error Exit Code: 0 Started: Sun, 21 Feb 2016 16:05:43 +0100 Finished: Sun, 21 Feb 2016 16:05:44 +0100 Ready: False Restart Count: 2 Environment Variables: Conditions: Type Status Ready False Volumes: default-token-ihani: Type: Secret (a secret that should populate this volume) SecretName: default-token-ihani Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 29s 29s 1 {scheduler } Scheduled Successfully assigned doorman-client-tylba to gke-doorman-loadtest-d75f7d0f-node-k9g6 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} implicitly 
required container POD Created Created with docker id 5013851c67d9 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} implicitly required container POD Started Started with docker id 5013851c67d9 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-proportional} Created Created with docker id 0bdcb8269c5d 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-proportional} Started Started with docker id 0bdcb8269c5d 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Created Created with docker id ed0928176958 29s 29s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Started Started with docker id ed0928176958 28s 28s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Created Created with docker id 0a73290085b6 28s 28s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Started Started with docker id 0a73290085b6 18s 18s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Backoff Back-off restarting failed docker container 8s 8s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Started Started with docker id 92fea92f1307 29s 8s 4 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Pulled Container image "gcr.io/google.com/doorman/doorman-client:v0.1.1" already present on machine 8s 8s 1 {kubelet gke-doorman-loadtest-d75f7d0f-node-k9g6} spec.containers{doorman-client-fair} Created Created with docker id 92fea92f1307 As you can see, the exit code is zero, with the message being "Error", which is not super helpful. I tried: changing the order of the definitions (firs one always runs, second one always fails). changing the used ports to be different (no effect) changing the name of the ports to be different (no effect).
GKE clusters actually do have beta features. But Deployments were an alpha feature in the 1.1 release (which is the current supported release) and are graduating to beta for the upcoming 1.2 release. Once they are a beta feature, you will be able to use them in GKE. The rolling update command is really just syntactic sugar around first creating a new replication controller, scaling it up by one, scaling the existing replication controller down by one, and repeating until the old replication controller has size zero. You can do this yourself at a much faster rate if going one pod at a time is too slow. You may also want to file a feature request on github to add a flag to the rolling update command to update multiple pods in parallel.
###Instruction: I need to deploy a web application in gke. The application consists of two pods and needs to scale to ~30 replicas. Rolling updates take ~30s/pod in our setup. Old title: How do I enable the deployments API on GKE cluster? I tried to use deployments as they allow to update multiple pods in parallel. But, as nshttpd pointed out in #google-containers on the kubernetes slack: I may be wrong, but GKE clusters don’t have beta features I thought. so if you want Deployments you’ll have to spin up your own cluster. ### Response: GKE clusters actually do have beta features. But Deployments were an alpha feature in the 1.1 release (which is the current supported release) and are graduating to beta for the upcoming 1.2 release. Once they are a beta feature, you will be able to use them in GKE. The rolling update command is really just syntactic sugar around first creating a new replication controller, scaling it up by one, scaling the existing replication controller down by one, and repeating until the old replication controller has size zero. You can do this yourself at a much faster rate if going one pod at a time is too slow. You may also want to file a feature request on github to add a flag to the rolling update command to update multiple pods in parallel.
I need to deploy a web application in gke. The application consists of two pods and needs to scale to ~30 replicas. Rolling updates take ~30s/pod in our setup. Old title: How do I enable the deployments API on GKE cluster? I tried to use deployments as they allow to update multiple pods in parallel. But, as nshttpd pointed out in #google-containers on the kubernetes slack: I may be wrong, but GKE clusters don’t have beta features I thought. so if you want Deployments you’ll have to spin up your own cluster.
You can put a service in front of those pods by giving each pod some label (for example etcd-service=true), and making a kubernetes service with a selector that matches that label. Use the DNS add-on, and you will get a DNS A record for each endpoint in the service. You can read more in the docs here.
###Instruction: I am trying to implement something like the etcd services that uses the consensus algorithm (https://raft.github.io/). In this case, multiple instances of the etcd services need to be aware of each other. For this to happen, if we have 3 pods of etcd instance in a replication controller, the pods need to be able to talk to each other (at least be able to know the IP of self and all the other pods). Is there a way of achieving this in the replication controller or pod specs without having to use the kubernetes API in the pod container? ### Response: You can put a service in front of those pods by giving each pod some label (for example etcd-service=true), and making a kubernetes service with a selector that matches that label. Use the DNS add-on, and you will get a DNS A record for each endpoint in the service. You can read more in the docs here.
I am trying to implement something like the etcd services that uses the consensus algorithm (https://raft.github.io/). In this case, multiple instances of the etcd services need to be aware of each other. For this to happen, if we have 3 pods of etcd instance in a replication controller, the pods need to be able to talk to each other (at least be able to know the IP of self and all the other pods). Is there a way of achieving this in the replication controller or pod specs without having to use the kubernetes API in the pod container?
Regarding Mesos Masters and Zookeeper instances, to have an even number of nodes is not really a good idea, because of the quorum mechanisms involved. My suggestion would be running three nodes of both services. I assume you want to run this locally? If so, I guess it would make sense to use a preconfigured Vagrant project such as https://github.com/tobilg/coreos-mesos-cluster This launches a three node CoreOS cluster with all the Mesos/Zookeeper services already installed, and etcd and flanneld are also already installed on CoreOS itself. This would mean that you only would have to do the following steps once the cluster is launched: http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html#deploy-kubernetes-mesos respectively https://coreos.com/kubernetes/docs/latest/getting-started.html http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html#start-kubernetes-mesos-services
###Instruction: Hi I am setting up Kubernetes on top of Mesos by following http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html and this is how my current test lab looks like 2 Numbers of mesos master with zookeeper 2 Numbers of mesos slaves with docker and flannel installed Additional mesos slave running Kubernetes-mesos and kubernetes srvices A server with ETCD service which supports both flannel and kubernetes Can you please let me know if this is enough ? Below are the two questions I have Do we really need to have the kubernetes master server here to be configured as a mesos slave? Do we need to install kubernetes package on mesos slaves as well ? The url talks about package installation and configuration only on the kubernetes master..With out kubernetes running on the slaves can the master create pods/services etc on the slaves through mesos scheduler? ### Response: Regarding Mesos Masters and Zookeeper instances, to have an even number of nodes is not really a good idea, because of the quorum mechanisms involved. My suggestion would be running three nodes of both services. I assume you want to run this locally? If so, I guess it would make sense to use a preconfigured Vagrant project such as https://github.com/tobilg/coreos-mesos-cluster This launches a three node CoreOS cluster with all the Mesos/Zookeeper services already installed, and etcd and flanneld are also already installed on CoreOS itself. This would mean that you only would have to do the following steps once the cluster is launched: http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html#deploy-kubernetes-mesos respectively https://coreos.com/kubernetes/docs/latest/getting-started.html http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html#start-kubernetes-mesos-services
Hi I am setting up Kubernetes on top of Mesos by following http://kubernetes.io/v1.1/docs/getting-started-guides/mesos.html and this is how my current test lab looks like 2 Numbers of mesos master with zookeeper 2 Numbers of mesos slaves with docker and flannel installed Additional mesos slave running Kubernetes-mesos and kubernetes srvices A server with ETCD service which supports both flannel and kubernetes Can you please let me know if this is enough ? Below are the two questions I have Do we really need to have the kubernetes master server here to be configured as a mesos slave? Do we need to install kubernetes package on mesos slaves as well ? The url talks about package installation and configuration only on the kubernetes master..With out kubernetes running on the slaves can the master create pods/services etc on the slaves through mesos scheduler?
It should work. For Host Path volumes, kube doesn't enforce any policy. But if your FUSE daemon restarts, the mount will become inaccessible. AFAIK, kube does not support mount propagation for volumes.
###Instruction: I know that Kubernetes does not support mounting GCS buckets inside a Pod. But If I use GoogleFuse to mount a GCS bucket on the Node and then expose it to a Pod as a host path will that work? ### Response: It should work. For Host Path volumes, kube doesn't enforce any policy. But if your FUSE daemon restarts, the mount will become inaccessible. AFAIK, kube does not support mount propagation for volumes.
I know that Kubernetes does not support mounting GCS buckets inside a Pod. But If I use GoogleFuse to mount a GCS bucket on the Node and then expose it to a Pod as a host path will that work?
On first: To use a GlusterFS you don't need to install glusterfs-client on kubernetes node. Kubernetes have the volume mounting option for glusterfs by default. To use a glusterfs with kubernetes you need to things. a working glusterfs server. a running volume in the glusterfs server. I assume you have those. If anyone don't then create a glusterfs server and start your volumes with the following commands $ gluster volume create <volume-name> replica 2 transport tcp \ peer1:/directory \ peer2:/directory \ force $ gluster volume start <vonlume-name> $ sudo gluster volume info if this is ok, you need an kubernetes endpoint to use with the pod. as far an example a end point is like this. kind: Endpoints apiVersion: v1 metadata: name: glusterfs subsets: - addresses: - ip: peer1 ports: - port: 1 - addresses: - ip: peer2 ports: - port: 1 And at third mount the gfs volume to a pod with the end point. containers: - name: mysql image: mysql:5.6 ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD value: password volumeMounts: - mountPath: /var/lib/mysql name: glusterfsvol volumes: - glusterfs: endpoints: glusterfs-cluster path: <volume-name> name: glusterfsvol **The path must match the volume name with the glusterfs. this all should work fine.
###Instruction: i've a kubernetes cluster with a master node and 3 minions, i've already a glusterfs cluster, every node of kubernetes cluster have glusterfs-client installed and working. i'm trying to run a pod ( a simple mysql ) mounting /var/lib/mysql on glusterfs but i see: Image: mysql:5.6 is ready, container is creating i run: kubectl get event i see: Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql-9ym10 Pod scheduled {scheduler } Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql ReplicationController successfulCreate {replication-controller } Created pod: mysql-9ym10 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedMount {kubelet nodeXX} Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedSync {kubelet nodeXX} Error syncing pod, skipping: exit status 1 if i run kubectl describe pod mysql-9ym10 i see: Name: mysql-9ym10 Namespace: default Image(s): mysql:5.6 Node: nodeXX/nodeXX Labels: app=mysql Status: Pending Reason: Message: IP: Replication Controllers: mysql (1/1 replicas created) Containers: mysql: Image: mysql:5.6 State: Waiting Reason: Image: mysql:5.6 is ready, container is creating Ready: False Restart Count: 0 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 {scheduler } scheduled Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedMount Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedSync Error syncing pod, skipping: exit status 1 this is the yaml file for container: apiVersion: v1 kind: ReplicationController metadata: name: mysql spec: replicas: 1 selector: app: mysql template: metadata: name: mysql labels: app: mysql spec: containers: - name: mysql image: mysql:5.6 ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD value: password volumeMounts: - mountPath: /var/lib/mysql name: glusterfsvol volumes: - glusterfs: endpoints: glusterfs-cluster path: glustervolume readOnly: false name: glusterfsvol ### Response: On first: To use a GlusterFS you don't need to install glusterfs-client on kubernetes node. Kubernetes have the volume mounting option for glusterfs by default. To use a glusterfs with kubernetes you need to things. a working glusterfs server. a running volume in the glusterfs server. I assume you have those. If anyone don't then create a glusterfs server and start your volumes with the following commands $ gluster volume create <volume-name> replica 2 transport tcp \ peer1:/directory \ peer2:/directory \ force $ gluster volume start <vonlume-name> $ sudo gluster volume info if this is ok, you need an kubernetes endpoint to use with the pod. as far an example a end point is like this. kind: Endpoints apiVersion: v1 metadata: name: glusterfs subsets: - addresses: - ip: peer1 ports: - port: 1 - addresses: - ip: peer2 ports: - port: 1 And at third mount the gfs volume to a pod with the end point. 
containers: - name: mysql image: mysql:5.6 ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD value: password volumeMounts: - mountPath: /var/lib/mysql name: glusterfsvol volumes: - glusterfs: endpoints: glusterfs-cluster path: <volume-name> name: glusterfsvol **The path must match the volume name with the glusterfs. this all should work fine.
i've a kubernetes cluster with a master node and 3 minions, i've already a glusterfs cluster, every node of kubernetes cluster have glusterfs-client installed and working. i'm trying to run a pod ( a simple mysql ) mounting /var/lib/mysql on glusterfs but i see: Image: mysql:5.6 is ready, container is creating i run: kubectl get event i see: Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql-9ym10 Pod scheduled {scheduler } Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 mysql ReplicationController successfulCreate {replication-controller } Created pod: mysql-9ym10 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedMount {kubelet nodeXX} Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:08:12 +0100 2 mysql-9ym10 Pod failedSync {kubelet nodeXX} Error syncing pod, skipping: exit status 1 if i run kubectl describe pod mysql-9ym10 i see: Name: mysql-9ym10 Namespace: default Image(s): mysql:5.6 Node: nodeXX/nodeXX Labels: app=mysql Status: Pending Reason: Message: IP: Replication Controllers: mysql (1/1 replicas created) Containers: mysql: Image: mysql:5.6 State: Waiting Reason: Image: mysql:5.6 is ready, container is creating Ready: False Restart Count: 0 Conditions: Type Status Ready False Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 18 Feb 2016 10:08:01 +0100 Thu, 18 Feb 2016 10:08:01 +0100 1 {scheduler } scheduled Successfully assigned mysql-9ym10 to nodeXX Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedMount Unable to mount volumes for pod "mysql-9ym10_default": exit status 1 Thu, 18 Feb 2016 10:08:02 +0100 Thu, 18 Feb 2016 10:10:22 +0100 15 {kubelet nodeXX} failedSync Error syncing pod, skipping: exit status 1 this is the yaml file for container: apiVersion: v1 kind: ReplicationController metadata: name: mysql spec: replicas: 1 selector: app: mysql template: metadata: name: mysql labels: app: mysql spec: containers: - name: mysql image: mysql:5.6 ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD value: password volumeMounts: - mountPath: /var/lib/mysql name: glusterfsvol volumes: - glusterfs: endpoints: glusterfs-cluster path: glustervolume readOnly: false name: glusterfsvol
EmptyDir volumes are inherently bound to the lifecycle of a single pod and can't be shared amongst pods in replication controllers or otherwise. If you want to share volumes amongst pods, the best choices right now are NFS or gluster, in a persistent volume. See an example here: https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md
###Instruction: Found this example for Kubernetes EmptyDir volume apiVersion: v1 kind: Pod metadata: name: www spec: containers: - name: nginx image: nginx volumeMounts: - mountPath: /srv/www name: www-data readOnly: true - name: git-monitor image: kubernetes/git-monitor env: - name: GIT_REPO value: http://github.com/some/repo.git volumeMounts: - mountPath: /data name: www-data volumes: - name: www-data emptyDir: {} I want to volume mount between 2 pods. I am creating these pods using 2 different Replication Controllers. The replication controllers looks like this Replication Controller 1: apiVersion: v1 kind: ReplicationController metadata: name: node-worker labels: name: node-worker spec: replicas: 1 selector: name: node-worker template: metadata: labels: name: node-worker spec: containers: - name: node-worker image: image/node-worker volumeMounts: - mountPath: /mnt/test name: deployment-volume volumes: - name: deployment-volume emptyDir: {} Replication Controller 2: apiVersion: v1 kind: ReplicationController metadata: name: node-manager labels: name: node-manager spec: replicas: 1 selector: name: node-manager template: metadata: labels: name: node-manager spec: containers: - name: node-manager image: image/node-manager volumeMounts: - mountPath: /mnt/test name: deployment-volume volumes: - name: deployment-volume emptyDir: {} Can Kubernetes emptyDir volume be used for this scenario? ### Response: EmptyDir volumes are inherently bound to the lifecycle of a single pod and can't be shared amongst pods in replication controllers or otherwise. If you want to share volumes amongst pods, the best choices right now are NFS or gluster, in a persistent volume. See an example here: https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md
Found this example for Kubernetes EmptyDir volume apiVersion: v1 kind: Pod metadata: name: www spec: containers: - name: nginx image: nginx volumeMounts: - mountPath: /srv/www name: www-data readOnly: true - name: git-monitor image: kubernetes/git-monitor env: - name: GIT_REPO value: http://github.com/some/repo.git volumeMounts: - mountPath: /data name: www-data volumes: - name: www-data emptyDir: {} I want to volume mount between 2 pods. I am creating these pods using 2 different Replication Controllers. The replication controllers looks like this Replication Controller 1: apiVersion: v1 kind: ReplicationController metadata: name: node-worker labels: name: node-worker spec: replicas: 1 selector: name: node-worker template: metadata: labels: name: node-worker spec: containers: - name: node-worker image: image/node-worker volumeMounts: - mountPath: /mnt/test name: deployment-volume volumes: - name: deployment-volume emptyDir: {} Replication Controller 2: apiVersion: v1 kind: ReplicationController metadata: name: node-manager labels: name: node-manager spec: replicas: 1 selector: name: node-manager template: metadata: labels: name: node-manager spec: containers: - name: node-manager image: image/node-manager volumeMounts: - mountPath: /mnt/test name: deployment-volume volumes: - name: deployment-volume emptyDir: {} Can Kubernetes emptyDir volume be used for this scenario?
Kubernetes does not currently reschedule pods to rebalance them across nodes, but planning has begun to include a rescheduler for this purpose in our 1.3 release in a few months.
###Instruction: We have been using AWS-ECS for scheduling our containers. The most pressing problem I see with ECS, is the 'resource fragmentation'. Say I have following task definitions/pods with respective resource requirements: POD CPU Desired Total_CPU - task1 1024 2 2048 - task2 512 4 2048 - task3 256 1 256 (Keeping only CPU requirements for simplicity) If we consider VM with available CPU=2048, we need at least 3 VMs to get above services running. Going further, for running Blue Green deployment we would theoretically need just one more VM with 2048 CPU units. However, as the deployments happen over time, the containers will be dispersed in all available VMs. This will make the Blue Green deployment take too much time (or even fail), due to unavailability of resources. So, for the Blue Green deployment to predictably work, we would need many more extra VMs (more $$$) in the cluster. I would like to know whether Kubernetes provides such an feature that will consolidate the pods to keep minimum resource fragmentation. ### Response: Kubernetes does not currently reschedule pods to rebalance them across nodes, but planning has begun to include a rescheduler for this purpose in our 1.3 release in a few months.
We have been using AWS-ECS for scheduling our containers. The most pressing problem I see with ECS, is the 'resource fragmentation'. Say I have following task definitions/pods with respective resource requirements: POD CPU Desired Total_CPU - task1 1024 2 2048 - task2 512 4 2048 - task3 256 1 256 (Keeping only CPU requirements for simplicity) If we consider VM with available CPU=2048, we need at least 3 VMs to get above services running. Going further, for running Blue Green deployment we would theoretically need just one more VM with 2048 CPU units. However, as the deployments happen over time, the containers will be dispersed in all available VMs. This will make the Blue Green deployment take too much time (or even fail), due to unavailability of resources. So, for the Blue Green deployment to predictably work, we would need many more extra VMs (more $$$) in the cluster. I would like to know whether Kubernetes provides such an feature that will consolidate the pods to keep minimum resource fragmentation.
Dataproc runs Spark on YARN, so you need to set master to 'yarn-client'. You also need to point Spark at your YARN ResourceManager, which requires a under-documented SparkConf -> Hadoop Configuration conversion. You also have to tell Spark about HDFS on the cluster, so it can stage resources for YARN. You could use Google Cloud Storage instead of HDFS, if you baked The Google Cloud Storage Connector for Hadoop into your image. Try: import pyspark conf = pyspark.SparkConf() conf.setMaster('yarn-client') conf.setAppName('My Jupyter Notebook') # 'spark.hadoop.foo.bar' sets key 'foo.bar' in the Hadoop Configuaration. conf.set('spark.hadoop.yarn.resourcemanager.address', '<spark-master-hostname>') conf.set('spark.hadoop.fs.default.name', 'hdfs://<spark-master-hostname>/') sc = pyspark.SparkContext(conf=conf) For a more permanent config, you could bake these into a local file 'core-site.xml' as described here, place that in a local directory, and set HADOOP_CONF_DIR to that directory in your environment. It's also worth noting that while being in the same Zone is important for performance, it is being in the same Network and allowing TCP between internal IP addresses in that network that allows your VMs to communicate. If you are using the default network, then the default-allow-internal firewall rule, should be sufficient. Hope that helps.
###Instruction: I don't know if this is already answered in SO but I couldn't find a solution to my problem. I have an IPython notebook running in a docker container in Google Container Engine, the container is based on this image jupyter/all-spark-notebook I have also a spark cluster created with google cloud dataproc Spark master and the notebook are running in different VMs but in the same region and zone. My problem is that I'm trying to connect to the spark master from the IPython notebook but without success. I use this snippet of code in my python notebook import pyspark conf = pyspark.SparkConf() conf.setMaster("spark://<spark-master-ip or spark-master-hostname>:7077") I just started working with spark, so I'm sure I'm missing something (authentication, security ...), What I found over there is connecting a local browser over an SSH tunnel Somebody already did this kind of set up? Thank you in advance ### Response: Dataproc runs Spark on YARN, so you need to set master to 'yarn-client'. You also need to point Spark at your YARN ResourceManager, which requires a under-documented SparkConf -> Hadoop Configuration conversion. You also have to tell Spark about HDFS on the cluster, so it can stage resources for YARN. You could use Google Cloud Storage instead of HDFS, if you baked The Google Cloud Storage Connector for Hadoop into your image. Try: import pyspark conf = pyspark.SparkConf() conf.setMaster('yarn-client') conf.setAppName('My Jupyter Notebook') # 'spark.hadoop.foo.bar' sets key 'foo.bar' in the Hadoop Configuaration. conf.set('spark.hadoop.yarn.resourcemanager.address', '<spark-master-hostname>') conf.set('spark.hadoop.fs.default.name', 'hdfs://<spark-master-hostname>/') sc = pyspark.SparkContext(conf=conf) For a more permanent config, you could bake these into a local file 'core-site.xml' as described here, place that in a local directory, and set HADOOP_CONF_DIR to that directory in your environment. It's also worth noting that while being in the same Zone is important for performance, it is being in the same Network and allowing TCP between internal IP addresses in that network that allows your VMs to communicate. If you are using the default network, then the default-allow-internal firewall rule, should be sufficient. Hope that helps.
I don't know if this is already answered in SO but I couldn't find a solution to my problem. I have an IPython notebook running in a docker container in Google Container Engine, the container is based on this image jupyter/all-spark-notebook I have also a spark cluster created with google cloud dataproc Spark master and the notebook are running in different VMs but in the same region and zone. My problem is that I'm trying to connect to the spark master from the IPython notebook but without success. I use this snippet of code in my python notebook import pyspark conf = pyspark.SparkConf() conf.setMaster("spark://<spark-master-ip or spark-master-hostname>:7077") I just started working with spark, so I'm sure I'm missing something (authentication, security ...), What I found over there is connecting a local browser over an SSH tunnel Somebody already did this kind of set up? Thank you in advance
Dirty workaround (not tested): you can scale down rc to 0 and then up to original size => it'll be "pod" restart. Or you can use 2 active(non 0 size)/passive(size 0) rc, which will be included in the same service. And you will be scaling them up/down. Tagging it means a complicated script to always remove old tagged images (useless someone has a trick here). Tagging is nice explicit process. Kubernetes Garbage collection will delete your old images automatically. Hopefully you know, that if you are using only latest tag, then rollback can be impossible. I recommend to set up tag system, for example :latest_stable, :latest_dev, :2nd_latest_stable, .... These tags will be only "pointers" and your CI will be moving them. Then you can define and script some smart registry delete tag policy, e.g. all tags older than 2nd_latest stable can be deleted safely. You know your app, so you can set up policy, which will fits your needs and release policy. Tag example - start point builds 1/2/3 (build id, git id, build time, ...) - build 1 is :production and :canary, all tags are pushed: # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE image 3 a21348af4283 37 seconds ago 125.1 MB image 2 7dda7c549d2d 50 seconds ago 125.1 MB image production e53856d910b8 58 seconds ago 125.1 MB image canary e53856d910b8 58 seconds ago 125.1 MB image 1 e53856d910b8 58 seconds ago 125.1 MB Build 2 is going to be :canary: # docker tag -f image:2 image:canary # docker push image:canary # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE image 3 a21348af4283 6 minutes ago 125.1 MB image canary 7dda7c549d2d 6 minutes ago 125.1 MB image 2 7dda7c549d2d 6 minutes ago 125.1 MB image production e53856d910b8 7 minutes ago 125.1 MB image 1 e53856d910b8 7 minutes ago 125.1 MB Tests OK, build 2 is stable - it'll be :production: # docker tag -f image:2 image:production # docker push image:production # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE image 3 a21348af4283 9 minutes ago 125.1 MB image 2 7dda7c549d2d 9 minutes ago 125.1 MB image canary 7dda7c549d2d 9 minutes ago 125.1 MB image production 7dda7c549d2d 9 minutes ago 125.1 MB image 1 e53856d910b8 10 minutes ago 125.1 MB Homework: actually build 2 is not stable -> set :production to build 1 (rollback) and :canary to build 3 (test fix in build 3). If you are using only :latest, this rollback is impossible kubectl rolling update/rollback will use explicit :id and your cleaning script can use policy: all tags older than :production can be deleted. Unfortunately I don't have experience with Kubernetes deployment.
###Instruction: I've a new Docker image and I'd like ideally to perform a smooth upgrade to it and either forget the previous deployed version or keep only the previous version but not all previously deployed versions. Kubernetes Pods will retrieve upon being restarted the latest image if it's tagged :latest or imagePullPolicy: Always. However unless the image tag changed, doing a kubectl apply or kubectl replace will not restart Pods and hence will not trigger pulling the latest image. Tagging it means a complicated script to always remove old tagged images (useless someone has a trick here). Doing a kubectl rolling-update ... --image ... is possible if there is a single container per pod only. What works and is eventually clean and always gets the latest is deleting the namespace and re-creating all pods/rc/services... How can I ask Kubernetes to use my new images nicely even if there is more than one container per Pod? ### Response: Dirty workaround (not tested): you can scale down rc to 0 and then up to original size => it'll be "pod" restart. Or you can use 2 active(non 0 size)/passive(size 0) rc, which will be included in the same service. And you will be scaling them up/down. Tagging it means a complicated script to always remove old tagged images (useless someone has a trick here). Tagging is nice explicit process. Kubernetes Garbage collection will delete your old images automatically. Hopefully you know, that if you are using only latest tag, then rollback can be impossible. I recommend to set up tag system, for example :latest_stable, :latest_dev, :2nd_latest_stable, .... These tags will be only "pointers" and your CI will be moving them. Then you can define and script some smart registry delete tag policy, e.g. all tags older than 2nd_latest stable can be deleted safely. You know your app, so you can set up policy, which will fits your needs and release policy. Tag example - start point builds 1/2/3 (build id, git id, build time, ...) - build 1 is :production and :canary, all tags are pushed: # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE image 3 a21348af4283 37 seconds ago 125.1 MB image 2 7dda7c549d2d 50 seconds ago 125.1 MB image production e53856d910b8 58 seconds ago 125.1 MB image canary e53856d910b8 58 seconds ago 125.1 MB image 1 e53856d910b8 58 seconds ago 125.1 MB Build 2 is going to be :canary: # docker tag -f image:2 image:canary # docker push image:canary # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE image 3 a21348af4283 6 minutes ago 125.1 MB image canary 7dda7c549d2d 6 minutes ago 125.1 MB image 2 7dda7c549d2d 6 minutes ago 125.1 MB image production e53856d910b8 7 minutes ago 125.1 MB image 1 e53856d910b8 7 minutes ago 125.1 MB Tests OK, build 2 is stable - it'll be :production: # docker tag -f image:2 image:production # docker push image:production # docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE image 3 a21348af4283 9 minutes ago 125.1 MB image 2 7dda7c549d2d 9 minutes ago 125.1 MB image canary 7dda7c549d2d 9 minutes ago 125.1 MB image production 7dda7c549d2d 9 minutes ago 125.1 MB image 1 e53856d910b8 10 minutes ago 125.1 MB Homework: actually build 2 is not stable -> set :production to build 1 (rollback) and :canary to build 3 (test fix in build 3). If you are using only :latest, this rollback is impossible kubectl rolling update/rollback will use explicit :id and your cleaning script can use policy: all tags older than :production can be deleted. Unfortunately I don't have experience with Kubernetes deployment.
I've a new Docker image and I'd like ideally to perform a smooth upgrade to it and either forget the previous deployed version or keep only the previous version but not all previously deployed versions. Kubernetes Pods will retrieve upon being restarted the latest image if it's tagged :latest or imagePullPolicy: Always. However unless the image tag changed, doing a kubectl apply or kubectl replace will not restart Pods and hence will not trigger pulling the latest image. Tagging it means a complicated script to always remove old tagged images (useless someone has a trick here). Doing a kubectl rolling-update ... --image ... is possible if there is a single container per pod only. What works and is eventually clean and always gets the latest is deleting the namespace and re-creating all pods/rc/services... How can I ask Kubernetes to use my new images nicely even if there is more than one container per Pod?
It's probably a region issue; I'm guessing that the bucket is created in another region than Kubernetes tries to access. Looks like the aws cmdline tool is confused about the region: aws: error: argument --region: expected one argument When it can't determine the region, it defaults to one of the us regions. EDIT: the S3 sync is triggered by script cluster/aws/util.sh. The command executed is aws s3 sync --region ${s3_bucket_location} --exact-timestamps ${local_dir} "s3://${AWS_S3_BUCKET}/${staging_path}/". You can add an echo ${s3_bucket_location} before the line above. It should give you more information on what the region is set to.
###Instruction: Downloaded Kubernetes 1.1.8 from: https://github.com/kubernetes/kubernetes/releases/download/v1.1.8/kubernetes.tar.gz Followed the instructions at: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/aws.md And got the following error: kubernetes-1.1.8 > ./kubernetes/cluster/kube-up.sh ... Starting cluster using provider: aws ... calling verify-prereqs ... calling kube-up Starting cluster using os distro: vivid Uploading to Amazon S3 Creating kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149 make_bucket: s3://kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/ A client error (NoSuchBucket) occurred when calling the GetBucketLocation operation: The specified bucket does not exist +++ Staging server tars to S3 Storage: kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/devel usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] To see help text, you can run: aws help aws <command> help aws <command> <subcommand> help aws: error: argument --region: expected one argument AWS Console showed that the bucket was created but was empty. ### Response: It's probably a region issue; I'm guessing that the bucket is created in another region than Kubernetes tries to access. Looks like the aws cmdline tool is confused about the region: aws: error: argument --region: expected one argument When it can't determine the region, it defaults to one of the us regions. EDIT: the S3 sync is triggered by script cluster/aws/util.sh. The command executed is aws s3 sync --region ${s3_bucket_location} --exact-timestamps ${local_dir} "s3://${AWS_S3_BUCKET}/${staging_path}/". You can add an echo ${s3_bucket_location} before the line above. It should give you more information on what the region is set to.
Downloaded Kubernetes 1.1.8 from: https://github.com/kubernetes/kubernetes/releases/download/v1.1.8/kubernetes.tar.gz Followed the instructions at: https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/aws.md And got the following error: kubernetes-1.1.8 > ./kubernetes/cluster/kube-up.sh ... Starting cluster using provider: aws ... calling verify-prereqs ... calling kube-up Starting cluster using os distro: vivid Uploading to Amazon S3 Creating kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149 make_bucket: s3://kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/ A client error (NoSuchBucket) occurred when calling the GetBucketLocation operation: The specified bucket does not exist +++ Staging server tars to S3 Storage: kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/devel usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] To see help text, you can run: aws help aws <command> help aws <command> <subcommand> help aws: error: argument --region: expected one argument AWS Console showed that the bucket was created but was empty.
There is not a way to enable ABAC mode on Google Container Engine. If you need fine-grained control over the parameters passed to any of the master components you have to run Kubernetes on GCE instead.
###Instruction: I would like to enable the ABAC mode for the Kubernetes Cluster I'm using in Google's Container Engine. (more specifically, I would like to restrict access to the API service for the default service account which is automatically assigned to all pods). However, since --authorization-mode=ABAC is a command line argument for kube-apiserver and since the API server is managed in Google Container Engine, I didn't find a way to enable authorization for my cluster. Is there a way to enable ABAC mode on GCE? I'm currently running Kubernetes v1.1.7 on server and nodes. ### Response: There is not a way to enable ABAC mode on Google Container Engine. If you need fine-grained control over the parameters passed to any of the master components you have to run Kubernetes on GCE instead.
I would like to enable the ABAC mode for the Kubernetes Cluster I'm using in Google's Container Engine. (more specifically, I would like to restrict access to the API service for the default service account which is automatically assigned to all pods). However, since --authorization-mode=ABAC is a command line argument for kube-apiserver and since the API server is managed in Google Container Engine, I didn't find a way to enable authorization for my cluster. Is there a way to enable ABAC mode on GCE? I'm currently running Kubernetes v1.1.7 on server and nodes.
I'm sorry, the Spark example was broken, in multiple ways. The issue: https://github.com/kubernetes/kubernetes/issues/17463 It now works, as of 2/25/2016, and is passing our continuous testing, at least at HEAD (and the next Kubernetes 1.2 release). Note that DNS is required, though it is set up by default in a number of cloud provider implementations, including GCE and GKE.
###Instruction: I'm bringing up Spark on Kubernetes according to this example: https://github.com/kubernetes/kubernetes/tree/master/examples/spark For some reason, I'm having problems getting the master to listen on :7077 for connections from worker nodes. It appears that connections aren't being proxied down from the service. If I bring the service up, then bring the master controller up with the $SPARK_MASTER_IP set to spark-master, it correctly resolves to the service IP but cannot bind the port. If I set the ip to localhost instead, it binds a local port and comes up -- since services should forward socket connections down to the pod endpoint this should be fine, so we move on. Now I bring up workers. They attempt to connect to the service IP on :7077 and cannot. It seems as if connections to the service aren't making it down to the endpoint. Except... I also have a webui service configured as in the example. If I connect to it with kubectl --proxy I can get down to the web service that's served on :8080 from spark-master, by hitting it through the webui service. Yet the nearly identically-configured spark-master service on port 7077 gives no love. If I configure the master to bind a local IP, it comes up but doesn't get connections from the service. If I configure it to bind through the service, the bind fails and it can't come up at all. I'm running out of ideas as to why this might be happening -- any assistance is appreciated. I'm happy to furnish more debugging info on request. ### Response: I'm sorry, the Spark example was broken, in multiple ways. The issue: https://github.com/kubernetes/kubernetes/issues/17463 It now works, as of 2/25/2016, and is passing our continuous testing, at least at HEAD (and the next Kubernetes 1.2 release). Note that DNS is required, though it is set up by default in a number of cloud provider implementations, including GCE and GKE.
I'm bringing up Spark on Kubernetes according to this example: https://github.com/kubernetes/kubernetes/tree/master/examples/spark For some reason, I'm having problems getting the master to listen on :7077 for connections from worker nodes. It appears that connections aren't being proxied down from the service. If I bring the service up, then bring the master controller up with the $SPARK_MASTER_IP set to spark-master, it correctly resolves to the service IP but cannot bind the port. If I set the ip to localhost instead, it binds a local port and comes up -- since services should forward socket connections down to the pod endpoint this should be fine, so we move on. Now I bring up workers. They attempt to connect to the service IP on :7077 and cannot. It seems as if connections to the service aren't making it down to the endpoint. Except... I also have a webui service configured as in the example. If I connect to it with kubectl --proxy I can get down to the web service that's served on :8080 from spark-master, by hitting it through the webui service. Yet the nearly identically-configured spark-master service on port 7077 gives no love. If I configure the master to bind a local IP, it comes up but doesn't get connections from the service. If I configure it to bind through the service, the bind fails and it can't come up at all. I'm running out of ideas as to why this might be happening -- any assistance is appreciated. I'm happy to furnish more debugging info on request.
You need to set three environment variables before running ./hack/local-up-cluster.h: $ export CONTAINER_RUNTIME=rkt $ export RKT_PATH=$PATH_TO_RKT_BINARY $ export RKT_STAGE1_IMAGE=PATH=$PATH_TO_STAGE1_IMAGE This is described in the docs for getting started with a local rkt cluster.
###Instruction: I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo. When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message: Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly. According to the docs, k8s supports Rocket. Can someone please guide me about how to start a local cluster without a working Docker installation? Thanks in advance. ### Response: You need to set three environment variables before running ./hack/local-up-cluster.h: $ export CONTAINER_RUNTIME=rkt $ export RKT_PATH=$PATH_TO_RKT_BINARY $ export RKT_STAGE1_IMAGE=PATH=$PATH_TO_STAGE1_IMAGE This is described in the docs for getting started with a local rkt cluster.
I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo. When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message: Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly. According to the docs, k8s supports Rocket. Can someone please guide me about how to start a local cluster without a working Docker installation? Thanks in advance.
It isn't base64 encoded for security, it is to allow binary content to be stored in secrets. You likely should not commit secret definitions to source control.
###Instruction: According to the Kubernetes secrets docs, creating a secret is as easy as base64-encoding the data and placing it in a file. How then, if base64 can be decoded as easily as it's encoded, can we secure/encrypt the secret values in the file? It would be nice to be able to commit the secret files into source control, however simply committing the file with base64-encoded data is in no way secure. For example, here is the example given in the docs: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: password: dmFsdWUtMg0K username: dmFsdWUtMQ0K If you went to base64decode.org, you would see that those password/username values simply are "value-2". This file is unfit for source control. How can we secure the data in the file so that it is safe for source control? Or is this considered bad practice, and we should just add the file to .gitignore? ### Response: It isn't base64 encoded for security, it is to allow binary content to be stored in secrets. You likely should not commit secret definitions to source control.
According to the Kubernetes secrets docs, creating a secret is as easy as base64-encoding the data and placing it in a file. How then, if base64 can be decoded as easily as it's encoded, can we secure/encrypt the secret values in the file? It would be nice to be able to commit the secret files into source control, however simply committing the file with base64-encoded data is in no way secure. For example, here is the example given in the docs: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: password: dmFsdWUtMg0K username: dmFsdWUtMQ0K If you went to base64decode.org, you would see that those password/username values simply are "value-2". This file is unfit for source control. How can we secure the data in the file so that it is safe for source control? Or is this considered bad practice, and we should just add the file to .gitignore?
Kubernetes resource specifications have two fields, request and limit. limits place a cap on how much of a resource a container can use. For memory, if a container goes above its limits, it will be OOM killed. For CPU, its usage may be throttled. requests are different in that they ensure the node that the pod is put on has at least that much capacity available for it. If you want to make sure that your pods will be able to grow to a particular size without the node running out of resources, specify a request of that size. This will limit how many pods you can schedule, though -- a 10G node will only be able to fit 2 pods with a 5G memory request.
###Instruction: The resource limit of Pod has been set as: resource limit cpu: 500m memory: 5Gi and there's 10G mem left on the node. I've created 5 pods in a short time successfully, and the node maybe still have some mem left, e.g. 8G. The mem usage is growing as the time goes on, and reach the limit (5G x 5 = 25G > 10G), then the node will be out of response. In order to ensure the usability, is there a way to set the resource limit on the node? Update The core problem is that pod memory usage does not always equal to the limit, especially in the time when it just starts. So there can be unlimited pods created as soon as possible, then make all nodes full load. That's not good. There might be something to allocate resources rather than setting the limit. Update 2 I've tested again for the limits and resources: resources: limits: cpu: 500m memory: 5Gi requests: cpu: 500m memory: 5Gi The total mem is 15G and left 14G, but 3 pods are scheduled and running successfully: > free -mh total used free shared buff/cache available Mem: 15G 1.1G 8.3G 3.4M 6.2G 14G Swap: 0B 0B 0B > docker stats CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O 44eaa3e2d68c 0.63% 1.939 GB / 5.369 GB 36.11% 0 B / 0 B 47.84 MB / 0 B 87099000037c 0.58% 2.187 GB / 5.369 GB 40.74% 0 B / 0 B 48.01 MB / 0 B d5954ab37642 0.58% 1.936 GB / 5.369 GB 36.07% 0 B / 0 B 47.81 MB / 0 B It seems that the node will be crushed soon XD Update 3 Now I change the resources limits, request 5G and limit 8G: resources: limits: cpu: 500m memory: 5Gi requests: cpu: 500m memory: 8Gi The results are: According to the k8s source code about the resource check: The total memory is only 15G, and all the pods needs 24G, so all the pods may be killed. (my single one container will cost more than 16G usually if not limited.) It means that you'd better keep the requests exactly equals to the limits in order to avoid pod killed or node crush. As if the requests value is not specified, it will be set to the limit as default, so what exactly requests used for? I think only limits is totally enough, or IMO, on the contrary of what K8s claimed, I rather like to set the resource request greater than the limit, in order to ensure the usability of nodes. Update 4 Kubernetes 1.1 schedule the pods mem requests via the formula: (capacity - memoryRequested) >= podRequest.memory It seems that kubernetes is not caring about memory usage as Vishnu Kannan said. So the node will be crushed if the mem used much by other apps. Fortunately, from the commit e64fe822, the formula has been changed as: (allocatable - memoryRequested) >= podRequest.memory waiting for the k8s v1.2! ### Response: Kubernetes resource specifications have two fields, request and limit. limits place a cap on how much of a resource a container can use. For memory, if a container goes above its limits, it will be OOM killed. For CPU, its usage may be throttled. requests are different in that they ensure the node that the pod is put on has at least that much capacity available for it. If you want to make sure that your pods will be able to grow to a particular size without the node running out of resources, specify a request of that size. This will limit how many pods you can schedule, though -- a 10G node will only be able to fit 2 pods with a 5G memory request.
The resource limit of Pod has been set as: resource limit cpu: 500m memory: 5Gi and there's 10G mem left on the node. I've created 5 pods in a short time successfully, and the node maybe still have some mem left, e.g. 8G. The mem usage is growing as the time goes on, and reach the limit (5G x 5 = 25G > 10G), then the node will be out of response. In order to ensure the usability, is there a way to set the resource limit on the node? Update The core problem is that pod memory usage does not always equal to the limit, especially in the time when it just starts. So there can be unlimited pods created as soon as possible, then make all nodes full load. That's not good. There might be something to allocate resources rather than setting the limit. Update 2 I've tested again for the limits and resources: resources: limits: cpu: 500m memory: 5Gi requests: cpu: 500m memory: 5Gi The total mem is 15G and left 14G, but 3 pods are scheduled and running successfully: > free -mh total used free shared buff/cache available Mem: 15G 1.1G 8.3G 3.4M 6.2G 14G Swap: 0B 0B 0B > docker stats CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O 44eaa3e2d68c 0.63% 1.939 GB / 5.369 GB 36.11% 0 B / 0 B 47.84 MB / 0 B 87099000037c 0.58% 2.187 GB / 5.369 GB 40.74% 0 B / 0 B 48.01 MB / 0 B d5954ab37642 0.58% 1.936 GB / 5.369 GB 36.07% 0 B / 0 B 47.81 MB / 0 B It seems that the node will be crushed soon XD Update 3 Now I change the resources limits, request 5G and limit 8G: resources: limits: cpu: 500m memory: 5Gi requests: cpu: 500m memory: 8Gi The results are: According to the k8s source code about the resource check: The total memory is only 15G, and all the pods needs 24G, so all the pods may be killed. (my single one container will cost more than 16G usually if not limited.) It means that you'd better keep the requests exactly equals to the limits in order to avoid pod killed or node crush. As if the requests value is not specified, it will be set to the limit as default, so what exactly requests used for? I think only limits is totally enough, or IMO, on the contrary of what K8s claimed, I rather like to set the resource request greater than the limit, in order to ensure the usability of nodes. Update 4 Kubernetes 1.1 schedule the pods mem requests via the formula: (capacity - memoryRequested) >= podRequest.memory It seems that kubernetes is not caring about memory usage as Vishnu Kannan said. So the node will be crushed if the mem used much by other apps. Fortunately, from the commit e64fe822, the formula has been changed as: (allocatable - memoryRequested) >= podRequest.memory waiting for the k8s v1.2!
There is an open feature request for this, but it's not yet supported: https://github.com/kubernetes/kubernetes/issues/15478
###Instruction: below is docker run command: docker run --log-driver=syslog How to set log-driver in kubsernetes pods. Can you show examples to me? Thanks... ### Response: There is an open feature request for this, but it's not yet supported: https://github.com/kubernetes/kubernetes/issues/15478
below is docker run command: docker run --log-driver=syslog How to set log-driver in kubsernetes pods. Can you show examples to me? Thanks...
Kubelet already backs off container restarts automatically. In that case, you'll see something like the following: $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE ... mem-besteffort-zpnpm 0/1 CrashLoopBackOff 4 3m If you find that it restarts the containers too frequently, you could do something as simple as sleep 60 before executing the actual command.
###Instruction: I am using a ReplicationController to create a pod running redis container. The redis container is monitored by Redis Sentinel. And there is a problem, if redis crashed and restart too fast, it may cause trouble to Redis Sentinel when the voting is in progress. { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "redis", "labels": { "name" : "redis" } }, "spec": { "replicas": 1, "selector": { "name":"redis" }, "template": { "metadata": { "labels": { "name":"redis" } }, "spec": { "volumes": [ //... ], "containers": [ //... ], "restartPolicy": "Always" } } } } Would it be possible to delay the restart ? i.e. Restart the container after 60 seconds from last crash ### Response: Kubelet already backs off container restarts automatically. In that case, you'll see something like the following: $ cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE ... mem-besteffort-zpnpm 0/1 CrashLoopBackOff 4 3m If you find that it restarts the containers too frequently, you could do something as simple as sleep 60 before executing the actual command.
I am using a ReplicationController to create a pod running redis container. The redis container is monitored by Redis Sentinel. And there is a problem, if redis crashed and restart too fast, it may cause trouble to Redis Sentinel when the voting is in progress. { "apiVersion": "v1", "kind": "ReplicationController", "metadata": { "name": "redis", "labels": { "name" : "redis" } }, "spec": { "replicas": 1, "selector": { "name":"redis" }, "template": { "metadata": { "labels": { "name":"redis" } }, "spec": { "volumes": [ //... ], "containers": [ //... ], "restartPolicy": "Always" } } } } Would it be possible to delay the restart ? i.e. Restart the container after 60 seconds from last crash
You didn't mention what gitlab image you used. It might be useful to look at others', such as: https://github.com/sameersbn/docker-gitlab/ It looks like that configures several paths explicitly via environment variables. If you're using Docker containers on your runner, you'd need to install kubectl in your Docker image rather than use an executable from the host. https://github.com/gitlabhq/gitlabhq/blob/master/doc/ci/docker/using_docker_images.md
###Instruction: I want to use gitlab-ci (or any other CI, whatever works) to deploy to google container engine using kubectl run I have a working gitlab runner on google compute engine but can't figure out how to install gcloud and kubectl in so the runner has their executable and config in their path. Running pwd through th gitlab ci file shows /home/gitlab-runner and running whoami shows root, so I installed kubectl as user gitlab-runner and as user root but gitlab-ci-multirunner still complains $ gcloud config list bash: line 35: gcloud: command not found ERROR: Build failed with: exit code 1 same for kubectl. I probably have a flawed understanding on how gitlab-runner spawns the builds, but it can access the docker install so how do I get it to access the gcloud / kubectl install? Is there a more straightforward version to deploy to kubernetes / google container engine? Thanks so much! ### Response: You didn't mention what gitlab image you used. It might be useful to look at others', such as: https://github.com/sameersbn/docker-gitlab/ It looks like that configures several paths explicitly via environment variables. If you're using Docker containers on your runner, you'd need to install kubectl in your Docker image rather than use an executable from the host. https://github.com/gitlabhq/gitlabhq/blob/master/doc/ci/docker/using_docker_images.md
I want to use gitlab-ci (or any other CI, whatever works) to deploy to google container engine using kubectl run I have a working gitlab runner on google compute engine but can't figure out how to install gcloud and kubectl in so the runner has their executable and config in their path. Running pwd through th gitlab ci file shows /home/gitlab-runner and running whoami shows root, so I installed kubectl as user gitlab-runner and as user root but gitlab-ci-multirunner still complains $ gcloud config list bash: line 35: gcloud: command not found ERROR: Build failed with: exit code 1 same for kubectl. I probably have a flawed understanding on how gitlab-runner spawns the builds, but it can access the docker install so how do I get it to access the gcloud / kubectl install? Is there a more straightforward version to deploy to kubernetes / google container engine? Thanks so much!
You must quote all of the values that you want to set as environment variables that the yaml parser might interpret as a non-string type. For example, in influxdb-grafana-controller.yaml the values true and false are quoted because they could be interpreted as booleans. The same constraint applies to purely numerical values.
###Instruction: I tried to create a pod with a particular environment for uwsgi configuration , but it was this message : failed to load "phptime.yml": JSON: I can not unpack the number in the value of the string type Go when I tried to run this command : kubectl create -f phptime.yml I found that trouble in environments that has names like this: UWSGI_HTTP-MODIFIER1 or UWSGI_PHP-SAPI-NAME or UWSGI_MASTER-AS-ROOT but with environments that has a next names all ok: UWSGI_HTTP or UWSGI_INCLUDE A lot of our containers took configuration from environments and I need include all of my conf environments. This is my rc conf: containers: - name: phptime image: ownregistry/phpweb:0.5 env: - name: UWSGI_UID value: go - name: UWSGI_GID value: go - name: UWSGI_INCLUDE value: /var/lib/go-agent/pipelines/test/test-dev0/.uwsgi_dev.ini - name: UWSGI_PHP-SAPI-NAME value: apache - name: UWSGI_HTTP value: :8086 - name: UWSGI_HTTP-MODIFIER1 value: 14 - name: UWSGI_PIDFILE value: '/tmp/uwsgi.pid' - name: UWSGI_MASTER-FIFO value: '/tmp/fifo0' - name: UWSGI_MASTER-AS-ROOT value: 'true' - name: UWSGI_MASTER value: 'true' ports: - containerPort: 8086 resources: limits: cpu: 500m memory: 200Mi requests: cpu: 500m memory: 200Mi volumeMounts: - mountPath: /var/lib/go-agent/pipelines/test/test-dev0/ name: site readOnly: true volumes: - hostPath: path: /home/user/www/ name: site Is this kubernetes issue or it`s my? How to solve this? Thanks! ### Response: You must quote all of the values that you want to set as environment variables that the yaml parser might interpret as a non-string type. For example, in influxdb-grafana-controller.yaml the values true and false are quoted because they could be interpreted as booleans. The same constraint applies to purely numerical values.
I tried to create a pod with a particular environment for uwsgi configuration , but it was this message : failed to load "phptime.yml": JSON: I can not unpack the number in the value of the string type Go when I tried to run this command : kubectl create -f phptime.yml I found that trouble in environments that has names like this: UWSGI_HTTP-MODIFIER1 or UWSGI_PHP-SAPI-NAME or UWSGI_MASTER-AS-ROOT but with environments that has a next names all ok: UWSGI_HTTP or UWSGI_INCLUDE A lot of our containers took configuration from environments and I need include all of my conf environments. This is my rc conf: containers: - name: phptime image: ownregistry/phpweb:0.5 env: - name: UWSGI_UID value: go - name: UWSGI_GID value: go - name: UWSGI_INCLUDE value: /var/lib/go-agent/pipelines/test/test-dev0/.uwsgi_dev.ini - name: UWSGI_PHP-SAPI-NAME value: apache - name: UWSGI_HTTP value: :8086 - name: UWSGI_HTTP-MODIFIER1 value: 14 - name: UWSGI_PIDFILE value: '/tmp/uwsgi.pid' - name: UWSGI_MASTER-FIFO value: '/tmp/fifo0' - name: UWSGI_MASTER-AS-ROOT value: 'true' - name: UWSGI_MASTER value: 'true' ports: - containerPort: 8086 resources: limits: cpu: 500m memory: 200Mi requests: cpu: 500m memory: 200Mi volumeMounts: - mountPath: /var/lib/go-agent/pipelines/test/test-dev0/ name: site readOnly: true volumes: - hostPath: path: /home/user/www/ name: site Is this kubernetes issue or it`s my? How to solve this? Thanks!
http://kubernetes.io/v1.0/docs/user-guide/services.html#type-nodeport: If you set the type field to "NodePort", the Kubernetes master will allocate a port from a flag-configured range (default: 30000-32767). There is a risk of low ports conflict with real things running on host. That's the reason, why ports are limited. You are still able to configure kube-apiserver - flag --service-node-port-range= http://kubernetes.io/v1.0/docs/admin/kube-apiserver.html
###Instruction: apiVersion: v1 kind: Service metadata: name: nginx labels: name: nginx spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 nodePort: 30000 selector: name: nginx type: NodePort apiVersion: v1 kind: Service metadata: name: nginx labels: name: nginx spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 nodePort: 30000 selector: name: nginx type: LoadBalancer Two methods all support loadbalancer. And the port range is between 30000-32000, why? If I want to use port 3000, what should I do? ### Response: http://kubernetes.io/v1.0/docs/user-guide/services.html#type-nodeport: If you set the type field to "NodePort", the Kubernetes master will allocate a port from a flag-configured range (default: 30000-32767). There is a risk of low ports conflict with real things running on host. That's the reason, why ports are limited. You are still able to configure kube-apiserver - flag --service-node-port-range= http://kubernetes.io/v1.0/docs/admin/kube-apiserver.html
apiVersion: v1 kind: Service metadata: name: nginx labels: name: nginx spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 nodePort: 30000 selector: name: nginx type: NodePort apiVersion: v1 kind: Service metadata: name: nginx labels: name: nginx spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 nodePort: 30000 selector: name: nginx type: LoadBalancer Two methods all support loadbalancer. And the port range is between 30000-32000, why? If I want to use port 3000, what should I do?
This seems to be specific to that pod/image you are running. Can you check the logs and see why that pod exited ? Can you try any other image from docker hub ?
###Instruction: I'm getting a weird bug: The pod is reported as running via k8s api-server. But the container that runnig the application was actually exited, only the pause container gcr.io/google_containers/pause:0.8.0 is running, not the actual container. $ docker ps -a | grep ms-issue 1754ddbbfbd8 agencyrev/workflow.microservice.issue:v0.0.9 "npm start" 2 days ago Exited (1) 11 hours ago k8s_workflow-microservice-issue.458c077c_rc--ms-issue--v0.0.9-btryt_staging_18d44bae-dac7-11e5-889c-00155d08db02_965dee2f 30c0addd88ef gcr.io/google_containers/pause:0.8.0 "/pause" 2 days ago Up 2 days k8s_POD.b5de0404_rc--ms-issue--v0.0.9-btryt_staging_18d44bae-dac7-11e5-889c-00155d08db02_e427af83 As you can see, the app container was exited 11 hours ago, but the /pause::0.8.0 is still running, that why it is reported as running. I noticed this issue because I kept getting error Dial failed: connection refused in the kube-proxy. And not just this pod, I got some other pods (same host) that ran into this as well. I don't know what caused it, but is that possible? And how? I'm using kubernetes version v1.1.7 $ kubetctl version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.7", GitCommit:"e4e6878293a339e4087dae684647c9e53f1cf9f0", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.7", GitCommit:"e4e6878293a339e4087dae684647c9e53f1cf9f0", GitTreeState:"clean"} $ docker version Client version: 1.7.1 Client API version: 1.19 Go version (client): go1.4.2 Git commit (client): 2c2c52b-dirty OS/Arch (client): linux/amd64 Server version: 1.7.1 Server API version: 1.19 Go version (server): go1.4.2 Git commit (server): 2c2c52b-dirty OS/Arch (server): linux/amd64 $ uname -a Linux dev-coreos-k8s_14 4.1.5-coreos #2 SMP Thu Aug 13 09:18:45 UTC 2015 x86_64 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz GenuineIntel GNU/Linux The issue above leads to other issue, that I cannot stop the pod without --grace-period=0 option (the status was always at Terminating with default grace-period 30s). And even if after the pod was stopped, the pause container is still there. I had to stop it with docker stop ### Response: This seems to be specific to that pod/image you are running. Can you check the logs and see why that pod exited ? Can you try any other image from docker hub ?
I'm getting a weird bug: The pod is reported as running via k8s api-server. But the container that runnig the application was actually exited, only the pause container gcr.io/google_containers/pause:0.8.0 is running, not the actual container. $ docker ps -a | grep ms-issue 1754ddbbfbd8 agencyrev/workflow.microservice.issue:v0.0.9 "npm start" 2 days ago Exited (1) 11 hours ago k8s_workflow-microservice-issue.458c077c_rc--ms-issue--v0.0.9-btryt_staging_18d44bae-dac7-11e5-889c-00155d08db02_965dee2f 30c0addd88ef gcr.io/google_containers/pause:0.8.0 "/pause" 2 days ago Up 2 days k8s_POD.b5de0404_rc--ms-issue--v0.0.9-btryt_staging_18d44bae-dac7-11e5-889c-00155d08db02_e427af83 As you can see, the app container was exited 11 hours ago, but the /pause::0.8.0 is still running, that why it is reported as running. I noticed this issue because I kept getting error Dial failed: connection refused in the kube-proxy. And not just this pod, I got some other pods (same host) that ran into this as well. I don't know what caused it, but is that possible? And how? I'm using kubernetes version v1.1.7 $ kubetctl version Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.7", GitCommit:"e4e6878293a339e4087dae684647c9e53f1cf9f0", GitTreeState:"clean"} Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.7", GitCommit:"e4e6878293a339e4087dae684647c9e53f1cf9f0", GitTreeState:"clean"} $ docker version Client version: 1.7.1 Client API version: 1.19 Go version (client): go1.4.2 Git commit (client): 2c2c52b-dirty OS/Arch (client): linux/amd64 Server version: 1.7.1 Server API version: 1.19 Go version (server): go1.4.2 Git commit (server): 2c2c52b-dirty OS/Arch (server): linux/amd64 $ uname -a Linux dev-coreos-k8s_14 4.1.5-coreos #2 SMP Thu Aug 13 09:18:45 UTC 2015 x86_64 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz GenuineIntel GNU/Linux The issue above leads to other issue, that I cannot stop the pod without --grace-period=0 option (the status was always at Terminating with default grace-period 30s). And even if after the pod was stopped, the pause container is still there. I had to stop it with docker stop
I had a similar requirement (for the Go build system): wanted to know when scheduled vs. available CPU or memory was > 1, and scale out nodes when that was true (or, more accurately, when it was ~.8). There's not a built-in metric, but as you suggest you can do it with a custom metric. This was all done in Go, but it will give you the basic idea: Create the metrics (memory and CPU, in my case Put values to the metrics The key takeaway IMO is that you have to iterate over each pod in the cluster to determine how much capacity is consumed, then iterate over each node in the cluster to determine how much capacity is available. It's then just a matter of pointing your autoscaler to the custom metric(s). Big big big thing worth noting: I ultimately determined that scaling on the built-in CPU utilization metric was just as good as (if not better than, but more on that in a bit) than the custom metric. Each pod we scheduled pegged the CPU, so when pods were maxed out so was CPU. The build-in CPU utilization metric is probably better because you don't have the latency that comes with periodically putting custom metrics.
###Instruction: When I resize a replication controller using kubectl, if the cluster does not have enough resource, there will have one or more pods always in pending. Is there has any tool will auto resize GKE cluster when the resource is running out? ### Response: I had a similar requirement (for the Go build system): wanted to know when scheduled vs. available CPU or memory was > 1, and scale out nodes when that was true (or, more accurately, when it was ~.8). There's not a built-in metric, but as you suggest you can do it with a custom metric. This was all done in Go, but it will give you the basic idea: Create the metrics (memory and CPU, in my case Put values to the metrics The key takeaway IMO is that you have to iterate over each pod in the cluster to determine how much capacity is consumed, then iterate over each node in the cluster to determine how much capacity is available. It's then just a matter of pointing your autoscaler to the custom metric(s). Big big big thing worth noting: I ultimately determined that scaling on the built-in CPU utilization metric was just as good as (if not better than, but more on that in a bit) than the custom metric. Each pod we scheduled pegged the CPU, so when pods were maxed out so was CPU. The build-in CPU utilization metric is probably better because you don't have the latency that comes with periodically putting custom metrics.
When I resize a replication controller using kubectl, if the cluster does not have enough resource, there will have one or more pods always in pending. Is there has any tool will auto resize GKE cluster when the resource is running out?
First reaction is that if you already have your own API server, and it works, there is no need to rewrite the API in kubernetes style. I'd just try to reuse the thing that works. If you do want to rewrite, here are my thoughts: If you need lots of SSHServices, and you need lots of people to use your API for creating SSHServices, then it makes sense to represent the parameters of the ssh service as a ThirdParty resource. But if you have just 1 or a few SSHServices, and you update it infrequently, then I would not create a ThirdParty resource for it. I would just write an RC that runs the SSH service Pod mount a secret (later configMap) volume that contains a configuration file, in the format of your choice. The config file would include the AllowedGroups. Once you have v1.2 with config map, which will be like in a month, you will be able to update the config by POSTing a new configmap to the apiserver, without needing the SSH service to restart. (It should watch the config file for changes). Basically, you can think of a configMap as a simpler version of ThirdParty resource. As far as SSHUsers, you could use a ThirdParty resource and have the SSH controller watch the SSHUsers endpoint for changes. (Come to think of it, I'm not sure how you watch a third party resource.) Or maybe you want to just put the BelongToGroups information into the same ConfigMap. This gives you the "transactionality" you wanted. It just means that updates to the config are serialized and require an operator or cron job to push the config. Maybe that is not so bad?
###Instruction: I am building an opinionated PaaS like service on top of Kubernetes ecosystem. I have a desire to model an SSHService and SSHUser, I'll either extend Kubernetes api server by registering new types/schemas (looks pretty simple) or using custom resources via ThirdPartyResource http://kubernetes.io/v1.1/docs/design/extending-api.html I previous built my own API server on non-kubernetes infrastructure. The way I modelled it was somewhat as below, so an admin would do via restful actions: 1) Create SSH Service 2) Create SSh User 3) Add User to SSH Service The third action would run on the SSH Service resource, which would check the universe to ensure an SSH User with name ref existed within the universe before adding it to its allowed user array attribute. In Kubernetes I don't think cross resource transaction are supported, or intentional looking at how other things are modeled ** (for example I can create a pod with secret volume referring a secret name that does not exist and this is accepted). So in Kubernetes world I intend to 1) Create SSh Service with .Spec.AllowedGroups [str] 2) Create SSH User with .Spec.BelongToGroups [str] where groups is just an array of group names as strings A kubernetes client will watch for changes to ssh service and ssh users where the sets change update back to the API a secret volume (later configmap volume) for passwd/shadow to be used in the SSH container Is this a sane approach to model custom resources? ### Response: First reaction is that if you already have your own API server, and it works, there is no need to rewrite the API in kubernetes style. I'd just try to reuse the thing that works. If you do want to rewrite, here are my thoughts: If you need lots of SSHServices, and you need lots of people to use your API for creating SSHServices, then it makes sense to represent the parameters of the ssh service as a ThirdParty resource. But if you have just 1 or a few SSHServices, and you update it infrequently, then I would not create a ThirdParty resource for it. I would just write an RC that runs the SSH service Pod mount a secret (later configMap) volume that contains a configuration file, in the format of your choice. The config file would include the AllowedGroups. Once you have v1.2 with config map, which will be like in a month, you will be able to update the config by POSTing a new configmap to the apiserver, without needing the SSH service to restart. (It should watch the config file for changes). Basically, you can think of a configMap as a simpler version of ThirdParty resource. As far as SSHUsers, you could use a ThirdParty resource and have the SSH controller watch the SSHUsers endpoint for changes. (Come to think of it, I'm not sure how you watch a third party resource.) Or maybe you want to just put the BelongToGroups information into the same ConfigMap. This gives you the "transactionality" you wanted. It just means that updates to the config are serialized and require an operator or cron job to push the config. Maybe that is not so bad?
I am building an opinionated PaaS like service on top of Kubernetes ecosystem. I have a desire to model an SSHService and SSHUser, I'll either extend Kubernetes api server by registering new types/schemas (looks pretty simple) or using custom resources via ThirdPartyResource http://kubernetes.io/v1.1/docs/design/extending-api.html I previous built my own API server on non-kubernetes infrastructure. The way I modelled it was somewhat as below, so an admin would do via restful actions: 1) Create SSH Service 2) Create SSh User 3) Add User to SSH Service The third action would run on the SSH Service resource, which would check the universe to ensure an SSH User with name ref existed within the universe before adding it to its allowed user array attribute. In Kubernetes I don't think cross resource transaction are supported, or intentional looking at how other things are modeled ** (for example I can create a pod with secret volume referring a secret name that does not exist and this is accepted). So in Kubernetes world I intend to 1) Create SSh Service with .Spec.AllowedGroups [str] 2) Create SSH User with .Spec.BelongToGroups [str] where groups is just an array of group names as strings A kubernetes client will watch for changes to ssh service and ssh users where the sets change update back to the API a secret volume (later configmap volume) for passwd/shadow to be used in the SSH container Is this a sane approach to model custom resources?
I had the same problem, and I solved it by installing nfs-common in every Kubernetes nodes. apt-get install -y nfs-common My nodes were installed without nfs-common. Kubernetes will ask each node to mount the NFS into a specific directory to be available to the pod. As mount.nfs was not found, the mounting process failed. Good luck!
###Instruction: I have a Kubernetes setup installed in my Ubuntu machine. I'm trying to setup a nfs volume and mount it to a container according to this http://kubernetes.io/v1.1/examples/nfs/ document. nfs service and pod configurations kind: Service apiVersion: v1 metadata: name: nfs-server spec: ports: - port: 2049 selector: role: nfs-server --- apiVersion: v1 kind: Pod metadata: name: nfs-server labels: role: nfs-server spec: containers: - name: nfs-server image: jsafrane/nfs-data ports: - name: nfs containerPort: 2049 securityContext: privileged: true pod configuration to mount nfs volume apiVersion: v1 kind: Pod metadata: name: nfs-web spec: containers: - name: web image: nginx ports: - name: web containerPort: 80 volumeMounts: # name must match the volume name below - name: nfs mountPath: "/usr/share/nginx/html" volumes: - name: nfs nfs: # FIXME: use the right hostname server: 192.168.3.201 path: "/" When I run kubectl describe nfs-web I get following output mentioning it was unable to mount nfs volume. What could be the reason for that? Name: nfs-web Namespace: default Image(s): nginx Node: 192.168.1.114/192.168.1.114 Start Time: Sun, 06 Dec 2015 08:31:06 +0530 Labels: <none> Status: Pending Reason: Message: IP: Replication Controllers: <none> Containers: web: Container ID: Image: nginx Image ID: State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment Variables: Conditions: Type Status Ready False Volumes: nfs: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.3.201 Path: / ReadOnly: false default-token-nh698: Type: Secret (a secret that should populate this volume) SecretName: default-token-nh698 Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 36s 36s 1 {scheduler } Scheduled Successfully assigned nfs-web to 192.168.1.114 36s 2s 5 {kubelet 192.168.1.114} FailedMount Unable to mount volumes for pod "nfs-web_default": exit status 32 36s 2s 5 {kubelet 192.168.1.114} FailedSync Error syncing pod, skipping: exit status 32 ### Response: I had the same problem, and I solved it by installing nfs-common in every Kubernetes nodes. apt-get install -y nfs-common My nodes were installed without nfs-common. Kubernetes will ask each node to mount the NFS into a specific directory to be available to the pod. As mount.nfs was not found, the mounting process failed. Good luck!
I have a Kubernetes setup installed in my Ubuntu machine. I'm trying to setup a nfs volume and mount it to a container according to this http://kubernetes.io/v1.1/examples/nfs/ document. nfs service and pod configurations kind: Service apiVersion: v1 metadata: name: nfs-server spec: ports: - port: 2049 selector: role: nfs-server --- apiVersion: v1 kind: Pod metadata: name: nfs-server labels: role: nfs-server spec: containers: - name: nfs-server image: jsafrane/nfs-data ports: - name: nfs containerPort: 2049 securityContext: privileged: true pod configuration to mount nfs volume apiVersion: v1 kind: Pod metadata: name: nfs-web spec: containers: - name: web image: nginx ports: - name: web containerPort: 80 volumeMounts: # name must match the volume name below - name: nfs mountPath: "/usr/share/nginx/html" volumes: - name: nfs nfs: # FIXME: use the right hostname server: 192.168.3.201 path: "/" When I run kubectl describe nfs-web I get following output mentioning it was unable to mount nfs volume. What could be the reason for that? Name: nfs-web Namespace: default Image(s): nginx Node: 192.168.1.114/192.168.1.114 Start Time: Sun, 06 Dec 2015 08:31:06 +0530 Labels: <none> Status: Pending Reason: Message: IP: Replication Controllers: <none> Containers: web: Container ID: Image: nginx Image ID: State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment Variables: Conditions: Type Status Ready False Volumes: nfs: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: 192.168.3.201 Path: / ReadOnly: false default-token-nh698: Type: Secret (a secret that should populate this volume) SecretName: default-token-nh698 Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 36s 36s 1 {scheduler } Scheduled Successfully assigned nfs-web to 192.168.1.114 36s 2s 5 {kubelet 192.168.1.114} FailedMount Unable to mount volumes for pod "nfs-web_default": exit status 32 36s 2s 5 {kubelet 192.168.1.114} FailedSync Error syncing pod, skipping: exit status 32
Via separate Ingress objects, you should be able to direct HTTP and HTTPS traffic to different ports and/or services, and hostnames can map to distinct ports and/or services. However, Ingress currently assumes port 80 for HTTP and 443 for HTTPS for ingress, so it's not currently possible to direct traffic to ports 81 and 4431 via Ingress. I recommend using a different hostname for the admin interface. Note that TLS is only supported in release 1.2.
###Instruction: I have a Kubernetes service that expose two ports, one for the external interface (target port 8080) and one for the admin interface (target port 8081). I'd like to make both of these endpoints accessible over the internet using https so thus I think I want use an ingress resource. The ingress resource creates a static IP for the load balancer and I'd like to route port 80 and 443 to the external interface (8080) and port 81 (http) and port 4431 (https) to the admin interface (8081). Is this possible and if so how? ### Response: Via separate Ingress objects, you should be able to direct HTTP and HTTPS traffic to different ports and/or services, and hostnames can map to distinct ports and/or services. However, Ingress currently assumes port 80 for HTTP and 443 for HTTPS for ingress, so it's not currently possible to direct traffic to ports 81 and 4431 via Ingress. I recommend using a different hostname for the admin interface. Note that TLS is only supported in release 1.2.
I have a Kubernetes service that expose two ports, one for the external interface (target port 8080) and one for the admin interface (target port 8081). I'd like to make both of these endpoints accessible over the internet using https so thus I think I want use an ingress resource. The ingress resource creates a static IP for the load balancer and I'd like to route port 80 and 443 to the external interface (8080) and port 81 (http) and port 4431 (https) to the admin interface (8081). Is this possible and if so how?
All of the master components in Kubernetes run inside of containers. Due to limitations of Docker, the kubelet agent has been difficult to get running in a container. The Kubernetes folks have been working on this for the last year (see kubernetes#4869), and with Docker 1.10 it looks like it is getting close to working.
###Instruction: I've used Docker Swarm - I can put the management and the agents in docker containers. Can I do the same with Kubernetes? I don't want to pollute my machine. ### Response: All of the master components in Kubernetes run inside of containers. Due to limitations of Docker, the kubelet agent has been difficult to get running in a container. The Kubernetes folks have been working on this for the last year (see kubernetes#4869), and with Docker 1.10 it looks like it is getting close to working.
I've used Docker Swarm - I can put the management and the agents in docker containers. Can I do the same with Kubernetes? I don't want to pollute my machine.
You can configure Kubernetes networking in a number of different ways when configuring the cluster, and a few different ways on a per-pod basis. If you want to try verifying whether the docker networking arrangement is the problem, set hostNetwork to true in your pod specification and give it another try (example here). This is the equivalent of the docker --net=host setting.
###Instruction: I setup a kubernetes cluster with 2 powerful physical servers (32 cores + 64GB memory.) Everything runs very smooth except the bad network performance I observed. As comparison: I run my service on such physical machine directly (one instance). Have a client machine in the same network subset calling the service. The rps can goes to 10k easily. While when I put the exact same service in kubernetes version 1.1.7, one pod (instance) of the service in launched and expose the service by ExternalIP in service yaml file. With the same client, the rps drops to 4k. Even after I switched to iptable mode of kube-proxy, it doesn't seem help a lot. When I search around, I saw this document https://www.percona.com/blog/2016/02/05/measuring-docker-cpu-network-overhead/ Seems the docker port-forwarding is the network bottleneck. While other network mode of docker: like --net=host, bridge network, or containers sharing network don't have such performance drop. Wondering whether Kubernetes team already aware of such network performance drop? Since docker containers are launched and managed by Kubernetes. Is there anyway to tune the kubernetest to use other network mode of docker? ### Response: You can configure Kubernetes networking in a number of different ways when configuring the cluster, and a few different ways on a per-pod basis. If you want to try verifying whether the docker networking arrangement is the problem, set hostNetwork to true in your pod specification and give it another try (example here). This is the equivalent of the docker --net=host setting.
I setup a kubernetes cluster with 2 powerful physical servers (32 cores + 64GB memory.) Everything runs very smooth except the bad network performance I observed. As comparison: I run my service on such physical machine directly (one instance). Have a client machine in the same network subset calling the service. The rps can goes to 10k easily. While when I put the exact same service in kubernetes version 1.1.7, one pod (instance) of the service in launched and expose the service by ExternalIP in service yaml file. With the same client, the rps drops to 4k. Even after I switched to iptable mode of kube-proxy, it doesn't seem help a lot. When I search around, I saw this document https://www.percona.com/blog/2016/02/05/measuring-docker-cpu-network-overhead/ Seems the docker port-forwarding is the network bottleneck. While other network mode of docker: like --net=host, bridge network, or containers sharing network don't have such performance drop. Wondering whether Kubernetes team already aware of such network performance drop? Since docker containers are launched and managed by Kubernetes. Is there anyway to tune the kubernetest to use other network mode of docker?
Inspecting the .containerPort with printf "%T" .containerPort shows that it's a float64. if you compare your port with a trailing 5445.0 it should work. You also have some unnecessary if statements. --template='{{range .spec.template.spec.containers}}{{range .ports}}{{if eq .containerPort 5445.0}}True{{end}}{{end}}{{end}}' Your example was also missing the -o="go-template" flag to specify the output as a Go template.
###Instruction: I have to check if my application container in a replication controller runs on a certain port. Here is the command with the go template string that I'm using. kubectl get rc my-rc --namespace=nightly --template='{{range .spec.template.spec.containers}}{{if .ports}}{{range .ports}}{{if .containerPort}}{{if eq .containerPort 5445}}{{end}}{{end}}{{end}}{{end}}{{end}}' I think it is not a string comparison since it is a port. even string comparison throws an error "error calling eq: incompatible types for comparison'" I could just fetch an array of container ports and do the comparison outside but want to get it done inside the go template. I am new to Go lang. Appreciate any suggestions to accomplish this using template string or using a template file.. Thanks ### Response: Inspecting the .containerPort with printf "%T" .containerPort shows that it's a float64. if you compare your port with a trailing 5445.0 it should work. You also have some unnecessary if statements. --template='{{range .spec.template.spec.containers}}{{range .ports}}{{if eq .containerPort 5445.0}}True{{end}}{{end}}{{end}}' Your example was also missing the -o="go-template" flag to specify the output as a Go template.
I have to check if my application container in a replication controller runs on a certain port. Here is the command with the go template string that I'm using. kubectl get rc my-rc --namespace=nightly --template='{{range .spec.template.spec.containers}}{{if .ports}}{{range .ports}}{{if .containerPort}}{{if eq .containerPort 5445}}{{end}}{{end}}{{end}}{{end}}{{end}}' I think it is not a string comparison since it is a port. even string comparison throws an error "error calling eq: incompatible types for comparison'" I could just fetch an array of container ports and do the comparison outside but want to get it done inside the go template. I am new to Go lang. Appreciate any suggestions to accomplish this using template string or using a template file.. Thanks
It looks like because 1.2 is now gzip'ing the user-data there is a problem that the user data is changing when aws copies it to the new Launch Configuration. Even pasting the user-data in manually when copying the Launch Configuration still fails... Update: Copying it from the output of the aws cli (aws autoscaling describe-launch-configurations) into the new launch configuration in the console does work. (Making sure you check "Input is already base64 encoded" on the user-data input)
###Instruction: With k8s version 1.1 I could copy the k8s launch config, change the instance type and then point my asg at the new lc to add other instance types to my cluster. With version 1.2a8 there are 4 storage devices that don’t copy over… eg "DeviceName": "/dev/sdc”, "VirtualName": β€œephemeral0”. Anyone know the best way to add instances of a different type to the k8s cluster in 1.2? Looks like I have to update the block device mappings for the lc from the aws cli. … and since you can’t modify a lc, I’d have to describe the k8s one and create a new one based on that description - which is a bit of a long way around this. ### Response: It looks like because 1.2 is now gzip'ing the user-data there is a problem that the user data is changing when aws copies it to the new Launch Configuration. Even pasting the user-data in manually when copying the Launch Configuration still fails... Update: Copying it from the output of the aws cli (aws autoscaling describe-launch-configurations) into the new launch configuration in the console does work. (Making sure you check "Input is already base64 encoded" on the user-data input)
With k8s version 1.1 I could copy the k8s launch config, change the instance type and then point my asg at the new lc to add other instance types to my cluster. With version 1.2a8 there are 4 storage devices that don’t copy over… eg "DeviceName": "/dev/sdc”, "VirtualName": β€œephemeral0”. Anyone know the best way to add instances of a different type to the k8s cluster in 1.2? Looks like I have to update the block device mappings for the lc from the aws cli. … and since you can’t modify a lc, I’d have to describe the k8s one and create a new one based on that description - which is a bit of a long way around this.
Here is the primary debugging document for Services: http://kubernetes.io/docs/user-guide/debugging-services/ LoadBalancer creates an external resource. What exactly that resource is depends on your Cloud Provider - some of them don't support it at all (in this case, you might want to try NodePort instead). Both Google and Amazon support external load balancers. Overall, when asking these questions it's extremely helpful to know if you are running on Google Container Engine, Google Compute Engine, Amazon Web Services, Digital Ocean, Vagrant, or whatever, because the answer depends on that. Showing all your configs and all your existing Kubnernetes resources (kubectl get pods, kubectl get services) along with your Dockerfiles or which images you are using will also help. For Google (GKE or GCE), you would verify the load balancer exists: gcloud compute forwarding-rules list The external load balancer will map port 80 to an arbitrary Node, but then the Kubernetes proxy will map that to an ephemeral port on the correct node that actually has a Pod with that label, then it will map to the container port. So you have to figure out which step along the way isn't working. Unfortunately all those kube-proxy and iptables jumps are quite difficult to follow, so usually I would first double check all my Pods exist and have labels that match the selector of the Service. I would double check that my container is exposing the right port, I am using the right name for the port, etc. You might want to create some other Pods that just make calls to the Service (using the environment variables or KubeDNS, see the Kubernetes service documentation if you don't know what I'm referring to) and verify it's accessible internally before debugging the load balancer. Some other good debugging steps: Verify that your Kubernetes Service exists: kubectl get services kubectl get pods Check your logs of your pod kubectl logs <pod name> Check that your service is created internally by printing the environment variable for it kubectl exec <pod name> -- printenv GUESTBOOK_SERVICE_HOST try creating a new pod and see if the service can be reached internally through GUESTBOOK_SERVICE_HOST and GUESTBOOK_SERVICE_PORT. kubectl describe pod <pod name> will give the instance id of the pod, you can SSH to it and run Docker and verify your container is running, attach to it, etc. If you really want to get into the IP tables debugging, try sudo iptables-save
###Instruction: I have set up a simple Kubernetes load balancer service in front of a Node.js container, which should be exposing port 80, but I can't get a response out of it. How can I debug how the load balancer is handling requests to port 80? Are there logs I can inspect? I have set up a load balancer service and a replication controller as described in the Kubernetes guestbook example. The service/load balancer spec is similar to this: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"guestbook", "labels":{ "app":"guestbook" } }, "spec":{ "ports": [ { "port":3000, "targetPort":"http-server" } ], "selector":{ "app":"guestbook" }, "type": "LoadBalancer" } } As for my hosting platform, I'm using AWS and the OS is CoreOS alpha (976.0.0). Kubectl is at version 1.1.2. Kubernetes Info $ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods NAME READY STATUS RESTARTS AGE busybox-sleep 1/1 Running 0 18m web-s0s5w 1/1 Running 0 12h $ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get services NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE kubernetes 10.3.0.1 <none> 443/TCP <none> 1d web 10.3.0.171 ### Response: Here is the primary debugging document for Services: http://kubernetes.io/docs/user-guide/debugging-services/ LoadBalancer creates an external resource. What exactly that resource is depends on your Cloud Provider - some of them don't support it at all (in this case, you might want to try NodePort instead). Both Google and Amazon support external load balancers. Overall, when asking these questions it's extremely helpful to know if you are running on Google Container Engine, Google Compute Engine, Amazon Web Services, Digital Ocean, Vagrant, or whatever, because the answer depends on that. Showing all your configs and all your existing Kubernetes resources (kubectl get pods, kubectl get services) along with your Dockerfiles or which images you are using will also help. For Google (GKE or GCE), you would verify the load balancer exists: gcloud compute forwarding-rules list
The external load balancer will map port 80 to an arbitrary Node, but then the Kubernetes proxy will map that to an ephemeral port on the correct node that actually has a Pod with that label, and then it will map to the container port. So you have to figure out which step along the way isn't working. Unfortunately all those kube-proxy and iptables jumps are quite difficult to follow, so usually I would first double-check that all my Pods exist and have labels that match the selector of the Service. I would double-check that my container is exposing the right port, that I am using the right name for the port, etc. You might want to create some other Pods that just make calls to the Service (using the environment variables or KubeDNS; see the Kubernetes service documentation if you don't know what I'm referring to) and verify it's accessible internally before debugging the load balancer. Some other good debugging steps:
Verify that your Kubernetes Service and Pods exist: kubectl get services and kubectl get pods
Check the logs of your pod: kubectl logs <pod name>
Check that your service is created internally by printing the environment variable for it: kubectl exec <pod name> -- printenv GUESTBOOK_SERVICE_HOST
Try creating a new pod and see if the service can be reached internally through GUESTBOOK_SERVICE_HOST and GUESTBOOK_SERVICE_PORT.
kubectl describe pod <pod name> will give the instance ID of the pod; you can SSH to it and run Docker to verify your container is running, attach to it, etc.
If you really want to get into iptables debugging, try sudo iptables-save
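For the "verify it's accessible internally" step, the question's output already shows a busybox-sleep pod, so you could exec into that rather than creating a new Pod. The cluster IP and port 3000 below are taken from the question and spec and may differ in your cluster:

# Hit the Service from inside the cluster; if this responds, the
# Service and Pods are fine and the problem is on the ELB side.
kubectl exec busybox-sleep -- wget -qO- http://10.3.0.171:3000/

# The service env vars (WEB_* because the Service is named "web");
# note they are only populated in pods created after the Service was.
kubectl exec busybox-sleep -- printenv WEB_SERVICE_HOST WEB_SERVICE_PORT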
I have set up a simple Kubernetes load balancer service in front of a Node.js container, which should be exposing port 80, but I can't get a response out of it. How can I debug how the load balancer is handling requests to port 80? Are there logs I can inspect? I have set up a load balancer service and a replication controller as described in the Kubernetes guestbook example. The service/load balancer spec is similar to this: { "kind":"Service", "apiVersion":"v1", "metadata":{ "name":"guestbook", "labels":{ "app":"guestbook" } }, "spec":{ "ports": [ { "port":3000, "targetPort":"http-server" } ], "selector":{ "app":"guestbook" }, "type": "LoadBalancer" } } As for my hosting platform, I'm using AWS and the OS is CoreOS alpha (976.0.0). Kubectl is at version 1.1.2. Kubernetes Info $ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get pods NAME READY STATUS RESTARTS AGE busybox-sleep 1/1 Running 0 18m web-s0s5w 1/1 Running 0 12h $ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get services NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE kubernetes 10.3.0.1 <none> 443/TCP <none> 1d web 10.3.0.171