###Instruction: I'm able to connect to an ElastiCache Redis instance in a VPC from EC2 instances. But I would like to know if there is a way to connect to an ElastiCache Redis node from outside Amazon EC2, such as from my local dev setup or VPS instances provided by other vendors. Currently, when trying from my local setup with redis-cli -h my-node-endpoint -p 6379, I only get a timeout after some time. ### Response: SSH port forwarding should do the trick. Try running this from your client: ssh -f -N -L 6379:<your redis node endpoint>:6379 <the EC2 node that you use to connect to redis> Then, from your client: redis-cli -h 127.0.0.1 -p 6379 It works for me. Please note that the default port for Redis is 6379, not 6739. Also make sure you allow the security group of the EC2 node that you are using to connect to your Redis instance into your Cache security group. AWS now also supports accessing your cluster directly; see the AWS documentation for more info.
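A minimal sketch of the tunnel described above, assuming a reachable EC2 bastion host and the node endpoint from the question; both hostnames and the SSH user are placeholders:

```bash
# Forward local port 6379 through an EC2 host that sits in the same VPC as the
# ElastiCache node. Hostnames and the SSH user below are placeholders.
ssh -f -N -L 6379:my-node-endpoint.cache.amazonaws.com:6379 ec2-user@ec2-bastion.example.com

# With the tunnel up, talk to the node as if it were local.
redis-cli -h 127.0.0.1 -p 6379 ping
```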
###Instruction: I've got the following ReplicationController JSON defined: { "id": "PHPController", "kind": "ReplicationController", "apiVersion": "v1beta1", "desiredState": { "replicas": 2, "replicaSelector": {"name": "php"}, "podTemplate": { "desiredState": { "manifest": { "version": "v1beta1", "id": "PHPController", "volumes": [{ "name": "wordpress", "path": "/mnt/nfs/wordpress_a", "hostDir": "/mnt/nfs/wordpress_a"}], "containers": [{ "name": "php", "image": "internaluser/php53", "ports": [{"containerPort": 80, "hostPort": 9021}], "volumeMounts": [{"name": "wordpress", "mountPath": "/mnt/nfs/wordpress_a"}] }] } }, "labels": {"name": "php"} }}, "labels": {"name": "php"} } The container starts correctly when run with "docker run -t -i -p 0.0.0.0:9021:80 -v /mnt/nfs/wordpress_a:/mnt/nfs/wordpress_a:rw internaluser/php53". /mnt/nfs/wordpress_a is an NFS share, mounted on all of the minions. Each minion has full RW access and I have verified that the share is present. After creating the pod containers with the Replication Controller, I can see that the volume was never actually bound, and/or incorrectly mounted: "Volumes": { "/mnt/nfs/wordpress_a": "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7" }, "VolumesRW": { "/mnt/nfs/wordpress_a": true } "HostConfig": { "Binds": null, "ContainerIDFile": "", "LxcConf": null, "Privileged": false, "PortBindings": { "80/tcp": [ { "HostIp": "", "HostPort": "9021" } ] }, I find it strange that the container believes /mnt/nfs/wordpress_a is mapped to "/var/lib/docker/vfs/dir/8b5dc8477958f5c1b894e68ab9412b41e81a34ef16dac81f0f9d4884352a90b7". From the kubelet log: Desired [10.101.4.15]: [{Namespace:etcd Name:c823da9e-4437-11e4-a3b1-0050568421eb Manifest:{Version:v1beta1 ID:c823da9e-4437-11e4-a3b1-0050568421eb UUID:c823da9e-4437-11e4-a3b1-0050568421eb Volumes:[{Name:wordpress Source:}] Containers:[{Name:php Image:internaluser/php53 Command:[] WorkingDir: Ports:[{Name: HostPort:9021 ContainerPort:80 Protocol:TCP HostIP:}] Env:[{Name:SERVICE_HOST Value:10.1.1.1}] Memory:0 CPU:0 VolumeMounts:[{Name:wordpress ReadOnly:false MountPath:/mnt/nfs/wordpress_a}] LivenessProbe: Lifecycle: Privileged:false}] RestartPolicy:{Always:0xa99a20 OnFailure: Never:}}}] Does anyone have experience with this sort of thing? I've been driving myself crazy troubleshooting this. Thanks! ### Response: Solved. The volumes syntax was incorrect. https://github.com/GoogleCloudPlatform/kubernetes/issues/1446
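The linked issue comes down to the volume stanza: in the v1beta1 schema the host path is nested under source, not given as sibling path/hostDir keys. A hedged sketch of the fix and the resubmit step follows; the file name is a placeholder and the exact field layout should be checked against the issue:

```bash
# Illustrative only: rewrite the volume so the host path lives under
# "source.hostDir.path" instead of the top-level "path"/"hostDir" keys, e.g.
#   "volumes": [{ "name": "wordpress",
#                 "source": { "hostDir": { "path": "/mnt/nfs/wordpress_a" } } }]
# Then replace the controller so the pods are recreated with the fixed volume.
kubecfg delete replicationControllers/PHPController
kubecfg -c php-controller.json create replicationControllers
```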
###Instruction: I'm trying to eval Kubernetes. I interesting to mount kubernetes on CoreOS cluster, and official documentation not have much, only have two references to blog of CoreOS; I'm currently using guide to running kubernetes example on CoreOS part 1. My apiserver.service: [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/apiserver Description=Kubernetes API Server [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/controller-manager Description=Kubernetes Controller Manager Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/controller-manager \ --etcd_servers=http://127.0.0.1:4001 \ --master=127.0.0.1:8080 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/apiserver \ --address=127.0.0.1 \ --port=8080 \ --etcd_servers=http://127.0.0.1:4001 \ --machines=127.0.0.1 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target My controller-manager.service: [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/controller-manager Description=Kubernetes Controller Manager Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/controller-manager \ --etcd_servers=http://127.0.0.1:4001 \ --master=127.0.0.1:8080 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target My kubelet.service: [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/kubelet Description=Kubernetes Kubelet Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/kubelet \ --address=127.0.0.1 \ --port=10250 \ --hostname_override=127.0.0.1 \ --etcd_servers=http://127.0.0.1:4001 \ --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target My proxy.service [Unit] ConditionFileIsExecutable=/opt/kubernetes/bin/proxy Description=Kubernetes Proxy Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/proxy --etcd_servers=http://127.0.0.1:4001 --logtostderr=true Restart=on-failure RestartSec=1 [Install] WantedBy=multi-user.target The problem arises when I create a Kubernetes pod redis. 
When I execute command: /opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 -c kubernetes-coreos/pods/redis.json create /pods the error outputs after a long time waiting: {Kind:"", ID:"", CreationTimestamp:"", SelfLink:"", ResourceVersion:0x0}, Status:"failure", Details:"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"redis\", CreationTimestamp:\"\", SelfLink:\"\", ResourceVersion:0x0}, Labels:map[string]string{\"name\":\"redis\"}, DesiredState:api.PodState{Manifest:api.ContainerManifest{Version:\"v1beta1\", ID:\"redis\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"redis\", Image:\"registry.vc.datys.cu:5000/redis\", Command:[]string(nil), WorkingDir:\"\", Ports:[]api.Port{api.Port{Name:\"\", HostPort:6379, ContainerPort:6379, Protocol:\"\", HostIP:\"\"}}, Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:api.LivenessProbe{Enabled:false, Type:\"\", HTTPGet:api.HTTPGetProbe{Path:\"\", Port:\"\", Host:\"\"}, InitialDelaySeconds:0}}}}, Status:\"\", Host:\"\", HostIP:\"\", Info:api.PodInfo(nil)}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", Info:api.PodInfo(nil)}}", Code:500} NOTE: When I execute: sudo systemctl status proxy return: ● proxy.service - Kubernetes Proxy Loaded: loaded (/etc/systemd/system/proxy.service; disabled) Active: active (running) since Fri 2014-08-08 14:21:36 UTC; 8s ago Docs: https://github.com/GoogleCloudPlatform/kubernetes Main PID: 1036 (proxy) CGroup: /system.slice/proxy.service └─1036 /opt/kubernetes/bin/proxy --etcd_servers=http://127.0.0.1:4001 --logtostderr=true Aug 08 14:21:42 core-01 proxy[1036]: I0808 14:21:42.074694 01036 logs.go:38] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/ser...rted=true] Aug 08 14:21:42 core-01 proxy[1036]: E0808 14:21:42.074763 01036 etcd.go:115] Failed to get the key registry/services: 100: Key not found (/registry) [57] Aug 08 14:21:42 core-01 proxy[1036]: E0808 14:21:42.074791 01036 etcd.go:75] Failed to get any services: 100: Key not found (/registry) [57] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075337 01036 logs.go:38] etcd DEBUG: get [registry/services/specs http://127.0.0.1:4001] [%!s(MISSING)] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075501 01036 logs.go:38] etcd DEBUG: [Connecting to etcd: attempt 1 for keys/registry/services...rted=true] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.075528 01036 logs.go:38] etcd DEBUG: [send.request.to http://127.0.0.1:4001/v2/keys/registry/...thod GET] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.078524 01036 logs.go:38] etcd DEBUG: [recv.response.from http://127.0.0.1:4001/v2/keys/registr...rted=true] Aug 08 14:21:44 core-01 proxy[1036]: I0808 14:21:44.078824 01036 logs.go:38] etcd DEBUG: [recv.success. http://127.0.0.1:4001/v2/keys/registry/ser...rted=true] Aug 08 14:21:44 core-01 proxy[1036]: E0808 14:21:44.078897 01036 etcd.go:115] Failed to get the key registry/services: 100: Key not found (/registry) [57] Aug 08 14:21:44 core-01 proxy[1036]: E0808 14:21:44.078925 01036 etcd.go:75] Failed to get any services: 100: Key not found (/registry) [57] Hint: Some lines were ellipsized, use -l to show in full. 
And when execute: /opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 list /pods return: Name Image(s) Host Labels ---------- ---------- ---------- ---------- redis dockerfile/redis 127.0.0.1/ name=redis What's the problem? I appreciate any idea or collaboration. EDIT 1: My redis.json: { "id": "redis", "desiredState": { "manifest": { "version": "v1beta1", "id": "redis", "containers": [{ "name": "redis", "image": "registry.vc.datys.cu:5000/redis", "ports": [{ "containerPort": 6379, "hostPort": 6379 }] }] } }, "labels": { "name": "redis" } } ### Response: The errors you are seeing in your log is that the pod you are trying to create has a port conflict with the Kubernetes API server. Both are trying to use port 8080. Fix this issue by changing the pod to run on another port. You'll have to clean things up first. So remove the redis pod using kubecfg command. Update the pod.json file, and try again. Also please follow the quickstart guide found at the https://github.com/kelseyhightower/kubernetes-coreos. Kubernetes is a fast moving project, so that blog will become out of date fairly quickly.
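A sketch of the clean-up-and-retry loop suggested in the answer, reusing the kubecfg invocation and paths from the question (the manifest edit itself, for example changing the hostPort, is up to you):

```bash
# Remove the stuck pod, adjust the manifest, recreate it, and confirm it is scheduled.
/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 delete pods/redis
# ... edit kubernetes-coreos/pods/redis.json ...
/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 -c kubernetes-coreos/pods/redis.json create /pods
/opt/kubernetes/bin/kubecfg -h http://127.0.0.1:8080 list /pods
```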
###Instruction: Can you think of Azure Resource Manager as the equivalent to what Kubernetes is for Docker? ### Response: I think that the two are slightly different (caveat: I have only cursory knowledge of Resource Manager). Azure Resource Manager lets you think about a collection of separate resources as a single composite application, much like Google's Deployment Manager. It makes it easier to create repeatable deployments and to make sense of a big collection of heterogeneous resources as belonging to a single app. Kubernetes, on the other hand, turns a collection of virtual machines into a new resource type (a cluster). It goes beyond configuration and deployment of resources and acts as a runtime environment for distributed apps. So it has an API that can be used during runtime to deploy and wire in your containers, dynamically scale your cluster up or down, and make sure that your intent is being met (if you ask for three running containers of a certain type, it will make sure that there are always three healthy containers of that type running).
###Instruction: On AWS, I'm hosting multiple (totally different) domains on EC2 behind a single ELB. I already have one wildcard SSL cert for one domain and its children (xxxx.site1.com). Can I now add one more single SSL cert (on the same ELB) for another, different domain, like www.site2.com? I'm asking because some articles say it won't work and will just crash. Please advise. ### Response: No. The only way you could do it is by using a second port for HTTPS connections (other than 443), which doesn't apply to real-world scenarios since 443 is the default port for HTTPS. Having said that, you can simply create a second ELB and assign your second certificate to it. You can also forward its traffic to the same backend servers that the first ELB forwards to. Hope this helps.
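If you go the second-ELB route, the classic ELB CLI calls look roughly like this; the load balancer name, certificate ARN, subnet, and instance ID are all placeholders:

```bash
# Create a second classic ELB that terminates HTTPS with the site2 certificate
# and forwards to the same backend instances as the first one.
aws elb create-load-balancer \
  --load-balancer-name site2-elb \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/site2-cert" \
  --subnets subnet-0abc1234

aws elb register-instances-with-load-balancer \
  --load-balancer-name site2-elb \
  --instances i-0123456789abcdef0
```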
###Instruction: I'm trying to follow the directions to get the Google Cloud Platform Kubernetes GuestBook example running. I've got a "kubernetes-guestbook-example" project ID with billing enabled in the Google Developers Console under my account, and I do a "gcloud auth login" to ensure I'm running as that account. Step zero says that I should make sure I have "turned up a Kubernetes cluster". I think I've done this. I also run "gcloud config set project kubernetes-guestbook-example". Then I run "hack/dev-build-and-up.sh" from the kubernetes subdirectory and it produces: Building local go components Building release tree Packaging release Building launch script Uploading to Google Storage Release pushed (devel/jamesfremen/r20141001-192247). Starting cluster using provider: gce Release: gs://kubernetes-releases-68782/devel/jamesfremen/r20141001-192247 Project: kubernetes-guestbook-example (autodetected from gcloud config) Error: The resource 'projects/kubernetes-guestbook-example' was not found It seems to be a minor path issue. I think it's looking for examples/guestbook but I'm not sure how to configure it properly. Any help would be appreciated! ### Response: It's been a while, but I recall that the problem occurred because billing must be enabled for the application in the Google Developer Console. An easy catch once you get used to the platform, but non-obvious when you're climbing the learning curve.
###Instruction: So I figured I should start using Ansible Galaxy when possible, instead of writing my own roles. I just installed my first role and it was installed to /etc/local/ansible/roles (I am on OS X). Now I wonder how you install these roles where you actually need them. Do I just copy the role to where I need it, or is there an Ansible way of doing it? ### Response: Yes, you would copy them according to a sample project structure:
site.yml
webservers.yml
fooservers.yml
kubernetes.yaml
roles/
  common/
    files/
    templates/
    tasks/
    handlers/
    vars/
    meta/
  webservers/
    files/
    templates/
    tasks/
    handlers/
    vars/
    meta/
  kubernetes/
    files/
    templates/
    tasks/
    handlers/
    vars/
    meta/
Alternatively, you can just run ansible-galaxy with the -p ROLES_PATH or --roles-path=ROLES_PATH option to install it under /your/project/root. You can also use the /etc/local/ansible directory as your project root if you'd like to. Additionally, you can get help by running the command ansible-galaxy install --help.
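For example, to install a Galaxy role straight into a project-local roles directory (the role name here is a placeholder):

```bash
# Install the role into ./roles inside the project instead of the global path.
ansible-galaxy install -p ./roles username.some_role

# Or set a per-project default once in ansible.cfg:
#   [defaults]
#   roles_path = ./roles
```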
###Instruction: What exactly is the difference between Apache's Mesos and Google's Kubernetes? I understand both are server cluster management software. Can anyone elaborate where the main differences are - when would which framework be preferred? Why would you want to use Kubernetes on top of Mesosphere? ### Response: Kubernetes is an open source project that brings 'Google style' cluster management capabilities to the world of virtual machines, or 'on the metal' scenarios. It works very well with modern operating system environments (like CoreOS or Red Hat Atomic) that offer up lightweight computing 'nodes' that are managed for you. It is written in Golang and is lightweight, modular, portable and extensible. We (the Kubernetes team) are working with a number of different technology companies (including Mesosphere who curate the Mesos open source project) to establish Kubernetes as the standard way to interact with computing clusters. The idea is to reproduce the patterns that we see people needing to build cluster applications based on our experience at Google. Some of these concepts include: pods — a way to group containers together replication controllers — a way to handle the lifecycle of containers labels — a way to find and query containers, and services — a set of containers performing a common function. So with Kubernetes alone you will have something that is simple, easy to get up-and-running, portable and extensible that adds 'cluster' as a noun to the things that you manage in the lightest weight manner possible. Run an application on a cluster, and stop worrying about an individual machine. In this case, cluster is a flexible resource just like a VM. It is a logical computing unit. Turn it up, use it, resize it, turn it down quickly and easily. With Mesos, there is a fair amount of overlap in terms of the basic vision, but the products are at quite different points in their lifecycle and have different sweet spots. Mesos is a distributed systems kernel that stitches together a lot of different machines into a logical computer. It was born for a world where you own a lot of physical resources to create a big static computing cluster. The great thing about it is that lots of modern scalable data processing application run well on Mesos (Hadoop, Kafka, Spark) and it is nice because you can run them all on the same basic resource pool, along with your new age container packaged apps. It is somewhat more heavy weight than the Kubernetes project, but is getting easier and easier to manage thanks to the work of folks like Mesosphere. Now what gets really interesting is that Mesos is currently being adapted to add a lot of the Kubernetes concepts and to support the Kubernetes API. So it will be a gateway to getting more capabilities for your Kubernetes app (high availability master, more advanced scheduling semantics, ability to scale to a very large number of nodes) if you need them, and is well suited to run production workloads (Kubernetes is still in an alpha state). When asked, I tend to say: Kubernetes is a great place to start if you are new to the clustering world; it is the quickest, easiest and lightest way to kick the tires and start experimenting with cluster oriented development. It offers a very high level of portability since it is being supported by a lot of different providers (Microsoft, IBM, Red Hat, CoreOs, MesoSphere, VMWare, etc). 
If you have existing workloads (Hadoop, Spark, Kafka, etc.), Mesos gives you a framework that lets you interleave those workloads with each other and mix in some of the new stuff, including Kubernetes apps. Mesos also gives you an escape valve if you need capabilities that are not yet implemented by the community in the Kubernetes framework.
###Instruction: I would like to create a kubernetes pod that contains 2 containers, both with different images, so I can start both containers together. Currently I have tried the following configuration: { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } However when I execute kubecfg -c app.json create /pods I get the following error: F0909 08:40:13.028433 01141 kubecfg.go:283] Got request error: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc20800ee00), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, B ody:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc20800ed20)}, ContentLength:396, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), Multi partForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (500) 500 Internal Server Error: {"kind":"Status","creationTimestamp": null,"apiVersion":"v1beta1","status":"failure","message":"failed to find fit for api.Pod{JSONBase:api.JSONBase{Kind:\"\", ID:\"SSH podId\", CreationTimestamp:util.Time{Time:time.Time{sec:63545848813, nsec :0x14114e1, loc:(*time.Location)(0xb9a720)}}, SelfLink:\"\", ResourceVersion:0x0, APIVersion:\"\"}, Labels:map[string]string{\"name\":\"imageTest\"}, DesiredState:api.PodState{Manifest:api.ContainerMa nifest{Version:\"v1beta1\", ID:\"podId\", Volumes:[]api.Volume(nil), Containers:[]api.Container{api.Container{Name:\"type1\", Image:\"local/image\", Command:[]string(nil), WorkingDir:\"\", Ports:[]ap i.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}, api.Container{Name:\"type2\", Image:\"local/secondary\", Command:[]string(n il), WorkingDir:\"\", Ports:[]api.Port(nil), Env:[]api.EnvVar(nil), Memory:0, CPU:0, VolumeMounts:[]api.VolumeMount(nil), LivenessProbe:(*api.LivenessProbe)(nil)}}}, Status:\"\", Host:\"\", HostIP:\"\ ", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"RestartAlways\"}}, CurrentState:api.PodState{Manifest:api.ContainerManifest{Version:\"\", ID:\"\", Volumes:[]api.Volume(nil ), Containers:[]api.Container(nil)}, Status:\"\", Host:\"\", HostIP:\"\", PodIP:\"\", Info:api.PodInfo(nil), RestartPolicy:api.RestartPolicy{Type:\"\"}}}","code":500} How can I modify the configuration accordingly? Running kubernetes on a vagrant vm (yungsang/coreos). ### Response: The error in question here is "failed to find fit". This generally happens when you have a port conflict (try and use the same hostPort too many times or perhaps you don't have any worker nodes/minions. I'd suggest you either use the Vagrant file that is in the Kubernetes git repo (see http://kubernetes.io) as we have been trying to make sure that stays working as Kubernetes is under very active development. If you want to make it work with the CoreOS single machine set up, I suggest you hop on IRC (#google-containers on freenode) and try and get in touch with Kelsey Hightower.
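Two quick checks that usually explain a "failed to find fit" for a pod like this, using the same kubecfg setup as the question:

```bash
# 1. Make sure at least one worker node (minion) is registered; with zero minions
#    every pod fails to schedule with "failed to find fit".
kubecfg list minions

# 2. Look for hostPort clashes: containers asking for the same hostPort cannot be
#    placed on the same machine. Dropping hostPort from the manifest, or giving
#    each container a distinct one, usually clears the error.
kubecfg list pods
```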
###Instruction: I am not sure whether what I am trying to do is possible or the correct way. One of my colleagues spun up a Kubernetes GCE cluster (with 1 master and 4 minions) in a project which is shared with me with owner access. After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the Kubernetes workspace. I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master'. But when I try to list the existing pods using 'cluster/kubecfg.sh list pods' I see "F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe"). I tried to debug from my side but failed to come to any conclusion. Any sort of clue will be helpful. ### Response: You can also copy the cert files off of the master again. They are located in /usr/share/nginx on the master.
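A hedged way to pull fresh credentials off the master, reusing the gcutil ssh command from the question; the remote file names under /usr/share/nginx are assumptions, so list the directory first:

```bash
# See which certificate files the master is actually serving.
gcutil ssh --zone us-central1-b kubernetes-master 'sudo ls /usr/share/nginx'

# Dump each file into the local workspace (remote names are assumptions).
gcutil ssh --zone us-central1-b kubernetes-master 'sudo cat /usr/share/nginx/kubecfg.crt' > ~/.kubecfg.crt
gcutil ssh --zone us-central1-b kubernetes-master 'sudo cat /usr/share/nginx/kubecfg.key' > ~/.kubecfg.key
gcutil ssh --zone us-central1-b kubernetes-master 'sudo cat /usr/share/nginx/ca.crt' > ~/.kubecfg.ca.crt
```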
###Instruction: I tried to create a new cluster in Container Engine in the Google Developers Console. It finished pretty quickly with a yellow triangle with an exclamation point. I'm assuming that means it didn't work. Any idea what I could be doing wrong? ### Response: There's a few things that could go wrong. The best option to figure out what's wrong in your situation is to try using the gcloud command line tool, which gives better error information. Information about how to install and use it is in Container Engine's documentation. Other than the default network being removed (as mentioned by Robert Bailey), you may be trying to create more VM instances than you have quota for. You can check what your quota is on the developer console under Compute > Compute Engine > Quota. You're most likely to go over quota on either CPUs or in-use IP addresses, since each VM created is given an ephemeral IP address.
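To see the underlying error and rule out quota, something along these lines helps; the zone and cluster name are placeholders, and the exact sub-command depends on your gcloud SDK version:

```bash
# Retry the creation from the CLI to get a readable error message.
gcloud container clusters create test-cluster --zone us-central1-a --num-nodes 3

# Inspect regional quota; CPUs and in-use IP addresses are the usual culprits.
gcloud compute regions describe us-central1
```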
###Instruction: How do I run a docker image that I built locally on Google Container Engine? ### Response: You can push your image to Google Container Registry and reference them from your pod manifest. Detailed instructions Assuming you have a DOCKER_HOST properly setup , a GKE cluster running the last version of Kubernetes and Google Cloud SDK installed. Setup some environment variables gcloud components update kubectl gcloud config set project <your-project> gcloud config set compute/zone <your-cluster-zone> gcloud config set container/cluster <your-cluster-name> gcloud container clusters get-credentials <your-cluster-name> Tag your image docker tag <your-image> gcr.io/<your-project>/<your-image> Push your image gcloud docker push gcr.io/<your-project>/<your-image> Create a pod manifest for your container: my-pod.yaml id: my-pod kind: Pod apiVersion: v1 desiredState: manifest: containers: - name: <container-name> image: gcr.io/<your-project>/<your-image> ... Schedule this pod kubectl create -f my-pod.yaml Repeat from step (4) for each pod you want to run. You can have multiple definitions in a single file using a line with --- as delimiter.
###Instruction: I understand the Container Engine is currently on alpha and not yet complete. From the docs I assume there is no auto-scaling of pods (e.g. depending on CPU load) yet, correct? I'd love to be able to configure a replication controller to automatically add pods (and VM instances) when the average CPU load reaches a defined threshold. Is this somewhere on the near future roadmap? Or is it possible to use the Compute Engine Autoscaler for this? (if so, how?) ### Response: As we work towards a Beta release, we're definitely looking at integrating the Google Compute Engine AutoScaler. There are actually two different kinds of scaling: Scaling up/down the number of worker nodes in the cluster depending on # of containers in the cluster Scaling pods up and down. Since Kubernetes is an OSS project as well, we'd also like to add a Kubernetes native autoscaler that can scale replication controllers. It's definitely something that's on the roadmap. I expect we will actually have multiple autoscaler implementations, since it can be very application specific...
###Instruction: If I start a Google Container Engine cluster like this: gcloud container clusters --zone=$ZONE create $CLUSTER_NAME I get three worker nodes. How can I create a cluster with more? ### Response: It's possible to create a different number of worker nodes by using the --num-nodes option when you create the cluster, like this: gcloud container clusters --zone=$ZONE create $CLUSTER_NAME --num-nodes=5
###Instruction: I'm very interested in the new Google Cloud service Google Container Engine, namely in being able to write systems that can scale using containers' properties. I saw the StackOverflow questions "Autoscaling in Google Container Engine" and "Increasing the cluster size in Google Container Engine", and I understood that auto-scaling (and other features) are planned; however, I didn't see any release dates. When will the referred auto-scale features/integrations be released/available? When will Google Container Engine reach Beta (leave Alpha)? Does Google Container Engine have a roadmap with release dates that can be consulted? ### Response: The Kubernetes roadmap is here: https://github.com/kubernetes/kubernetes/blob/master/docs/roadmap.md Unfortunately, Google Container Engine hasn't released a roadmap yet.
###Instruction: There is a problem: I can't link my pod container to persistent storage. This is the config of my pod, where elastic is the name of the attached disk (same region, mounted and formatted as it should be). When I start the pod with this config I get this error: Unable to mount volumes for pod elastic.etcd. I can link my container to any other type of volume, either emptyDir or hostDir, and all work fine, but not in the case of the mounted disk. And I really can't find a good example of persistentDisk volumes.
id: elastic
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: elastic
    volumes:
      - name: elastic-persistent-storage
        source:
          persistentDisk:
            pdName: elastic
            fsType: ext4
    containers:
      - name: elastic
        image: dockerfile/elasticsearch
        cpu: 1000
        volumeMounts:
          - name: elastic-persistent-storage
            mountPath: /data
        ports:
          - name: elastic
            containerPort: 9200
            hostPort: 9200
labels:
  name: elastic
  role: storage
elastic is the name of the disk in the same project and same region, attached to the master node of the cluster. It is also formatted and mounted. Thanks! ### Response: There is an example of mounting a PD in the GitHub documentation: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md#creating-a-pd PDs in GCE can only be attached to a single VM (in read/write mode), so if the disk is already attached to your master then it will not be possible to also attach it to the node on which your pod is scheduled. Try detaching the PD from the master and then scheduling the pod.
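A sketch of freeing the disk before scheduling the pod, assuming the PD really is attached to the master as described; the zone is a placeholder:

```bash
# A GCE PD can be attached read/write to only one VM at a time, so detach it from
# the master and let the kubelet attach it to whichever minion runs the pod.
gcloud compute instances detach-disk kubernetes-master --disk elastic --zone us-central1-b

# Verify nothing is still holding the disk before recreating the pod.
gcloud compute disks describe elastic --zone us-central1-b
```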
###Instruction: I am not sure whether what I am trying to do is possible or the correct way. One of my colleagues spun up a kubernetes GCE cluster (with 1 master and 4 minions) in a project which is shared with me with owner access. After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the kubernetes workspace. I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master'. But when I try to list the existing pods using 'cluster/kubecfg.sh list pods' I see "F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe") I tried to debug from my side but failed to come to any conclusion. Any sort of clue will be helpful. ### Response: It is probably due to a not implemented feature, see this issue: https://github.com/GoogleCloudPlatform/kubernetes/issues/1886 you can copy the files from /usr/share/nginx/... on the master into your home dir and try again.
I am not sure whether what I am trying to do is possible or the correct way. One of my colleagues spun up a kubernetes GCE cluster (with 1 master and 4 minions) in a project which is shared with me with owner access. After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the kubernetes workspace. I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master'. But when I try to list the existing pods using 'cluster/kubecfg.sh list pods' I see "F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe") I tried to debug from my side but failed to come to any conclusion. Any sort of clue will be helpful.
It is probably due to a not implemented feature, see this issue: https://github.com/GoogleCloudPlatform/kubernetes/issues/1886 you can copy the files from /usr/share/nginx/... on the master into your home dir and try again.
###Instruction: I installed an 8-node kubernetes cluster (1 master + 7 minions) but I faced a networking problem among the minions. I installed my cluster according to this step-by-step Fedora manual, so I use Fedora 20 with its testing repository to get the kubernetes binaries. After installing, I wanted to try the guestbook example, but it seems to me there is a problem with the inter-container networking. Although the containers/PODs are in the running state and I can reach my 3 frontend containers (via browser) and the redis containers as well (via netcat), the frontend, which is not on the same host as the redis, cannot reach the redis master. The frontend's PHP gives back a network exception. Can anybody help me understand why the containers cannot reach each other across the hosts? I hope I have described my setup accurately enough, and thanks in advance. ### Response: The Fedora guide you followed will only get you running on a single machine. It avoids the issues around setting up networking across nodes. For kubernetes to work, the following network set up must be satisfied: Every container should be able to talk to every other container, even across nodes. This means also that the bridge IP range for those containers must not overlap. Code running on any node that isn't in a container should be able to reach every container (and vice versa), even across nodes. It is not necessary (but useful) if computers on the network that aren't part of the cluster can reach the containers directly. There are a lot of ways to achieve this -- for instance the set up for vagrant sets up GRE tunnels between each node. On GCE we use features of the platform to do the routing. If you are on physical machines on a switch you can probably just do a big layer 2 network w/ bridges. A bulletproof way to get started (but perhaps not the most performant, depending on your set up) is to use something like flannel. We are working on making this stuff easier to start up (without using a mess of shell scripts) and are thinking of building something like flannel in so that there is a reasonable default.
I installed an 8-node kubernetes cluster (1 master + 7 minions) but I faced a networking problem among the minions. I installed my cluster according to this step-by-step Fedora manual, so I use Fedora 20 with its testing repository to get the kubernetes binaries. After installing, I wanted to try the guestbook example, but it seems to me there is a problem with the inter-container networking. Although the containers/PODs are in the running state and I can reach my 3 frontend containers (via browser) and the redis containers as well (via netcat), the frontend, which is not on the same host as the redis, cannot reach the redis master. The frontend's PHP gives back a network exception. Can anybody help me understand why the containers cannot reach each other across the hosts? I hope I have described my setup accurately enough, and thanks in advance.
The Fedora guide you followed will only get you running on a single machine. It avoids the issues around setting up networking across nodes. For kubernetes to work, the following network set up must be satisfied: Every container should be able to talk to every other container, even across nodes. This means also that the bridge IP range for those containers must not overlap. Code running on any node that isn't in a container should be able to reach every container (and vice versa), even across nodes. It is not necessary (but useful) if computers on the network that aren't part of the cluster can reach the containers directly. There are a lot of ways to achieve this -- for instance the set up for vagrant sets up GRE tunnels between each node. On GCE we use features of the platform to do the routing. If you are on physical machines on a switch you can probably just do a big layer 2 network w/ bridges. A bulletproof way to get started (but perhaps not the most performant, depending on your set up) is to use something like flannel. We are working on making this stuff easier to start up (without using a mess of shell scripts) and are thinking of building something like flannel in so that there is a reasonable default.
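For reference, a rough sketch of the flannel approach (the 10.244.0.0/16 range is an assumption, and the Docker daemon flags vary by Docker version): publish an overlay network config in etcd, run flanneld on every node, then point Docker's bridge at the subnet flannel hands out.

etcdctl set /coreos.com/network/config '{ "Network": "10.244.0.0/16" }'
# on each node:
flanneld &
source /run/flannel/subnet.env
docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}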
###Instruction: I have read some introduction of these projects, but still cannot get a clear idea of the difference between Kubernetes and Flynn/Deis. Can anyone help? ### Response: Kubernetes is really three things: A way to dynamically schedule containers (actually, sets of containers called pods) to a cluster of machines. Manage and horizontally scale a lot of those pods using labels and helpers (ReplicationController) Communicate between sets of pods via services, expose a set of pods externally on a public IP and easily consume external services. This is necessary to deal with the horizontal scaling and the dynamic nature of how pods get placed/scheduled. This is all very much a tool set for managing compute across a set of machines. It isn't a full application PaaS. Kubernetes doesn't have any idea what an "application" is. Generally PaaS systems provide an easy way to take code and get it deployed and managed as an application. In fact, I expect to see specialized PaaS systems built on top of Kubernetes -- that is what RedHat OpenShift is doing. One way to think about Kubernetes is as a system for "logical" infrastructure (vs. traditional VM cloud systems, which are oriented around "physical" machines and VMs).
I have read some introduction of these projects, but still cannot get a clear idea of the difference between Kubernetes and Flynn/Deis. Can anyone help?
Kubernetes is really three things: A way to dynamically schedule containers (actually, sets of containers called pods) to a cluster of machines. Manage and horizontally scale a lot of those pods using labels and helpers (ReplicationController) Communicate between sets of pods via services, expose a set of pods externally on a public IP and easily consume external services. This is necessary to deal with the horizontal scaling and the dynamic nature of how pods get placed/scheduled. This is all very much a tool set for managing compute across a set of machines. It isn't a full application PaaS. Kubernetes doesn't have any idea what an "application" is. Generally PaaS systems provide an easy way to take code and get it deployed and managed as an application. In fact, I expect to see specialized PaaS systems built on top of Kubernetes -- that is what RedHat OpenShift is doing. One way to think about Kubernetes is as a system for "logical" infrastructure (vs. traditional VM cloud systems, which are oriented around "physical" machines and VMs).
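To make the label/service wiring concrete, here is a minimal hypothetical v1beta1 service definition; it load-balances across whatever pods currently carry the label name: frontend, which is what lets pods be rescheduled or scaled without reconfiguring their consumers:

kind: Service
apiVersion: v1beta1
id: frontend
port: 80
containerPort: 8080
selector:
  name: frontend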
###Instruction: I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos). I installed kubernetes according to the guide I found here and created the json for the pod using my images. When I execute sudo ./kubecfg list /pods I get the following error: F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods EDIT: Update Instead of running the commands myself I integrated into the vagrant file (as such) . This makes kubernetes work fine. HOWEVER after some time my vagrant ssh connection gets closed off. I reconnect and any kubernetes commands I specify result in the same error as above. EDIT 2: Update I managed to get it to run again, however I am unsure if it will run smoothly I had to re-execute the following commands. sudo systemctl start etcd sudo systemctl start download-kubernetes sudo systemctl start apiserver sudo systemctl start controller-manager sudo systemctl start kubelet sudo systemctl start proxy I believe it is in fact the apiserver that needs restarting What is the source of this "timeout"? (Where are any logs I can find for this matter) ### Response: Kubernetes development is moving insanely fast right now so this could be out of date by tomorrow. With that in mind, the kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides but there are a few tips that I have learned doing this myself. The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference if you want to get info about a pod you would run something like: ./kubectl get pods. With kubectl you will also need to set an env variable so kubectl know how to talk to the apiserver: KUBERNETES_MASTER=http://IPADDRESS:8080. The easiest way to debug exactly what is going on if you are using CoreOS is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit you can look at what's goin on by running: journalctl -f -u kube-apiserver from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with: systemctl start kube-apiserver
I'm running (from Windows 8.1) a Vagrant VM for CoreOS (yungsang/coreos). I installed kubernetes according to the guide I found here and created the json for the pod using my images. When I execute sudo ./kubecfg list /pods I get the following error: F0909 06:03:04.626251 01933 kubecfg.go:182] Got request error: Get http://localhost:8080/api/v1beta1/pods?labels=: dial tcp 127.0.0.1:8080: connection refused Same goes for sudo ./kubecfg -h http://127.0.0.1:8080 -c /vagrant/app.json create /pods EDIT: Update Instead of running the commands myself I integrated into the vagrant file (as such) . This makes kubernetes work fine. HOWEVER after some time my vagrant ssh connection gets closed off. I reconnect and any kubernetes commands I specify result in the same error as above. EDIT 2: Update I managed to get it to run again, however I am unsure if it will run smoothly I had to re-execute the following commands. sudo systemctl start etcd sudo systemctl start download-kubernetes sudo systemctl start apiserver sudo systemctl start controller-manager sudo systemctl start kubelet sudo systemctl start proxy I believe it is in fact the apiserver that needs restarting What is the source of this "timeout"? (Where are any logs I can find for this matter)
Kubernetes development is moving insanely fast right now so this could be out of date by tomorrow. With that in mind, the kubernetes folks recommend following one of their official installation guides. The best advice would be to start over fresh with one of the new installation guides but there are a few tips that I have learned doing this myself. The first thing to note is that Kubecfg is being deprecated in favor of kubectl. So for future reference if you want to get info about a pod you would run something like: ./kubectl get pods. With kubectl you will also need to set an env variable so kubectl know how to talk to the apiserver: KUBERNETES_MASTER=http://IPADDRESS:8080. The easiest way to debug exactly what is going on if you are using CoreOS is to tail the logs for the service you are interested in. So if you have a kube-apiserver unit you can look at what's goin on by running: journalctl -f -u kube-apiserver from the node that is running the apiserver. If that service isn't running, which may be the case, you can start it with: systemctl start kube-apiserver
###Instruction: I have been working with Docker for the last few days and I have created a basic Docker image with a Dockerfile. It just starts a web server. Now, I have been reading further and created an account at Google App Engine and want to deploy this image to Kubernetes. I feel lost. I don't understand how my Dockerfiles that I have locally are meant to be transferred to this platform. Don't I use them at all? I have seen some examples of Pod configuration, but as far as I can see they refer to images on Docker Hub. Could someone point me in the right direction on what to do? ### Response: I have not worked with Kubernetes, but as far as I know, I think you have to upload the containers to a docker image repository. You can use the official hub and upload your images there (public images are free to upload, and you can have one private one for free or pay for more). Also you can have a private docker image repository and configure kubernetes to use it. As I understand from this post, you can link a Google Cloud bucket with a docker repository running locally, then configure a kubernetes pod to point to that bucket, and all the rest of your pods can consume docker images from there.
I have been working with Docker for the last few days and I have created a basic Docker image with a Dockerfile. It just starts a web server. Now, I have been reading further and created an account at Google App Engine and want to deploy this image to Kubernetes. I feel lost. I don't understand how my Dockerfiles that I have locally are meant to be transferred to this platform. Don't I use them at all? I have seen some examples of Pod configuration, but as far as I can see they refer to images on Docker Hub. Could someone point me in the right direction on what to do?
I have not worked with Kubernetes, but as far as I know, I think you have to upload the containers to a docker image repository. You can use the official hub and upload your images there (public images are free to upload, and you can have one private one for free or pay for more). Also you can have a private docker image repository and configure kubernetes to use it. As I understand from this post, you can link a Google Cloud bucket with a docker repository running locally, then configure a kubernetes pod to point to that bucket, and all the rest of your pods can consume docker images from there.
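As a concrete sketch of the registry route (the image name myuser/webserver is hypothetical): build and push the image from the machine that has the Dockerfile, then reference it by name in the pod manifest so every node can pull it.

docker build -t myuser/webserver .
docker push myuser/webserver

and in the pod's container list:

containers:
  - name: webserver
    image: myuser/webserver
    ports:
      - containerPort: 80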
###Instruction: What is the usual way to organize pods in a cluster in Kubernetes? I have a Jenkins build server, Docker registry, Git repository and other development tools that I want to run in Google Container Engine. Do I create one cluster for each of them? Or can multiple pods be scheduled on each node? So my question is: Would you create one cluster that holds all these services, or multiple clusters? The same question applies to production, QA, etc. environments. Do I create one cluster for each environment or do I have them in the same cluster? ### Response: To answer your first question, multiple pods can be scheduled on each node. One of the best parts about Google Container Engine / Kubernetes is that it is really flexible, so you can structure your services in the way that works best for you. For your specific use case, I think that a single cluster would make sense because all of the applications that you want to run are closely related. You'll want to think a bit about choosing an appropriate size for your cluster (both the number of VMs and the size of each VM) to fit your entire workload. You can experiment with creating a single cluster for both your QA and Prod workloads, or you can split them across clusters. Until Kubernetes has better support for QoS (for scheduling pods), it probably makes more sense to keep the QA environment separate (and probably sized more modestly).
What is the usual way to organize pods in a cluster in Kubernetes? I have a Jenkins build server, Docker registry, Git repository and other development tools that I want to run in Google Container Engine. Do I create one cluster for each of them? Or can multiple pods be scheduled on each node? So my question is: Would you create one cluster that holds all these services, or multiple clusters? The same question applies to production, QA, etc. environments. Do I create one cluster for each environment or do I have them in the same cluster?
To answer your first question, multiple pods can be scheduled on each node. One of the best parts about Google Container Engine / Kubernetes is that it is really flexible, so you can structure your services in the way that works best for you. For your specific use case, I think that a single cluster would make sense because all of the applications that you want to run are closely related. You'll want to think a bit about choosing an appropriate size for your cluster (both the number of VMs and the size of each VM) to fit your entire workload. You can experiment with creating a single cluster for both your QA and Prod workloads, or you can split them across clusters. Until Kubernetes has better support for QoS (for scheduling pods), it probably makes more sense to keep the QA environment separate (and probably sized more modestly).
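For illustration, cluster size and machine type are chosen at creation time; a hedged sketch with a current gcloud SDK (cluster names, node counts and machine types here are assumptions):

gcloud container clusters create dev-tools --num-nodes 3 --machine-type n1-standard-2
gcloud container clusters create qa --num-nodes 2 --machine-type n1-standard-1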
###Instruction: How do pods that are controlled by a replication controller and "hidden" behind a service in Kubernetes write/read data? If I have an application that receives images from the user that need to be persisted, where do I store them? Because of the service in front I have no control over which node they are stored on if I use volumes. ### Response: I think the "simple" answer to your question is that you will need shared storage under your Kubernetes cluster, so that every pod accesses the same data. Then it wouldn't matter where the pods are running or which pod is actually executing the service. Maybe another solution would be Flocker; they describe themselves in short: Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux. Anyway, I think the storage question on Kubernetes or any other dockerized infrastructure is very interesting. It looks like google-app-engine doesn't support sharing a data store between apps by default, as pointed out in this SO Question
How do pods that are controlled by a replication controller and "hidden" behind a service in Kubernetes write/read data? If I have an application that receives images from the user that need to be persisted, where do I store them? Because of the service in front I have no control over which node they are stored on if I use volumes.
I think the "simple" answer to your question is that you will need shared storage under you Kubernetes cluster, so that every pods access the same data. Then it wouldn't matter where the pods are running and which pod is actually executing the service. May be another solution would be Flocker, they describe themself in short: Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux. Anyway I think the storage question on Kubernetes or any other dockerized infrastructure is very interesting. It looks like the google-app-engine doesn't support sharing data store between their apps by default like they pointed out in this SO Question
###Instruction: It seems like the best way to deploy a external facing application on Google Cloud would be to create an external load balancer with this line in the service configuration: { ... "createExternalLoadBalancer": true ... } This doesn't seem to work for AWS. I'm getting the following error when running the service create: requested an external service, but no cloud provider supplied I know about the PublicIPs setting in services, but that would involve knowing the service's IP in advance so I can set a domain name to it, but so far that doesn't look to be possible if I want to set it up using an external service like AWS ELB. What's the recommended way of doing this on AWS? ### Response: This is still a work in progress. Please see: https://github.com/GoogleCloudPlatform/kubernetes/pull/2672 For a proposal that starts to add support for AWS ELBs to Kubernetes, we're working to get that pull request integrated. Thanks!
It seems like the best way to deploy a external facing application on Google Cloud would be to create an external load balancer with this line in the service configuration: { ... "createExternalLoadBalancer": true ... } This doesn't seem to work for AWS. I'm getting the following error when running the service create: requested an external service, but no cloud provider supplied I know about the PublicIPs setting in services, but that would involve knowing the service's IP in advance so I can set a domain name to it, but so far that doesn't look to be possible if I want to set it up using an external service like AWS ELB. What's the recommended way of doing this on AWS?
This is still a work in progress. Please see: https://github.com/GoogleCloudPlatform/kubernetes/pull/2672 For a proposal that starts to add support for AWS ELBs to Kubernetes, we're working to get that pull request integrated. Thanks!
###Instruction: I am trying to run two Docker containers in the same Kubernetes pod and I want one of the containers to always run before the other. I remember learning about specifying such a dependency in the pod configuration file, but cannot find it now. The Kubernetes documentation does not explain it either. Here is the example pod configuration with two containers that I adopted from another Stackoverflow question. How should I change this pod configuration to run container type1 before type2? { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } Thanks in advance, Nodir. ### Response: Kubernetes currently does not allow specification of container startup dependencies. There has been some discussion in GitHub issues 1996 and 1589 that might help you out.
I am trying to run two Docker containers in the same Kubernetes pod and I want one of the containers to always run before the other. I remember learning about specifying such a dependency in the pod configuration file, but cannot find it now. The Kubernetes documentation does not explain it either. Here is the example pod configuration with two containers that I adopted from another Stackoverflow question. How should I change this pod configuration to run container type1 before type2? { "id": "podId", "desiredState": { "manifest": { "version": "v1beta1", "id": "podId", "containers": [{ "name": "type1", "image": "local/image" }, { "name": "type2", "image": "local/secondary" }] } }, "labels": { "name": "imageTest" } } Thanks in advance, Nodir.
Kubernetes currently does not allow specification of container startup dependencies. There has been some discussion in GitHub issues 1996 and 1589 that might help you out.
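Until such a dependency field exists, a common workaround is to have the dependent container wait for its peer at startup, relying on the fact that containers in a pod share one network namespace. A hedged sketch, replacing the type2 entry in the containers list above (the port 5000, the /run-secondary.sh script and the presence of nc in the image are all assumptions):

{ "name": "type2", "image": "local/secondary", "command": ["sh", "-c", "until nc -z 127.0.0.1 5000; do sleep 1; done; exec /run-secondary.sh"] }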
###Instruction: I am trying to connect to a Docker container on Google Container Engine(GKE) from my local machine through the internet by TCP protocol. So far I have used Kubernetes services which gives an external IP address, so the local machine can connect to the container on GKE using the service. When we create a service, we can specify only one port and cannot specify the port range. Please see the my-ros-service.yaml below. In this case, we can access the container by 11311 port from outside of GCE. However, some applications that run on my container expose dynamic ports to connect to other applications. Therefore I cannot determine the port number that the application uses and cannot create the Kubernetes services before I run the application. So far I have managed to connect to the container by creating many services which have different port while running the application. But this is not a realistic way to solve the problem. My question is that: How to connect to the application that exposes dynamic ports on Docker container from outside of the GCE by using Kubernetes service? If possible, can we create a service which exposes dynamic port for incoming connection before running the application which runs on the container? Any advice or information you could provide would be greatly appreciated. Thank you in advance. my-ros-service.yaml kind: Service apiVersion: v1beta1 id: my-ros-service port: 11311 selector: name: my-ros containerPort: 11311 createExternalLoadBalancer: true ### Response: I don't think there is currently a better solution than what you are doing. There is already a related issue, kubernetes issue 1802, about having multiple ports per service. I mentioned your requirements on that issue. You might want to follow up there with more information about your use case, such as what program you are running (if it is publicly available), and whether the dynamic ports come from a specific contiguous range.
I am trying to connect to a Docker container on Google Container Engine(GKE) from my local machine through the internet by TCP protocol. So far I have used Kubernetes services which gives an external IP address, so the local machine can connect to the container on GKE using the service. When we create a service, we can specify only one port and cannot specify the port range. Please see the my-ros-service.yaml below. In this case, we can access the container by 11311 port from outside of GCE. However, some applications that run on my container expose dynamic ports to connect to other applications. Therefore I cannot determine the port number that the application uses and cannot create the Kubernetes services before I run the application. So far I have managed to connect to the container by creating many services which have different port while running the application. But this is not a realistic way to solve the problem. My question is that: How to connect to the application that exposes dynamic ports on Docker container from outside of the GCE by using Kubernetes service? If possible, can we create a service which exposes dynamic port for incoming connection before running the application which runs on the container? Any advice or information you could provide would be greatly appreciated. Thank you in advance. my-ros-service.yaml kind: Service apiVersion: v1beta1 id: my-ros-service port: 11311 selector: name: my-ros containerPort: 11311 createExternalLoadBalancer: true
I don't think there is currently a better solution than what you are doing. There is already a related issue, kubernetes issue 1802, about having multiple ports per service. I mentioned your requirements on that issue. You might want to follow up there with more information about your use case, such as what program you are running (if it is publicly available), and whether the dynamic ports come from a specific contiguous range.
###Instruction: We are looking into using Docker plus either Mesos/Marathon or Kubernetes for hosting a cluster. However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. All of the ones that I have seen need to know about at least one other node before they can join the cluster. Some need to know about every node. However, in Kubernetes and Mesos, there's no way to know what those IP addresses are ahead of time. So, are there any best practices for this? If it helps, some technologies we're looking into deploying as containers are ElasticSearch, ActiveMQ, and MongoDB. There may be others. ### Response: However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. I think you're talking about HA/replicated/sharded apps here. At the moment, in kubernetes, you can accomplish this by making an api call listing all the "endpoints" of the service; that will tell you where your peers are running. We'd eventually like to support the use case you describe in a more first-class manner. I filed https://github.com/GoogleCloudPlatform/kubernetes/issues/3419 to maybe get something more standardized started here.
We are looking into using Docker plus either Mesos/Marathon or Kubernetes for hosting a cluster. However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. All of the ones that I have seen need to know about at least one other node before they can join the cluster. Some need to know about every node. However, in Kubernetes and Mesos, there's no way to know what those IP addresses are ahead of time. So, are there any best practices for this? If it helps, some technologies we're looking into deploying as containers are ElasticSearch, ActiveMQ, and MongoDB. There may be others.
However, the one issue that we haven't really seen any answers for is how to allow clustered services to connect to each other correctly. I think you're talking about HA/replicated/sharded apps here. At the moment, in kubernetes, you can accomplish this by making an api call listing all the "endpoints" of the service; that will tell you where your peers are running. We'd eventually like to support the use case you describe in a more first-class manner. I filed https://github.com/GoogleCloudPlatform/kubernetes/issues/3419 to maybe get something more standardized started here.
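As a sketch of that endpoints call (the service name mongo is hypothetical, and the exact path and response shape depend on the API version):

curl http://<apiserver>:8080/api/v1beta1/endpoints/mongo
# returns something like {"kind": "Endpoints", "endpoints": ["10.244.1.5:27017", "10.244.2.8:27017"], ...}

Each member can read this list at startup to find its peers before joining the cluster.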
###Instruction: I've read that AWS does not support Kubernetes and builds their own Docker orchestration engine EC2 Container Service. However, on Kubernetes getting-started -page there is a guide on how to run Kubernetes on AWS: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md Which is right? ### Response: You can install Kubernetes on a normal Amazon EC2 server. The new container service is a separate offering by Amazon, called ECS. EDIT: AWS released in 2018 a new container service for Kubernetes called EKS: https://aws.amazon.com/eks/ Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
I've read that AWS does not support Kubernetes and builds their own Docker orchestration engine EC2 Container Service. However, on Kubernetes getting-started -page there is a guide on how to run Kubernetes on AWS: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md Which is right?
You can install Kubernetes on a normal Amazon EC2 server. The new container service is a separate offering by Amazon, called ECS. EDIT: AWS released in 2018 a new container service for Kubernetes called EKS: https://aws.amazon.com/eks/ Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
###Instruction: Kubernetes is billed as a container cluster "scheduler/orchestrator", but I have no idea what this means. After reading the Kubernetes site and (vague) GitHub wiki, the best I can tell is that it somehow figures out what VMs are available/capable of running your Docker container, and then deploys them there. But that is just my guess, and I haven't seen any concrete verbiage in their documentation to support that. So what is Kubernetes, exactly, and what are some specific problems that it solves? ### Response: The purpose of Kubernetes is to make it easier to organize and schedule your application across a fleet of machines. At a high level it is an operating system for your cluster. Basically, it allows you to not worry about what specific machine in your datacenter each application runs on. Additionally it provides generic primitives for health checking and replicating your application across these machines, as well as services for wiring your application into micro-services so that each layer in your application is decoupled from other layers so that you can scale/update/maintain them independently. While it is possible to do many of these things in application layer, such solutions tend to be one-off and brittle, it's much better to have separation of concerns, where an orchestration system worries about how to run your application, and you worry about the code that makes up your application.
Kubernetes is billed as a container cluster "scheduler/orchestrator", but I have no idea what this means. After reading the Kubernetes site and (vague) GitHub wiki, the best I can tell is that it somehow figures out what VMs are available/capable of running your Docker container, and then deploys them there. But that is just my guess, and I haven't seen any concrete verbiage in their documentation to support that. So what is Kubernetes, exactly, and what are some specific problems that it solves?
The purpose of Kubernetes is to make it easier to organize and schedule your application across a fleet of machines. At a high level it is an operating system for your cluster. Basically, it allows you to not worry about what specific machine in your datacenter each application runs on. Additionally it provides generic primitives for health checking and replicating your application across these machines, as well as services for wiring your application into micro-services so that each layer in your application is decoupled from other layers so that you can scale/update/maintain them independently. While it is possible to do many of these things in application layer, such solutions tend to be one-off and brittle, it's much better to have separation of concerns, where an orchestration system worries about how to run your application, and you worry about the code that makes up your application.
###Instruction: I am trying to consume an event stream provided by the Kubernetes api using the requests module. I have run into what looks like a buffering problem: the requests module seems to lag by one event. I have code that looks something like this: r = requests.get('http://localhost:8080/api/v1beta1/watch/services', stream=True) for line in r.iter_lines(): print 'LINE:', line As Kubernetes emits event notifications, this code will only display the last event emitted when a new event comes in, which makes it almost completely useless for code that needs to respond to service add/delete events. I have solved this by spawning curl in a subprocess instead of using the requests library: p = subprocess.Popen(['curl', '-sfN', 'http://localhost:8080/api/watch/services'], stdout=subprocess.PIPE, bufsize=1) for line in iter(p.stdout.readline, b''): print 'LINE:', line This works, but at the expense of some flexibility. Is there a way to avoid this buffering problem with the requests library? ### Response: This behavior is due to a buggy implementation of the iter_lines method in the requests library. iter_lines iterates over the response content in chunk_size blocks of data using the iter_content iterator. If there are less than chunk_size bytes of data available for reading from the remote server (which will typically be the case when reading the last line of output), the read operation will block until chunk_size bytes of data are available. I have written my own iter_lines routine that operates correctly: import os def iter_lines(fd, chunk_size=1024): '''Iterates over the content of a file-like object line-by-line.''' pending = None while True: chunk = os.read(fd.fileno(), chunk_size) if not chunk: break if pending is not None: chunk = pending + chunk pending = None lines = chunk.splitlines() if lines and lines[-1]: pending = lines.pop() for line in lines: yield line if pending: yield(pending) This works because os.read will return less than chunk_size bytes of data rather than waiting for a buffer to fill.
I am trying to consume an event stream provided by the Kubernetes api using the requests module. I have run into what looks like a buffering problem: the requests module seems to lag by one event. I have code that looks something like this: r = requests.get('http://localhost:8080/api/v1beta1/watch/services', stream=True) for line in r.iter_lines(): print 'LINE:', line As Kubernetes emits event notifications, this code will only display the last event emitted when a new event comes in, which makes it almost completely useless for code that needs to respond to service add/delete events. I have solved this by spawning curl in a subprocess instead of using the requests library: p = subprocess.Popen(['curl', '-sfN', 'http://localhost:8080/api/watch/services'], stdout=subprocess.PIPE, bufsize=1) for line in iter(p.stdout.readline, b''): print 'LINE:', line This works, but at the expense of some flexibility. Is there a way to avoid this buffering problem with the requests library?
This behavior is due to a buggy implementation of the iter_lines method in the requests library. iter_lines iterates over the response content in chunk_size blocks of data using the iter_content iterator. If there are less than chunk_size bytes of data available for reading from the remote server (which will typically be the case when reading the last line of output), the read operation will block until chunk_size bytes of data are available. I have written my own iter_lines routine that operates correctly: import os def iter_lines(fd, chunk_size=1024): '''Iterates over the content of a file-like object line-by-line.''' pending = None while True: chunk = os.read(fd.fileno(), chunk_size) if not chunk: break if pending is not None: chunk = pending + chunk pending = None lines = chunk.splitlines() if lines and lines[-1]: pending = lines.pop() for line in lines: yield line if pending: yield(pending) This works because os.read will return less than chunk_size bytes of data rather than waiting for a buffer to fill.
###Instruction: Kubernetes has master and minion nodes. Will (can) Kubernetes run specified Docker containers on the master node(s)? I guess another way of saying it is: can a master also be a minion? Thanks for any assistance. ### Response: Update 2015-08-06: As of PR #12349 (available in 1.0.3 and will be available in 1.1 when it ships), the master node is now one of the available nodes in the cluster and you can schedule pods onto it just like any other node in the cluster. A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine. This is going to change significantly when Issue #6087 is implemented.
Kubernetes has master and minion nodes. Will (can) Kubernetes run specified Docker containers on the master node(s)? I guess another way of saying it is: can a master also be a minion? Thanks for any assistance.
Update 2015-08-06: As of PR #12349 (available in 1.0.3 and will be available in 1.1 when it ships), the master node is now one of the available nodes in the cluster and you can schedule pods onto it just like any other node in the cluster. A docker container can only be scheduled onto a kubernetes node running a kubelet (what you refer to as a minion). There is nothing preventing you from creating a cluster where the same machine (physical or virtual) runs both the kubernetes master software and a kubelet, but the current cluster provisioning scripts separate the master onto a distinct machine. This is going to change significantly when Issue #6087 is implemented.
###Instruction: What is the best way to deploy Google service account credentials inside a custom built CentOS Docker container for running either on Google's Container Engine or their 'container-vm'? This behavior happens automatically on the google/cloud-sdk container, which runs debian and includes things I'm not using such as app-eng/java/php. Ideally I am trying to access non-public resources inside my project, e.g., Google Cloud Storage bucket objects, without loging in and authorizing every single time a large number of these containers are launched. For example, on a base Centos container running on GCE with custom code and gcloud/gsutil installed, when you run: docker run --rm -ti custom-container gsutil ls You are prompted to run "gsutil config" to gain authorization, which I expect. However, pulling down the google/cloud-sdk container onto the same GCE and executing the same command, it seems to have cleverly configured inheritance of credentials (perhaps from the host container-vm's credentials?). This seems to bypass running "gsutil config" when running the container on GCE to access private resources. I am looking to replicate that behavior in a minimal build Centos container for mass deployment. ### Response: Update: as of 15 Dec 2016, the ability to update the scopes of an existing VM is now in beta; see this SO answer for more details. Old answer: One approach is to create the VM with appropriate scopes (e.g., Google Cloud Storage read-only or read-write) and then all processes on the VM, including containers, will have access to credentials that they can use via OAuth 2.0; see docs for Google Cloud Storage and Google Compute Engine. Note that once a VM is created with some set of scopes, they cannot be changed later (neither added nor removed), so you have to be sure to set the right set of scopes at the time of VM instance creation.
What is the best way to deploy Google service account credentials inside a custom built CentOS Docker container for running either on Google's Container Engine or their 'container-vm'? This behavior happens automatically on the google/cloud-sdk container, which runs debian and includes things I'm not using such as app-eng/java/php. Ideally I am trying to access non-public resources inside my project, e.g., Google Cloud Storage bucket objects, without loging in and authorizing every single time a large number of these containers are launched. For example, on a base Centos container running on GCE with custom code and gcloud/gsutil installed, when you run: docker run --rm -ti custom-container gsutil ls You are prompted to run "gsutil config" to gain authorization, which I expect. However, pulling down the google/cloud-sdk container onto the same GCE and executing the same command, it seems to have cleverly configured inheritance of credentials (perhaps from the host container-vm's credentials?). This seems to bypass running "gsutil config" when running the container on GCE to access private resources. I am looking to replicate that behavior in a minimal build Centos container for mass deployment.
Update: as of 15 Dec 2016, the ability to update the scopes of an existing VM is now in beta; see this SO answer for more details. Old answer: One approach is to create the VM with appropriate scopes (e.g., Google Cloud Storage read-only or read-write) and then all processes on the VM, including containers, will have access to credentials that they can use via OAuth 2.0; see docs for Google Cloud Storage and Google Compute Engine. Note that once a VM is created with some set of scopes, they cannot be changed later (neither added nor removed), so you have to be sure to set the right set of scopes at the time of VM instance creation.
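A hedged sketch of creating such a VM with a current gcloud SDK (the instance name and zone are assumptions; storage-rw grants read/write access to Cloud Storage):

gcloud compute instances create my-container-vm --zone us-central1-b --scopes storage-rw

Containers started on that VM can then pick up the service-account credentials from the metadata server without an interactive gsutil config.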
###Instruction: What patterns are valid in kubernetes for the names of containers and ports? I had underscores in the names of ports and containers and got an error. Replacing the underscores with hyphens worked. ### Response: Container names and port names must conform to the RFC 1123 definition of a DNS label. Names must be no longer than 63 characters, must start and end with a lowercase letter or number, and may contain lowercase letters, numbers, and hyphens. Expressed as a regular expression: [a-z0-9]([-a-z0-9]*[a-z0-9])? Here's the applicable code in GitHub for checking container names, checking port names, and defining acceptable names.
What patterns are valid in kubernetes for the names of containers and ports? I had underscores in the names of ports and containers and got an error. Replacing the underscores with hyphens worked.
Container names and port names must conform to the RFC 1123 definition of a DNS label. Names must be no longer than 63 characters, must start and end with a lowercase letter or number, and may contain lowercase letters, numbers, and hyphens. Expressed as a regular expression: [a-z0-9]([-a-z0-9]*[a-z0-9])? Here's the applicable code in GitHub for checking container names, checking port names, and defining acceptable names.
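If you want to validate names before submitting a config, a small sketch of the same check (the 63-character limit plus the RFC 1123 label pattern):

import re

NAME_RE = re.compile(r'^[a-z0-9]([-a-z0-9]*[a-z0-9])?$')

def is_valid_name(name):
    # True only for lowercase alphanumerics and hyphens, not starting/ending with a hyphen
    return len(name) <= 63 and NAME_RE.match(name) is not None

print(is_valid_name('redis-master'))   # True
print(is_valid_name('redis_master'))   # False - underscores are rejected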
###Instruction: I would like to try Kubernetes' hooks but I didn't find any example of how I should do it. As far as I know, with these hooks I can run bash scripts in freshly created containers and prior to terminating them. I've found just a short piece of documentation which says this is possible, but that's all. Does somebody have an example or some useful info? Thanks in advance. ### Response: I don't see any examples .yaml files, but Kubernetes API v1 describes the lifecycle events in the same manner. Currently, only PostStart and PreStop are defined and you should be able to use them by adding a lifecycle section to a container in your pod definition. Based on reading the API definition, something like this should work (disclaimer: I haven't actually tried it myself): containers: - name: lifecycle image: busybox lifecycle: postStart: exec: command: - "touch" - "/var/log/lifecycle/post-start" preStop: httpGet: path: "/abort" port: 8080
I would like to try Kubernetes' hooks but I didn't find any example of how I should do it. As far as I know, with these hooks I can run bash scripts in freshly created containers and prior to terminating them. I've found just a short piece of documentation which says this is possible, but that's all. Does somebody have an example or some useful info? Thanks in advance.
I don't see any examples .yaml files, but Kubernetes API v1 describes the lifecycle events in the same manner. Currently, only PostStart and PreStop are defined and you should be able to use them by adding a lifecycle section to a container in your pod definition. Based on reading the API definition, something like this should work (disclaimer: I haven't actually tried it myself): containers: - name: lifecycle image: busybox lifecycle: postStart: exec: command: - "touch" - "/var/log/lifecycle/post-start" preStop: httpGet: path: "/abort" port: 8080
###Instruction: I wonder if it is possible to change labels of pods on the fly so services route requests to those pods based on new labels. For example I have two services A and B. Then I have 10 pods, where 5 have label type = A (matches service A) and the other 5 have label type = B (matches service B). At some point I want to change labels on pods to achieve a configuration of 2 with label type = A and 8 with label type = B. I want to know if I can just change the labels and services will be updated accordingly without having to stop and start new pods with different labels. ### Response: You can change the labels on individual pods using the kubectl label command, documented here. Changing the label of a running pod should not cause it to be restarted, and services will automatically detect and handle label changes. So in other words, yes you can :)
I wonder if it is possible to change labels of pods on the fly so services route requests to those pods based on new labels. For example I have two services A and B. Then I have 10 pods, where 5 have label type = A (matches service A) and the other 5 have label type = B (matches service B). At some point I want to change labels on pods to achieve a configuration of 2 with label type = A and 8 with label type = B. I want to know if I can just change the labels and services will be updated accordingly without having to stop and start new pods with different labels.
You can change the labels on individual pods using the kubectl label command, documented here. Changing the label of a running pod should not cause it to be restarted, and services will automatically detect and handle label changes. So in other words, yes you can :)
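For example, moving one pod (hypothetical name) from service A's pool to service B's and then checking the result:

kubectl label pods my-pod-3 type=B --overwrite
kubectl get pods -l type=B

The --overwrite flag is needed because the pod already has a value for the type label.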
###Instruction: Is it possible to autoscale docker containers, which contain application servers (like wildfly/tomcat/jetty), within kubernetes? For example based on CPU & RAM use, or based on http requests? If there is a built-in feature for that I can't find it; or is it possible to write something like a configuration script for this? If so, where does the magic happen? ### Response: Autoscaling of containers is not yet supported and is not part of the near term 1.0 roadmap for Kubernetes (meaning that the core team isn't going to add it soon but external contributions are certainly welcome).
Is it possible to autoscale docker containers, which contain application servers (like wildfly/tomcat/jetty), within kubernetes? For example based on CPU & RAM use, or based on http requests? If there is a built-in feature for that I can't find it; or is it possible to write something like a configuration script for this? If so, where does the magic happen?
Autoscaling of containers is not yet supported and is not part of the near term 1.0 roadmap for Kubernetes (meaning that the core team isn't going to add it soon but external contributions are certainly welcome).
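For readers on later Kubernetes releases: horizontal pod autoscaling based on CPU was added after this answer was written. Assuming a metrics source is installed and using a hypothetical deployment name, it looks roughly like:

kubectl autoscale deployment my-wildfly --cpu-percent=80 --min=2 --max=10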
###Instruction: I am relatively new to all of these, but I'm having trouble getting a clear picture of the listed technologies. All of these try to solve different problems, but they do have things in common too. I would like to understand what things are common and what is different. It is likely that a combination of a few would be a great fit; if so, what are they? I am listing a few of them along with questions, but it would be great if someone lists all of them in detail and answers the questions. Kubernetes vs Mesos: This link What's the difference between Apache's Mesos and Google's Kubernetes provides a good insight into the differences, but I'm unable to understand why Kubernetes should run on top of Mesos. Is it more to do with the coming together of two open-source solutions? Kubernetes vs Core-OS Fleet: If I use kubernetes, is fleet required? How does Docker-Swarm fit into all the above? ### Response: Disclosure: I'm a lead engineer on Kubernetes. I think that Mesos and Kubernetes are largely aimed at solving similar problems of running clustered applications, but they have different histories and different approaches to solving the problem. Mesos focuses its energy on very generic scheduling, and plugging in multiple different schedulers. This means that it enables systems like Hadoop and Marathon to co-exist in the same scheduling environment. Mesos is less focused on running containers. Mesos existed prior to widespread interest in containers and has been re-factored in parts to support containers. In contrast, Kubernetes was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Fleet is a lower-level task distributor. It is useful for bootstrapping a cluster system, for example CoreOS uses it to distribute the kubernetes agents and binaries out to the machines in a cluster in order to turn up a kubernetes cluster. It is not really intended to solve the same distributed application development problems; think of it more like systemd/init.d/upstart for your cluster. It's not required if you run kubernetes; you can use other tools (e.g. Salt, Puppet, Ansible, Chef, ...) to accomplish the same binary distribution. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781 Join us on IRC @ #google-containers if you want to talk more.
I am relatively new to all of these, but I'm having trouble getting a clear picture of the listed technologies. All of these try to solve different problems, but they do have things in common too. I would like to understand what things are common and what is different. It is likely that a combination of a few would be a great fit; if so, what are they? I am listing a few of them along with questions, but it would be great if someone lists all of them in detail and answers the questions. Kubernetes vs Mesos: This link What's the difference between Apache's Mesos and Google's Kubernetes provides a good insight into the differences, but I'm unable to understand why Kubernetes should run on top of Mesos. Is it more to do with the coming together of two open-source solutions? Kubernetes vs Core-OS Fleet: If I use kubernetes, is fleet required? How does Docker-Swarm fit into all the above?
Disclosure: I'm a lead engineer on Kubernetes. I think that Mesos and Kubernetes are largely aimed at solving similar problems of running clustered applications, but they have different histories and different approaches to solving the problem. Mesos focuses its energy on very generic scheduling, and plugging in multiple different schedulers. This means that it enables systems like Hadoop and Marathon to co-exist in the same scheduling environment. Mesos is less focused on running containers. Mesos existed prior to widespread interest in containers and has been re-factored in parts to support containers. In contrast, Kubernetes was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Fleet is a lower-level task distributor. It is useful for bootstrapping a cluster system, for example CoreOS uses it to distribute the kubernetes agents and binaries out to the machines in a cluster in order to turn up a kubernetes cluster. It is not really intended to solve the same distributed application development problems; think of it more like systemd/init.d/upstart for your cluster. It's not required if you run kubernetes; you can use other tools (e.g. Salt, Puppet, Ansible, Chef, ...) to accomplish the same binary distribution. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781 Join us on IRC @ #google-containers if you want to talk more.
###Instruction: Using fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't seem to find anything that says how. It seems like you have to create the container specifically to launch with a certain command. Having a general purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables. Is it possible to specify the command a kubernetes pod runs within the Docker image at startup? ### Response: I spend 45 minutes looking for this. Then I post a question about it and find the solution 9 minutes later. There is an hint at what I wanted inside the Cassandra example. The command line below the image: id: cassandra kind: Pod apiVersion: v1beta1 desiredState: manifest: version: v1beta1 id: cassandra containers: - name: cassandra image: kubernetes/cassandra command: - /run.sh cpu: 1000 ports: - name: cql containerPort: 9042 - name: thrift containerPort: 9160 env: - key: MAX_HEAP_SIZE value: 512M - key: HEAP_NEWSIZE value: 100M labels: name: cassandra Despite finding the solution, it would be nice if there was somewhere obvious in the Kubernetes project where I could see all of the possible options for the various configuration files (pod, service, replication controller).
Using fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't seem to find anything that says how. It seems like you have to create the container specifically to launch with a certain command. Having a general purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables. Is it possible to specify the command a kubernetes pod runs within the Docker image at startup?
I spent 45 minutes looking for this. Then I posted a question about it and found the solution 9 minutes later. There is a hint at what I wanted inside the Cassandra example: the command line below the image: id: cassandra kind: Pod apiVersion: v1beta1 desiredState: manifest: version: v1beta1 id: cassandra containers: - name: cassandra image: kubernetes/cassandra command: - /run.sh cpu: 1000 ports: - name: cql containerPort: 9042 - name: thrift containerPort: 9160 env: - key: MAX_HEAP_SIZE value: 512M - key: HEAP_NEWSIZE value: 100M labels: name: cassandra Despite finding the solution, it would be nice if there were somewhere obvious in the Kubernetes project where I could see all of the possible options for the various configuration files (pod, service, replication controller).
###Instruction: I see Mesosphere building all kinds of applications on the Mesos Framework like Hadoop, Kubernetes, etc. but since there is the Marathon applications for long-running services, why not just use that? E.g. why not setup Kubernetes nodes on a bunch of Marathon services? Why implement Kubernetes directly on Framework API? Because scheduling is more efficient that way? Same question goes for Jenkins implementation, why not just run Jenkins master/slaves on top of Marathon... ### Response: Apache Mesos is a 2-level scheduler. The purpose of a framework is to provide the intelligence of high-level scheduling. Marathon provides the ability to schedule a task in the cluster, queue that task for scheduling and re-queue tasks that have failed. It is great at keeping long running processes running. It is like the init of the datacenter. As such, it is commonly used to make sure other frameworks are up and running such as Kubernetes-Mesos or Jenkins. There are many applications for which this level of scheduling is insufficient. Marathon can and often is used for running things like Apache Kafka, however this often falls short in many failure modes. Additionally, Marathon doesn't care if task runs multiple times on the same node, however running multiple Kafka nodes on the same slave is a bad idea. Using Hadoop as another example (since you referred it), HDFS has several types of nodes that need to be managed; NameNode, DataNode and JournalNode. Marathon does not know the order to start these in, or if these can be co-located on the same node or not. It doesn't know how to scale this application. The HDFS framework manages that intelligence. As far as scheduling efficiency, I'm not sure that is the goal. Apache Mesos is a 2-level scheduler for a reason. It is a highly efficient 2-level scheduler. The value of 2-level scheduling is to abstract the type of concerns I described above to a higher-level scheduler (which is termed by Mesos as frameworks). Marathon is still a great way to schedule and ensure high availability to other frameworks.
I see Mesosphere building all kinds of applications on the Mesos framework, like Hadoop, Kubernetes, etc., but since Marathon already exists for long-running services, why not just use that? E.g. why not set up Kubernetes nodes on a bunch of Marathon services? Why implement Kubernetes directly on the framework API? Because scheduling is more efficient that way? The same question goes for the Jenkins implementation: why not just run Jenkins masters/slaves on top of Marathon...
Apache Mesos is a 2-level scheduler. The purpose of a framework is to provide the intelligence of high-level scheduling. Marathon provides the ability to schedule a task in the cluster, queue that task for scheduling and re-queue tasks that have failed. It is great at keeping long running processes running. It is like the init of the datacenter. As such, it is commonly used to make sure other frameworks are up and running such as Kubernetes-Mesos or Jenkins. There are many applications for which this level of scheduling is insufficient. Marathon can and often is used for running things like Apache Kafka, however this often falls short in many failure modes. Additionally, Marathon doesn't care if task runs multiple times on the same node, however running multiple Kafka nodes on the same slave is a bad idea. Using Hadoop as another example (since you referred it), HDFS has several types of nodes that need to be managed; NameNode, DataNode and JournalNode. Marathon does not know the order to start these in, or if these can be co-located on the same node or not. It doesn't know how to scale this application. The HDFS framework manages that intelligence. As far as scheduling efficiency, I'm not sure that is the goal. Apache Mesos is a 2-level scheduler for a reason. It is a highly efficient 2-level scheduler. The value of 2-level scheduling is to abstract the type of concerns I described above to a higher-level scheduler (which is termed by Mesos as frameworks). Marathon is still a great way to schedule and ensure high availability to other frameworks.
###Instruction: Can Kubernetes automatically add or reduce the number of pods,when it monitors for increases or decreases in load (i.e. CPU load, traffic)? If it's possible, how can I configure it? ### Response: Auto scaling of pods is not yet available, but it's definitely on our roadmap, as mentioned by Brendan in a previous answer. It could actually be easily built outside of the core of Kubernetes, using the public Kubernetes API. If you'd rather wait for someone else to build it, though, it looks like a contributor has started planning a design for one.
Can Kubernetes automatically add or reduce the number of pods when it detects increases or decreases in load (e.g. CPU load, traffic)? If so, how can I configure it?
Auto scaling of pods is not yet available, but it's definitely on our roadmap, as mentioned by Brendan in a previous answer. It could actually be easily built outside of the core of Kubernetes, using the public Kubernetes API. If you'd rather wait for someone else to build it, though, it looks like a contributor has started planning a design for one.
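In the meantime, a very rough sketch of what such an external scaler could look like, built purely on the public API via kubectl. The replication controller name (frontend-controller) and the get_current_load command are invented placeholders, and the resize verb was later renamed to kubectl scale in newer releases:

#!/bin/bash
# Naive external "autoscaler" sketch -- not a built-in feature, just kubectl in a loop.
while true; do
  load=$(get_current_load)                               # placeholder: your own metric source
  if [ "$load" -gt 80 ]; then
    kubectl resize rc frontend-controller --replicas=4   # scale up
  elif [ "$load" -lt 20 ]; then
    kubectl resize rc frontend-controller --replicas=2   # scale back down
  fi
  sleep 60
done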
###Instruction: I have been messing around with openshift and reading as much documentation as i can. Yet, the authentication performed by default(using admin .kubeconfig) puzzles me. 1)Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask this because the contents of the certificate/key files are not the same as in .kubeconfig. 2).kubeconfig (AFAIK) is used to authenticate agains a kubernetes master. Yet, in OpenShift we are authentication against OpenShift master (right?). Why using .kubeconfig? Kinds regards and thank you for your patience. ### Response: OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube). So, the client is really an extension of Kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a Kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair. The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
I have been messing around with OpenShift and reading as much documentation as I can. Yet, the authentication performed by default (using the admin .kubeconfig) puzzles me. 1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask this because the contents of the certificate/key files are not the same as in .kubeconfig. 2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet, in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig? Kind regards, and thank you for your patience.
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube). So, the client is really an extension of Kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a Kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair. The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
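To see that for yourself, you can decode the embedded data and compare it with the certificate file on disk. A small sketch; the file names and paths below are assumptions and depend on how your OpenShift master was started:

# pull the base64 client certificate out of the kubeconfig and decode it
grep client-certificate-data admin.kubeconfig | awk '{print $2}' | base64 -d > /tmp/client.crt
# compare against the admin cert the master generated (path is an assumption)
diff /tmp/client.crt openshift.local.certificates/admin/cert.crt && echo "same certificate"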
###Instruction: If you know the number of minions ahead of time, I suppose you could create x number of replicas, provided that you give a host port to ensure that there is one replica per minion, but is there a way to say that a pod should run on every minion if you don't know the number of minions ahead of time (or if minions are added later)? ### Response: There's been a lot of talk within the project about creating a more straightforward feature to do this (and how to work around it in the meantime), but nothing official has been added yet, so these are really the two best options at the moment: You can put files containing the "manifest" section of your pod's config into each node's /etc/kubernetes/manifests directory. The kubelet on the machine will detect it and run it. You can use a host port and set the number of replicas to a number larger than the number of minions. It won't hurt the cluster to have a few too many replicas, as they won't be able to run anywhere until more nodes are added.
If you know the number of minions ahead of time, I suppose you could create x number of replicas, provided that you give a host port to ensure that there is one replica per minion, but is there a way to say that a pod should run on every minion if you don't know the number of minions ahead of time (or if minions are added later)?
There's been a lot of talk within the project about creating a more straightforward feature to do this (and how to work around it in the meantime), but nothing official has been added yet, so these are really the two best options at the moment: You can put files containing the "manifest" section of your pod's config into each node's /etc/kubernetes/manifests directory. The kubelet on the machine will detect it and run it. You can use a host port and set the number of replicas to a number larger than the number of minions. It won't hurt the cluster to have a few too many replicas, as they won't be able to run anywhere until more nodes are added.
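For the first option, the file you drop into /etc/kubernetes/manifests is just the manifest portion of a pod config. A hedged sketch, with the pod name, image and ports invented for illustration:

# /etc/kubernetes/manifests/node-agent.yaml  (one copy of this file per node)
version: v1beta1
id: node-agent
containers:
  - name: node-agent
    image: internaluser/node-agent   # example image, an assumption
    ports:
      - containerPort: 80
        hostPort: 9100               # optional, but handy for reaching it on each host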
###Instruction: How does Kubernetes' scheduler work? What I mean is that Kubernetes' scheduler appears to be very simple? My initial thought is that this scheduler is just a simple admission control system, not a real scheduler. Is it that correct? I found a short description, but it is not terribly informative: The kubernetes scheduler is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary. ### Response: The paragraph you quoted describes where we hope to be in the future (where the future is defined in units of months, not years). We're not there yet, but the scheduler does have a number of useful features already, enough for a simple deployment. In the rest of this reply, I'll explain how the scheduler works today. The scheduler is not just an admission controller; for each pod that is created, it finds the "best" machine for that pod, and if no machine is suitable, the pod remains unscheduled until a machine becomes suitable. The scheduler is configurable. It has two types of policies, FitPredicate (see master/pkg/scheduler/predicates.go) and PriorityFunction (see master/pkg/scheduler/priorities.go). I'll describe them. Fit predicates are required rules, for example the labels on the node must be compatible with the label selector on the pod (this rule is implemented in PodSelectorMatches() in predicates.go), and the sum of the requested resources of the container(s) already running on the machine plus the requested resources of the new container(s) you are considering scheduling onto the machine must not be greater than the capacity of the machine (this rule is implemented in PodFitsResources() in predicates.go; note that "requested resources" is defined as pod.Spec.Containers[n].Resources.Limits, and if you request zero resources then you always fit). If any of the required rules are not satisfied for a particular (new pod, machine) pair, then the new pod is not scheduled on that machine. If after checking all machines the scheduler decides that the new pod cannot be scheduled onto any machine, then the pod remains in Pending state until it can be satisfied by one of the machines. After checking all of the machines with respect to the fit predicates, the scheduler may find that multiple machines "fit" the pod. But of course, the pod can only be scheduled onto one machine. That's where priority functions come in. Basically, the scheduler ranks the machines that meet all of the fit predicates, and then chooses the best one. For example, it prefers the machine whose already-running pods consume the least resources (this is implemented in LeastRequestedPriority() in priorities.go). This policy spreads pods (and thus containers) out instead of packing lots onto one machine while leaving others empty. When I said that the scheduler is configurable, I mean that you can decide at compile time which fit predicates and priority functions you want Kubernetes to apply. Currently, it applies all of the ones you see in predicates.go and priorities.go.
How does Kubernetes' scheduler work? What I mean is that Kubernetes' scheduler appears to be very simple. My initial thought is that this scheduler is just a simple admission control system, not a real scheduler. Is that correct? I found a short description, but it is not terribly informative: The kubernetes scheduler is a policy-rich, topology-aware, workload-specific function that significantly impacts availability, performance, and capacity. The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on. Workload-specific requirements will be exposed through the API as necessary.
The paragraph you quoted describes where we hope to be in the future (where the future is defined in units of months, not years). We're not there yet, but the scheduler does have a number of useful features already, enough for a simple deployment. In the rest of this reply, I'll explain how the scheduler works today. The scheduler is not just an admission controller; for each pod that is created, it finds the "best" machine for that pod, and if no machine is suitable, the pod remains unscheduled until a machine becomes suitable. The scheduler is configurable. It has two types of policies, FitPredicate (see master/pkg/scheduler/predicates.go) and PriorityFunction (see master/pkg/scheduler/priorities.go). I'll describe them. Fit predicates are required rules, for example the labels on the node must be compatible with the label selector on the pod (this rule is implemented in PodSelectorMatches() in predicates.go), and the sum of the requested resources of the container(s) already running on the machine plus the requested resources of the new container(s) you are considering scheduling onto the machine must not be greater than the capacity of the machine (this rule is implemented in PodFitsResources() in predicates.go; note that "requested resources" is defined as pod.Spec.Containers[n].Resources.Limits, and if you request zero resources then you always fit). If any of the required rules are not satisfied for a particular (new pod, machine) pair, then the new pod is not scheduled on that machine. If after checking all machines the scheduler decides that the new pod cannot be scheduled onto any machine, then the pod remains in Pending state until it can be satisfied by one of the machines. After checking all of the machines with respect to the fit predicates, the scheduler may find that multiple machines "fit" the pod. But of course, the pod can only be scheduled onto one machine. That's where priority functions come in. Basically, the scheduler ranks the machines that meet all of the fit predicates, and then chooses the best one. For example, it prefers the machine whose already-running pods consume the least resources (this is implemented in LeastRequestedPriority() in priorities.go). This policy spreads pods (and thus containers) out instead of packing lots onto one machine while leaving others empty. When I said that the scheduler is configurable, I mean that you can decide at compile time which fit predicates and priority functions you want Kubernetes to apply. Currently, it applies all of the ones you see in predicates.go and priorities.go.
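One practical consequence: PodFitsResources only has something to add up if your containers actually declare their requested resources. In the v1beta1 manifests used elsewhere in these answers that means setting the per-container cpu and memory fields; a hedged sketch with invented names and numbers (check the units for your API version):

containers:
  - name: worker
    image: internaluser/worker   # example image, an assumption
    cpu: 500                     # 0.5 core, in millicores
    memory: 268435456            # 256 MiB, expressed in bytes in v1beta1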
###Instruction: From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different? Is Docker Swarm analogous to the POSIX API in the Docker world while Kubernetes/Mesosphere are different implementations? Or are they different layers? ### Response: Disclosure: I'm a lead engineer on Kubernetes Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google. Built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, where-as such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different? Is Docker Swarm analogous to the POSIX API in the Docker world while Kubernetes/Mesosphere are different implementations? Or are they different layers?
Disclosure: I'm a lead engineer on Kubernetes Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google. Built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes primitives for replication and service discovery as core primitives, where-as such things are added via frameworks in Mesos. The primary goal of Kubernetes is a system for building, running and managing distributed systems. Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
###Instruction: I'm researching: Docker Container Google Containers The goal is to use something of these 2 on our own physical boxes with Linux in the enterprise for Dev/Prod. However, I've read that Google reimplemented LXC (Linux Containers) and use their own lmctfy instead. Is it possible to use Google Containers on my Linux boxes without their cloud space? Your experience is highly appreciated. ### Response: Not sure I fully understand the question, but neither kubernetes (the framework on which Google Container Engine runs) nor docker require a particular cloud provider. AFAIK, you can use docker containers on any linux distro, and kubernetes supports a number of configurations for running on your own machines. See kubernetes getting started guides for details.
I'm researching: Docker containers, Google containers. The goal is to use one of these two on our own physical Linux boxes in the enterprise for Dev/Prod. However, I've read that Google reimplemented LXC (Linux Containers) and uses its own lmctfy instead. Is it possible to use Google containers on my Linux boxes without their cloud? Your experience is highly appreciated.
Not sure I fully understand the question, but neither kubernetes (the framework on which Google Container Engine runs) nor docker require a particular cloud provider. AFAIK, you can use docker containers on any linux distro, and kubernetes supports a number of configurations for running on your own machines. See kubernetes getting started guides for details.
###Instruction: I'm looking at deploying Kubernetes on top of a CoreOS cluster, but I think I've run into a deal breaker of sorts. If I'm using just CoreOS and fleet, I can specify within the unit files that I want certain services to not run on the same physical machine as other services (anti-affinity). This is sort of essential for high availability. But it doesn't look like kubernetes has this functionality yet. In my specific use-case, I'm going to need to run a few clusters of elasticsearch machines that need to always be available. If, for any reason, kubernetes decides to schedule all of my elasticsearch node containers for a given ES cluster on a single machine, (or even the majority on a single machine), and that machine dies, then my elasticsearch cluster will die with it. That can't be allowed to happen. It seems like there could be work-arounds. I could set up the resource requirements and machine specs such that only one elasticsearch instance could fit on each machine. Or I could probably use labels in some way to specify that certain elasticsearch containers should go on certain machines. I could also just provision way more machines than necessary, and way more ES nodes than necessary, and assume kubernetes will spread them out enough to be reasonably certain of high availability. But all of that seems awkward. It's much more elegant from a resource-management standpoint to just specify required hardware and anti-affinity, and let the scheduler optimize from there. So does Kubernetes support anti-affinity in some way I couldn't find? Or does anyone know if it will any time soon? Or should I be thinking about this another way? Do I have to write my own scheduler? ### Response: Looks like there are a few ways that kubernetes decides how to spread containers, and these are in active development. Firstly, of course there have to be the necessary resources on any machine for the scheduler to consider bringing up a pod there. After that, kubernetes spreads pods by replication controller, attempting to keep the different instances created by a given replication controller on different nodes. It seems like there was recently implemented a method of scheduling that considers services and various other parameters. https://github.com/GoogleCloudPlatform/kubernetes/pull/2906 Though I'm not completely clear on exactly how to use it. Perhaps in coordination with this scheduler config? https://github.com/GoogleCloudPlatform/kubernetes/pull/4674 Probably the most interesting issue to me is that none of these scheduling priorities are considered during scale-down, only scale-up. https://github.com/GoogleCloudPlatform/kubernetes/issues/4301 That's a bit of big deal, it seems like over time you could weird distributions of pods because they stay whereever they are originally placed. Overall, I think the answer to my question at the moment is that this is an area of kubernetes that is in flux (as to be expected with pre-v1). However, it looks like much of what I need will be done automatically with sufficient nodes, and proper use of replication controllers and services.
I'm looking at deploying Kubernetes on top of a CoreOS cluster, but I think I've run into a deal breaker of sorts. If I'm using just CoreOS and fleet, I can specify within the unit files that I want certain services to not run on the same physical machine as other services (anti-affinity). This is sort of essential for high availability. But it doesn't look like kubernetes has this functionality yet. In my specific use-case, I'm going to need to run a few clusters of elasticsearch machines that need to always be available. If, for any reason, kubernetes decides to schedule all of my elasticsearch node containers for a given ES cluster on a single machine, (or even the majority on a single machine), and that machine dies, then my elasticsearch cluster will die with it. That can't be allowed to happen. It seems like there could be work-arounds. I could set up the resource requirements and machine specs such that only one elasticsearch instance could fit on each machine. Or I could probably use labels in some way to specify that certain elasticsearch containers should go on certain machines. I could also just provision way more machines than necessary, and way more ES nodes than necessary, and assume kubernetes will spread them out enough to be reasonably certain of high availability. But all of that seems awkward. It's much more elegant from a resource-management standpoint to just specify required hardware and anti-affinity, and let the scheduler optimize from there. So does Kubernetes support anti-affinity in some way I couldn't find? Or does anyone know if it will any time soon? Or should I be thinking about this another way? Do I have to write my own scheduler?
Looks like there are a few ways that Kubernetes decides how to spread containers, and these are in active development. Firstly, of course there have to be the necessary resources on any machine for the scheduler to consider bringing up a pod there. After that, Kubernetes spreads pods by replication controller, attempting to keep the different instances created by a given replication controller on different nodes. It seems that a method of scheduling that considers services and various other parameters was recently implemented. https://github.com/GoogleCloudPlatform/kubernetes/pull/2906 Though I'm not completely clear on exactly how to use it. Perhaps in coordination with this scheduler config? https://github.com/GoogleCloudPlatform/kubernetes/pull/4674 Probably the most interesting issue to me is that none of these scheduling priorities are considered during scale-down, only scale-up. https://github.com/GoogleCloudPlatform/kubernetes/issues/4301 That's a bit of a big deal; it seems like over time you could get weird distributions of pods because they stay wherever they were originally placed. Overall, I think the answer to my question at the moment is that this is an area of Kubernetes that is in flux (as is to be expected pre-v1). However, it looks like much of what I need will be done automatically with sufficient nodes, and proper use of replication controllers and services.
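Until anti-affinity lands, the bluntest workaround hinted at in the question (one replica per machine) can be forced with a hostPort: a given host port can only be bound once per node, so the scheduler will never co-locate two replicas. A hedged v1beta1 sketch, with the controller name, image and port invented for illustration:

id: es-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 3
  replicaSelector: {name: es}
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: es-pod
        containers:
          - name: elasticsearch
            image: dockerfile/elasticsearch   # example image, an assumption
            ports:
              - containerPort: 9300
                hostPort: 9300                # at most one of these can fit per node
    labels: {name: es}
labels: {name: es}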
###Instruction: I have a kubernetes cluster running with 2 minions. Currently I make my service accessible in 2 steps: Start replication controller & pod Get minion IP (using kubectl get minions) and set it as publicIPs for the Service. What is the suggested practice for exposing service to the public? My approach seems wrong because I hard-code the IP-s of individual minion IP-s. It also seems to bypass load balancing capabilities of kubernetes services because clients would have to access services running on individual minions directly. To set up the replication controller & pod I use: id: frontend-controller kind: ReplicationController apiVersion: v1beta1 desiredState: replicas: 2 replicaSelector: name: frontend-pod podTemplate: desiredState: manifest: version: v1beta1 id: frontend-pod containers: - name: sinatra-docker-demo image: madisn/sinatra_docker_demo ports: - name: http-server containerPort: 4567 labels: name: frontend-pod To set up the service (after getting minion ip-s): kind: Service id: frontend-service apiVersion: v1beta1 port: 8000 containerPort: http-server selector: name: frontend-pod labels: name: frontend publicIPs: [10.245.1.3, 10.245.1.4] ### Response: As I mentioned in the comment above, the createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular for vagrant, which you are using locally. One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.
I have a kubernetes cluster running with 2 minions. Currently I make my service accessible in 2 steps: Start replication controller & pod Get minion IP (using kubectl get minions) and set it as publicIPs for the Service. What is the suggested practice for exposing service to the public? My approach seems wrong because I hard-code the IP-s of individual minion IP-s. It also seems to bypass load balancing capabilities of kubernetes services because clients would have to access services running on individual minions directly. To set up the replication controller & pod I use: id: frontend-controller kind: ReplicationController apiVersion: v1beta1 desiredState: replicas: 2 replicaSelector: name: frontend-pod podTemplate: desiredState: manifest: version: v1beta1 id: frontend-pod containers: - name: sinatra-docker-demo image: madisn/sinatra_docker_demo ports: - name: http-server containerPort: 4567 labels: name: frontend-pod To set up the service (after getting minion ip-s): kind: Service id: frontend-service apiVersion: v1beta1 port: 8000 containerPort: http-server selector: name: frontend-pod labels: name: frontend publicIPs: [10.245.1.3, 10.245.1.4]
As I mentioned in the comment above, the createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular for vagrant, which you are using locally. One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.
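For reference, on a provider where createExternalLoadBalancer is implemented (GCE, for example), it is just one extra field on the service definition from the question. A hedged sketch, not something that will work on the vagrant setup described above:

kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
labels:
  name: frontend
createExternalLoadBalancer: true   # ask the cloud provider for an external LB instead of hard-coding publicIPs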
###Instruction: I'm just starting with Kubernetes on 2 node (master-minion) setup on 2 private cloud servers. I've installed it, did basic config and got it running some simple pods/services from the master to the minion. My question is: How can I use persistent storage with the pods when not using Google Cloud? For my first tests I got a Ghost Blog pod running, but if i tear the pod the changes are lost. Tried adding volume to the pod, but can't actually find any documentation about how it is done when not on GC. My try: apiVersion: v1beta1 id: ghost kind: Pod desiredState: manifest: version: v1beta1 id: ghost containers: - name: ghost image: ghost volumeMounts: - name: ghost-persistent-storage mountPath: /var/lib/ghost ports: - hostPort: 8080 containerPort: 2368 volumes: - name: ghost-persistent-storage source: emptyDir: {} Found this: Persistent Installation of MySQL and WordPress on Kubernetes Can't figure it out how to add storage (NFS?) to my testing install. ### Response: In the new API (v1beta3), we've added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you have any problems!
I'm just starting with Kubernetes on a 2-node (master-minion) setup on 2 private cloud servers. I've installed it, did basic config and got it running some simple pods/services from the master to the minion. My question is: How can I use persistent storage with the pods when not using Google Cloud? For my first tests I got a Ghost blog pod running, but if I tear down the pod the changes are lost. I tried adding a volume to the pod, but can't actually find any documentation about how it is done when not on GC. My try: apiVersion: v1beta1 id: ghost kind: Pod desiredState: manifest: version: v1beta1 id: ghost containers: - name: ghost image: ghost volumeMounts: - name: ghost-persistent-storage mountPath: /var/lib/ghost ports: - hostPort: 8080 containerPort: 2368 volumes: - name: ghost-persistent-storage source: emptyDir: {} Found this: Persistent Installation of MySQL and WordPress on Kubernetes Can't figure out how to add storage (NFS?) to my testing install.
In the new API (v1beta3), we've added many more volume types, including NFS volumes. The NFS volume type assumes you already have an NFS server running somewhere to point the pod at. Give it a shot and let us know if you have any problems!
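A minimal sketch of what the Ghost pod from the question could look like with an NFS volume in v1beta3. The NFS server address and exported path are assumptions, and field layout may differ slightly between API versions:

apiVersion: v1beta3
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: ghost
      image: ghost
      ports:
        - containerPort: 2368
          hostPort: 8080
      volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
  volumes:
    - name: ghost-persistent-storage
      nfs:
        server: 10.0.0.5        # your NFS server, an assumption
        path: /exports/ghost    # the exported directory, an assumption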
###Instruction: I was trying to build kubernetes from source: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md#building-from-source I have docker installed on my ubuntu. royalharsh95@ubuntu:~$ sudo docker version Client version: 1.0.1 Client API version: 1.12 Go version (client): go1.2.1 Git commit (client): 990021a Server version: 1.0.1 Server API version: 1.12 Go version (server): go1.2.1 Git commit (server): 990021a I tried after sudo service docker start but got the same error. royalharsh95@ubuntu:~$ cd kubernetes royalharsh95@ubuntu:~/kubernetes$ make release build/release.sh +++ Verifying Prerequisites.... Can't connect to 'docker' daemon. please fix and retry. Possible causes: - On Mac OS X, boot2docker VM isn't installed or started - On Mac OS X, docker env variable isn't set appropriately. Run: $(boot2docker shellinit) - On Linux, user isn't in 'docker' group. Add and relogin. - Something like 'sudo usermod -a -G docker royalharsh95' - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8 - On Linux, Docker daemon hasn't been started or has crashed make: *** [release] Error 1 ### Response: The problem you are experiencing is caused by the fact that you are unable to access the Docker socket /var/run/docker.sock as a non-root user. When you run sudo docker version you are running the Docker client as root so it does not experience this problem. This is a basic Unix permissions problem and there are the standard solutions: You could run the Kubernetes build as root with sudo make release. You can fix the permissions on the socket such that you are able to use Docker without sudo. If you look at the permissions on the Docker socket, you will probably see something like: $ ls -l /var/run/docker.sock /var/run/docker.sock srw-rw----. 1 root docker 0 Mar 17 12:26 /var/run/docker.sock This shows a socket that is readable by root and by members of the docker group. In this case, I am a member of the docker group so I can run the docker client without sudo. You could set up the same thing in your environment. Note that of course you always need to start the Docker daemon as root, but in general you would expect to have this configured to start automatically when your system boots, rather than starting it manually.
I was trying to build kubernetes from source: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/binary_release.md#building-from-source I have docker installed on my ubuntu. royalharsh95@ubuntu:~$ sudo docker version Client version: 1.0.1 Client API version: 1.12 Go version (client): go1.2.1 Git commit (client): 990021a Server version: 1.0.1 Server API version: 1.12 Go version (server): go1.2.1 Git commit (server): 990021a I tried after sudo service docker start but got the same error. royalharsh95@ubuntu:~$ cd kubernetes royalharsh95@ubuntu:~/kubernetes$ make release build/release.sh +++ Verifying Prerequisites.... Can't connect to 'docker' daemon. please fix and retry. Possible causes: - On Mac OS X, boot2docker VM isn't installed or started - On Mac OS X, docker env variable isn't set appropriately. Run: $(boot2docker shellinit) - On Linux, user isn't in 'docker' group. Add and relogin. - Something like 'sudo usermod -a -G docker royalharsh95' - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8 - On Linux, Docker daemon hasn't been started or has crashed make: *** [release] Error 1
The problem you are experiencing is caused by the fact that you are unable to access the Docker socket /var/run/docker.sock as a non-root user. When you run sudo docker version you are running the Docker client as root so it does not experience this problem. This is a basic Unix permissions problem and there are the standard solutions: You could run the Kubernetes build as root with sudo make release. You can fix the permissions on the socket such that you are able to use Docker without sudo. If you look at the permissions on the Docker socket, you will probably see something like: $ ls -l /var/run/docker.sock /var/run/docker.sock srw-rw----. 1 root docker 0 Mar 17 12:26 /var/run/docker.sock This shows a socket that is readable by root and by members of the docker group. In this case, I am a member of the docker group so I can run the docker client without sudo. You could set up the same thing in your environment. Note that of course you always need to start the Docker daemon as root, but in general you would expect to have this configured to start automatically when your system boots, rather than starting it manually.
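If you go with the second option (adding yourself to the docker group), the usual sequence is as follows; log out and back in afterwards so the new group membership takes effect:

sudo groupadd docker            # only if the group does not already exist
sudo usermod -aG docker $USER   # add yourself to the docker group
# ...log out and log back in, then verify:
docker version                  # should now talk to the daemon without sudo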
###Instruction: I have deployed a Redis Cluster using Kubernetes. I am now attempting to use HAProxy to load balance. HAProxy is great for load balancing a redis cluster, IF you have static IPs. However, we don't have this when using kubernetes. While testing failover, Redis and Kubernetes handle election of a new master and deploying a new pod, respectively. However, kubernetes elects a new IP to the new pod. How can we inject this new IP into the HAProxy healthchecks and remove the old master IP? I have the following setup. +----+ +----+ +----+ +----+ | W1 | | W2 | | W3 | | W4 | Web application servers +----+ +----+ +----+ +----+ \ | | / \ | | / \ | | / +---------+ | HAProxy | +---------+ / \ \ +----+ +----+ +----+ | P1 | | P2 | | P3 | K8S pods = Redis + Sentinel +----+ +----+ +----+ Which is very similar to the setup described on the haproxy blog. ### Response: According to https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis it uses sentinel to manage the failover. This reduces the problem to the "normal" sentinel based solution. In this case I would recommend running HAProxy in the same container as the Senrinels and using a simple sentinel script to update the HAProxy Config and issue a reload. A simple HAProxy Config which o ly talks to the master can easily be a simple search, replace, reload script. Oh and don't use the HAProxy check in that blog post. It doesn't account for or detect split brain conditions. You could either go with a simple port check for availability, or write a custom check which queries each of the sentinels and only talks to the one with at least two sentinels reporting it as the master.
I have deployed a Redis Cluster using Kubernetes. I am now attempting to use HAProxy to load balance. HAProxy is great for load balancing a redis cluster, IF you have static IPs. However, we don't have this when using kubernetes. While testing failover, Redis and Kubernetes handle election of a new master and deploying a new pod, respectively. However, kubernetes elects a new IP to the new pod. How can we inject this new IP into the HAProxy healthchecks and remove the old master IP? I have the following setup. +----+ +----+ +----+ +----+ | W1 | | W2 | | W3 | | W4 | Web application servers +----+ +----+ +----+ +----+ \ | | / \ | | / \ | | / +---------+ | HAProxy | +---------+ / \ \ +----+ +----+ +----+ | P1 | | P2 | | P3 | K8S pods = Redis + Sentinel +----+ +----+ +----+ Which is very similar to the setup described on the haproxy blog.
According to https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/redis it uses Sentinel to manage the failover. This reduces the problem to the "normal" Sentinel-based solution. In this case I would recommend running HAProxy in the same container as the Sentinels and using a simple Sentinel script to update the HAProxy config and issue a reload. An HAProxy config which only talks to the master can easily be maintained by a simple search, replace, reload script. Oh, and don't use the HAProxy check in that blog post. It doesn't account for or detect split-brain conditions. You could either go with a simple port check for availability, or write a custom check which queries each of the Sentinels and only talks to the one that at least two Sentinels report as the master.
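As a sketch of that search/replace/reload idea: Sentinel can call a client-reconfig-script on failover, and that script can rewrite the single backend server line in haproxy.cfg and do a graceful reload. The file paths, script name and backend server name below are assumptions for illustration:

#!/bin/bash
# Wired into sentinel.conf as, for example:
#   sentinel client-reconfig-script mymaster /usr/local/bin/haproxy-refresh.sh
# Sentinel calls the script with: <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
NEW_MASTER_IP=$6
NEW_MASTER_PORT=$7
# point the single "redis-master" server entry at the newly promoted master
sed -i "s/server redis-master .*/server redis-master ${NEW_MASTER_IP}:${NEW_MASTER_PORT} check/" /etc/haproxy/haproxy.cfg
# graceful reload: new process takes over, old one finishes in-flight connections
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)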
###Instruction: I'm using google container engine and I can create pods and services in my cluster. But when I try to use the DNS feature (skydns) to lookup my services nothing is being found. If I log in to the non-master node, I can see the DNS container and can use 'host' command to do DNS lookup (installed with apt-get). But I can't find my service by it's name. It associates kubernetes.local with the IP of the service. Actually it associates kubernetes.local with the IP of every one of my services (I have 9). But it does not associate the service name "my-service-name". Anyone know the trick to get this to work? Either creating the service isn't causing skydns to create the DNS entry (maybe there is some magic to make it work)...or I'm just completely clueless (less magical, perhaps more likely). I don't know which. b ### Response: There's a little bit of magic involved that's intended to make DNS in Kubernetes more convenient from within a pod. Let me try to explain. The way that the DNS names are constructed within Kubernetes is <service-name>.<namespace>.kubernetes.local. This is why kubernetes.local is resolving from on your node, but my-service-name isn't. Assuming your service is defined in the default namespace (it will be unless you explicitly created it in a different namespace), you should be able to resolve it at my-service-name.default.kubernetes.local. The docs around DNS assume that you care about how to resolve service names from within a pod rather than directly on the host. Within your pod, DNS should be set up to first search for names you specify relative to default.kubernetes.local and kubernetes.local, meaning that from within any pod in the cluster that isn't kube-dns (it's handled specially) you should be able to resolve your service using either my-service-name or my-service-name.default.kubernetes.local. If you want to try it out, attach to one of your cluster's fluentd pods using docker exec and try looking up your service from within the container. Note that the namespace changed from kubernetes.local to cluster.local between versions 0.17.0 and 0.18.0, so check your cluster's version (using kubectl version) if your first attempt doesn't work.
I'm using Google Container Engine and I can create pods and services in my cluster. But when I try to use the DNS feature (skydns) to look up my services, nothing is found. If I log in to the non-master node, I can see the DNS container and can use the 'host' command to do DNS lookups (installed with apt-get). But I can't find my service by its name. It associates kubernetes.local with the IP of the service. Actually it associates kubernetes.local with the IP of every one of my services (I have 9). But it does not associate the service name "my-service-name". Anyone know the trick to get this to work? Either creating the service isn't causing skydns to create the DNS entry (maybe there is some magic to make it work)... or I'm just completely clueless (less magical, perhaps more likely). I don't know which.
There's a little bit of magic involved that's intended to make DNS in Kubernetes more convenient from within a pod. Let me try to explain. The way that the DNS names are constructed within Kubernetes is <service-name>.<namespace>.kubernetes.local. This is why kubernetes.local is resolving from on your node, but my-service-name isn't. Assuming your service is defined in the default namespace (it will be unless you explicitly created it in a different namespace), you should be able to resolve it at my-service-name.default.kubernetes.local. The docs around DNS assume that you care about how to resolve service names from within a pod rather than directly on the host. Within your pod, DNS should be set up to first search for names you specify relative to default.kubernetes.local and kubernetes.local, meaning that from within any pod in the cluster that isn't kube-dns (it's handled specially) you should be able to resolve your service using either my-service-name or my-service-name.default.kubernetes.local. If you want to try it out, attach to one of your cluster's fluentd pods using docker exec and try looking up your service from within the container. Note that the namespace changed from kubernetes.local to cluster.local between versions 0.17.0 and 0.18.0, so check your cluster's version (using kubectl version) if your first attempt doesn't work.
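To check it quickly from inside a pod (via the fluentd trick above, or any other container in the cluster), something like the following, assuming nslookup (or host/dig) is available in the image and the service lives in the default namespace:

# inside a pod, e.g. after: docker exec -it <fluentd-container-id> /bin/sh
nslookup my-service-name                               # short name, resolved via the search path
nslookup my-service-name.default.kubernetes.local      # fully qualified, clusters up to 0.17.0
nslookup my-service-name.default.cluster.local         # fully qualified, clusters 0.18.0 and later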
###Instruction: I have two instances of an app container (happens to be a Node.JS app, but that shouldn't matter) running in a Kubernetes cluster on Google Container Engine. I'd like to scale it up to three instances. My cluster has a master and two minion nodes, with a replication controller and a load balancer service. The replication controller keeps my app container running happily on the two nodes. I can see that there is a handy gcloud alpha container kubectl resize command which lets me change the number of replicas, but I don't see how or if I can increase the size of the cluster itself, so that it can spin up another minion node. I only see gcloud commands to create, delete, list and describe clusters; nothing to resize them. If I can't resize my cluster, then to scale up I'd need to create a whole new cluster and kill the old one. Am I missing something? Also, are there plans to support auto-scaling? ### Response: Update (June 2015): Kubernetes on GCE now uses managed instance groups which you can manually resize to add new nodes to your cluster. There isn't currently a way to add nodes to your existing Google Container Engine cluster. We are currently adding support to Kubernetes to allow clusters to have nodes dynamically added but the work isn't quite finished yet. Once the feature is available in Kubernetes you can expect that it will show up in Google Container Engine shortly after the next Kubernetes release. In the mean time, it should be possible to run more than two replicas of your node.js application on the existing two VMs.
I have two instances of an app container (happens to be a Node.JS app, but that shouldn't matter) running in a Kubernetes cluster on Google Container Engine. I'd like to scale it up to three instances. My cluster has a master and two minion nodes, with a replication controller and a load balancer service. The replication controller keeps my app container running happily on the two nodes. I can see that there is a handy gcloud alpha container kubectl resize command which lets me change the number of replicas, but I don't see how or if I can increase the size of the cluster itself, so that it can spin up another minion node. I only see gcloud commands to create, delete, list and describe clusters; nothing to resize them. If I can't resize my cluster, then to scale up I'd need to create a whole new cluster and kill the old one. Am I missing something? Also, are there plans to support auto-scaling?
Update (June 2015): Kubernetes on GCE now uses managed instance groups which you can manually resize to add new nodes to your cluster. There isn't currently a way to add nodes to your existing Google Container Engine cluster. We are currently adding support to Kubernetes to allow clusters to have nodes dynamically added but the work isn't quite finished yet. Once the feature is available in Kubernetes you can expect that it will show up in Google Container Engine shortly after the next Kubernetes release. In the mean time, it should be possible to run more than two replicas of your node.js application on the existing two VMs.
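With managed instance groups, the manual resize is a couple of gcloud calls. The group name and zone below are assumptions (the real group name is generated, so list it first), and the exact command group has moved around between gcloud releases:

# find the instance group backing your container cluster
gcloud compute instance-groups managed list
# then grow it from 2 to 3 nodes (group name and zone are examples)
gcloud compute instance-groups managed resize gke-mycluster-node-group --size 3 --zone us-central1-b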
###Instruction: ActiveMQ built-in cluster discovery mechanisms are basically based on multicast (excepting LDAP here). Openshift v3 / Kubernetes don't support well multicast as it could be quite bad or misfunctioning on a public cloud infrastructure. Is there any existing option to enable network of activemq brokers discovery within Openshift v3 ? I saw the project jboss-openshift/openshift-ping enabling discovery for JGroups members on Openshift. I am looking for an equivalent for ActiveMQ. ### Response: fabric8 is a project that has a number of value-adds for OS3 / kubernetes platforms http://fabric8.io/ There is clustered ActiveMQ out of the box http://fabric8.io/guide/fabric8MQ.html As the project is in development, you may get best help on irc chat on #fabric8 on freenode - all the guys hang out there.
ActiveMQ's built-in cluster discovery mechanisms are basically based on multicast (LDAP excepted). OpenShift v3 / Kubernetes do not support multicast well, as it can be unreliable or non-functional on public cloud infrastructure. Is there any existing option to enable discovery for a network of ActiveMQ brokers within OpenShift v3? I saw the jboss-openshift/openshift-ping project, which enables discovery for JGroups members on OpenShift. I am looking for an equivalent for ActiveMQ.
fabric8 is a project that has a number of value-adds for OS3 / kubernetes platforms http://fabric8.io/ There is clustered ActiveMQ out of the box http://fabric8.io/guide/fabric8MQ.html As the project is in development, you may get best help on irc chat on #fabric8 on freenode - all the guys hang out there.
###Instruction: I am planning to test Kubernetes locally, but would like to ask some theoretic questions before. I created a pipeline in python that takes as input a whole bunch of files from a directory, and created a docker image out of it (this is my Pod) What I understood from the documentation is that the Kubernetes scheduler will choose automatically the minion to deploy for a given task, my question is, using an 8G memory laptop, is there a 'rule' to follow before creating the minion (specifying the number of minions to deploy) based on the amount of memory available in a machine (regardless if it is a laptop or a cluster) ? Thanks ### Response: You would typically only ever have one minion/host. So if you are deploying your minions on physical hardware, there is a 1:1 mapping between minions and physical hosts. If you are deploying into a virtual cluster on your laptop, you will want to make sure that each virtual minion has enough memory to run at least a single instance of whatever containers you plan on deploying. "How much is enough?" is a question that only you can answer.
I am planning to test Kubernetes locally, but would like to ask some theoretical questions first. I created a pipeline in Python that takes as input a whole bunch of files from a directory, and I created a Docker image out of it (this is my Pod). What I understood from the documentation is that the Kubernetes scheduler will automatically choose the minion to deploy to for a given task. My question is: using a laptop with 8G of memory, is there a 'rule' to follow before creating the minions (i.e. for choosing the number of minions to deploy) based on the amount of memory available on the machine (regardless of whether it is a laptop or a cluster)? Thanks
You would typically only ever run one minion per host. So if you are deploying your minions on physical hardware, there is a 1:1 mapping between minions and physical hosts. If you are deploying into a virtual cluster on your laptop, you will want to make sure that each virtual minion has enough memory to run at least a single instance of whatever containers you plan on deploying. "How much is enough?" is a question that only you can answer.
###Instruction: I started a cluster in aws following the guides and then went about following the guestbook. The problem I have is accessing it externally. I set the PublicIP to the ec2 publicIP and then use the ip to access it in the browser with port 8000 as specified in the guide. Nothing showed. To make sure it was actually the service that wasn't showing anything I then removed the service and set a host port to be 8000. When I went to the ec2 instance IP I could access it correctly. So it seems there is a problem with my setup or something. The one thing I can think of is, I am inside a VPC with an internet gateway. I didn't add any of my json files I used, because they are almost exactly the same as the guestbook example with a few changes to allow my ec2 PublicIP, and a few changes for the VPC. ### Response: On AWS you have to use your PRIVATE ip address with Kubernetes' services, since your instance is not aware of its public ip. The NAT-ing on amazon's side is done in such a way that your service will be accessible using this configuration. Update: please note that the possibility to set the public IP of a service explicitly was removed in the v1 API, so this issue is not relevant anymore. Please check the following documentation page for workarounds: https://kubernetes.io/docs/user-guide/services/
I started a cluster in AWS following the guides and then went about following the guestbook example. The problem I have is accessing it externally. I set the PublicIP to the EC2 public IP and then used that IP to access it in the browser on port 8000, as specified in the guide. Nothing showed. To make sure it was actually the service that wasn't serving anything, I then removed the service and set a host port of 8000. When I went to the EC2 instance IP I could access it correctly. So it seems there is a problem with my setup or something. The one thing I can think of is that I am inside a VPC with an internet gateway. I didn't include the JSON files I used, because they are almost exactly the same as the guestbook example, with a few changes to allow my EC2 PublicIP and a few changes for the VPC.
On AWS you have to use your PRIVATE IP address with Kubernetes services, since your instance is not aware of its public IP. The NAT-ing on Amazon's side is done in such a way that your service will be accessible using this configuration. Update: please note that the possibility to set the public IP of a service explicitly was removed in the v1 API, so this issue is no longer relevant. Please check the following documentation page for workarounds: https://kubernetes.io/docs/user-guide/services/
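Since the explicit public IP field was dropped in the v1 API, the closest present-day way to reproduce the guestbook setup described above is a NodePort service, which you then reach through the instance's address (behind Amazon's NAT this lines up with the private-address advice in the answer). The service name, selector, and port numbers below are assumptions for illustration only:

apiVersion: v1
kind: Service
metadata:
  name: frontend              # hypothetical service name
spec:
  type: NodePort
  selector:
    name: frontend            # must match the labels on your front-end pods
  ports:
  - port: 8000                # port used inside the cluster
    targetPort: 80            # container port
    nodePort: 30080           # port opened on every node; must fall in the NodePort range

With this in place you would browse to the EC2 instance on port 30080, provided the security group opens that port.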
###Instruction: I have recently started exploring Kubernetes and have done a practical implementation of pods, services and replication controllers on Google Cloud. I have some doubts about service and network access. First, where is the service deployed that will work as a load balancer for a group of pods? Second, does a request to access an application running in a pod via a service load balancer go through the master, or directly to the minion nodes? ### Response: A service proxy runs on each node in the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP, where it will be captured by the proxy and directed to an appropriate pod. If you followed my description, you can see that service requests do not go through the master. They bounce through a proxy running on the nodes. As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, and pods, but this proxy isn't in the packet path for services that you create within the cluster.
I have recently started exploring Kubernetes and have done a practical implementation of pods, services and replication controllers on Google Cloud. I have some doubts about service and network access. First, where is the service deployed that will work as a load balancer for a group of pods? Second, does a request to access an application running in a pod via a service load balancer go through the master, or directly to the minion nodes?
A service proxy runs on each node in the cluster. From inside the cluster, when you make a request to a service IP, it is intercepted by the service proxy and routed to a pod matching the label selector for the service. If you have specified an external load balancer for your service, the load balancer will pick a node to send the request to, at which point it will be captured by the proxy and directed to an appropriate pod. If you are using public IPs, then your router will send the request to the node with the public IP, where it will be captured by the proxy and directed to an appropriate pod. If you followed my description, you can see that service requests do not go through the master. They bounce through a proxy running on the nodes. As an aside, there is also a proxy running on the master, which you can use to reach nodes, services, and pods, but this proxy isn't in the packet path for services that you create within the cluster.
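To make the label-selector mechanics concrete, here is a minimal, hypothetical service definition; a request to its cluster IP is intercepted by the kube-proxy on whatever node it arrives at and forwarded to one of the pods carrying the matching label, without ever touching the master:

apiVersion: v1
kind: Service
metadata:
  name: backend               # hypothetical service name
spec:
  selector:
    app: backend              # the proxy forwards traffic to pods labelled app=backend
  ports:
  - port: 80                  # port exposed on the service's cluster IP
    targetPort: 8080          # port the backend containers actually listen on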
###Instruction: I have a distributed application running on virtual machines, in which one service runs in active/passive mode. The active VM provides the service via a public IP. Should the active VM fail, the public IP is moved to the passive VM, which becomes active and starts to provide the service. How does this pattern fit into a containerized application managed by Kubernetes? If I use a replication controller with replicas=1, then in case of node/minion failure the replication controller will reschedule the pod (= VM in my current application) on another minion, but this would likely cause higher downtime compared with my current solution, where only the IP resource is moved. If I use a replication controller with replicas=2, then I would need a different configuration for the two pods (one with the public IP, the other without), which is an anti-pattern? Furthermore, is there no designed way in Kubernetes to support a virtual IP (moving around between pods)? Or should I use replicas=2 and implement something myself to manage the IP (or maybe make use of Pacemaker? That would introduce another problem: there would be two cluster managers in my setup, Kubernetes and Pacemaker/Corosync). So, how should this be done? ### Response: It sounds like your application is using its own master election scheme between the two VMs acting as a load balancer and you know internally which one is currently the master. This can be achieved today in Kubernetes using a service that spans both pods (master and standby) and a readiness probe that only returns success for the currently active master. Failure of a readiness probe removes the pod from the endpoints list, so no traffic will be directed to the node that isn't the master. When you need to do failover, the standby would report healthy to the readiness probe (and the master would report unhealthy or be unreachable) at which point traffic to the service would only land on the standby (now acting as the master). You can create the service that spans the two pods with an external IP such that it is reachable from outside of your cluster.
I have a distributed application running on virtual machines, in which one service runs in active/passive mode. The active VM provides the service via a public IP. Should the active VM fail, the public IP is moved to the passive VM, which becomes active and starts to provide the service. How does this pattern fit into a containerized application managed by Kubernetes? If I use a replication controller with replicas=1, then in case of node/minion failure the replication controller will reschedule the pod (= VM in my current application) on another minion, but this would likely cause higher downtime compared with my current solution, where only the IP resource is moved. If I use a replication controller with replicas=2, then I would need a different configuration for the two pods (one with the public IP, the other without), which is an anti-pattern? Furthermore, is there no designed way in Kubernetes to support a virtual IP (moving around between pods)? Or should I use replicas=2 and implement something myself to manage the IP (or maybe make use of Pacemaker? That would introduce another problem: there would be two cluster managers in my setup, Kubernetes and Pacemaker/Corosync). So, how should this be done?
It sounds like your application is using its own master election scheme between the two VMs acting as a load balancer and you know internally which one is currently the master. This can be achieved today in Kubernetes using a service that spans both pods (master and standby) and a readiness probe that only returns success for the currently active master. Failure of a readiness probe removes the pod from the endpoints list, so no traffic will be directed to the node that isn't the master. When you need to do failover, the standby would report healthy to the readiness probe (and the master would report unhealthy or be unreachable) at which point traffic to the service would only land on the standby (now acting as the master). You can create the service that spans the two pods with an external IP such that it is reachable from outside of your cluster.
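A hedged sketch of the readiness-probe half of this approach, with made-up names, image, and probe path: each of the two replicas exposes an endpoint that only answers successfully while it holds the master role, so the service's endpoints list contains exactly one pod at any time:

apiVersion: v1
kind: Pod
metadata:
  name: lb-a                           # one of the two replicas; the standby would be lb-b
  labels:
    app: my-lb                         # both replicas carry this label
spec:
  containers:
  - name: lb
    image: example/my-lb:latest        # hypothetical image
    readinessProbe:
      httpGet:
        path: /am-i-master             # hypothetical endpoint: returns 200 only on the active node
        port: 8080
      periodSeconds: 5                 # how often the kubelet re-checks the role

A service selecting app: my-lb (optionally given an external IP) then only ever routes traffic to whichever replica currently reports ready, which is what moves the "virtual IP" during failover.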
###Instruction: Based on the following setup of Kubernetes on Microsoft Azure, I was able to deploy my Docker containers using the same configuration settings. We have two categories of containers, front-end and back-end, where the back-end consists of highly intensive processing. The latter we want to run on Large instances, whereas the front-end will run on Small instances. What is the best option to separate these? My guess would be labeling the hosts, but I was not able to find this in the docs or in the examples. ### Response: Currently, adding labels to the hosts and restricting your pods to nodes with the appropriate label (e.g. 'Large' / 'Small') is the best way to do this. For an example, see examples/node-selection. Kubernetes has a resource model which, once implemented, will allow you to tell the scheduler what resources each pod needs, and the system will ensure that the pod is placed on a node with available resources. Unfortunately, as you can tell from the design document on GitHub, this isn't fully implemented yet.
Based on the following setup of Kubernetes on Microsoft Azure, I was able to deploy my Docker containers using the same configuration settings. We have two categories of containers, front-end and back-end, where the back-end consists of highly intensive processing. The latter we want to run on Large instances, whereas the front-end will run on Small instances. What is the best option to separate these? My guess would be labeling the hosts, but I was not able to find this in the docs or in the examples.
Currently, adding labels to the hosts and restricting your pods to nodes with the appropriate label (e.g. 'Large' / 'Small') is the best way to do this. For an example, see examples/node-selection. Kubernetes has a resource model which, once implemented, will allow you to tell the scheduler what resources each pod needs, and the system will ensure that the pod is placed on a node with available resources. Unfortunately, as you can tell from the design document on GitHub, this isn't fully implemented yet.
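A minimal sketch of that labelling approach, assuming hypothetical node names, label keys, and images: label the hosts once, then constrain each pod (or the pod template in its replication controller) with a nodeSelector:

# label the hosts once, for example:
#   kubectl label nodes node-1 instance-size=large
#   kubectl label nodes node-2 instance-size=small
apiVersion: v1
kind: Pod
metadata:
  name: backend-worker               # hypothetical back-end pod
spec:
  nodeSelector:
    instance-size: large             # only schedules onto nodes carrying this label
  containers:
  - name: worker
    image: example/backend:latest    # hypothetical image

The front-end pods would use instance-size: small the same way; a pod whose selector matches no node simply stays pending.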
