I have been hearing about containers for some time now but have been too busy with work (we have not introduced containers yet...) to take a good look at this technology. Over the past ~6 months I have spent a few weekends reading up on it, including the different orchestration platforms like Docker Swarm and Kubernetes, and I have to say I'm not only impressed with the range of possibilities but believe this could be one of the ways to safely migrate workloads to the public cloud without fear of vendor lock-in.
Although I'm no expert in this field, I wanted to share a quick tutorial on getting Kubernetes installed and running in a vSphere environment. There are many tutorials for AWS & Azure, but I did not find many for vSphere, which I think is important because it represents one way to maintain a private cloud presence.
Before jumping into the steps, let's take a look at a high-level conceptual diagram of what we are trying to accomplish. We need to deploy a Kubernetes cluster to our vSphere environment. Next we will create a deployment using a pre-existing YAML file; this will run on our two-node cluster. Finally we will expose this deployment using a Kubernetes service.
STEP 1 - k8s cluster on vSphere 6.x
The following guide is very good and it's what I used to get a k8s cluster set up on vSphere 6.0: https://github.com/kubernetes/kubernetes-anywhere/blob/master/phase1/vsphere/README.md
Once you have the k8s cluster up, you can test an nginx deployment and expose it externally with NodePort. No SLB is required, although for production you would need some form of load balancing, since unlike AWS or Azure there is no built-in support for automatic load balancer provisioning.
STEP 2 - k8s NodePort
The following guide was very helpful in understanding how to expose your k8s cluster to an external network. By default the k8s cluster is only available to the private network it resides in.
STEP 3 - k8s deployment
The code below will create a simple nginx deployment. You can grab the YAML file here. You can save the file locally or point directly to the URI.
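The create command itself isn't reproduced above; a typical invocation, assuming the manifest was saved locally as nginx-deployment.yaml (filename and URL here are illustrative), would be:

```shell
# Create the deployment from a local manifest file
kubectl create -f nginx-deployment.yaml

# Or point kubectl directly at the URI hosting the file
kubectl create -f https://example.com/nginx-deployment.yaml
```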
To view/verify the new deployment, run the command below. You should see output displaying the name, namespace, creation time, and other useful information.
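The verification command isn't shown; `kubectl describe` is the standard way to print the name, namespace, creation timestamp, and related details:

```shell
# Show detailed status for the deployment (name assumed from the manifest)
kubectl describe deployment nginx-deployment
```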
Once we verify our deployment was successful, we can start gathering the information we need to expose it externally. Remember that by default our new deployment is only reachable inside the k8s internal network. The command below will get the deployment name we need to pass to the expose command. We see two deployments; we want "nginx-deployment".
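The listing command isn't shown; presumably:

```shell
# List all deployments; the NAME column is what the expose command needs
kubectl get deployments
```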
STEP 4 - k8s service
Armed with the name we need to pass to the expose command, we are now ready to proceed. The --name parameter sets the name of your new "exposed" deployment, which is now a k8s service.
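The expose command isn't reproduced above; based on the service name and the 80:30575 port mapping discussed in this post, it was likely along these lines:

```shell
# Expose the deployment as a NodePort service named my-nginx
# (NodePort makes the service reachable on a high port of every node)
kubectl expose deployment nginx-deployment --name=my-nginx --type=NodePort --port=80
```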
Now we should have a new service called "my-nginx". Let's verify by running the command below and confirming the new service is displayed. You will see the cluster IP; this is an internal IP that you don't need to worry about at this time. Notice the EXTERNAL-IP is empty; this is normal. You do need to capture the port mapping. The first port (80) is the service port inside the cluster; the second (30575) is the NodePort mapped on every node. In this example the mapping is 80:30575, so port "30575" is the one we can use to access nginx from the external network.
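The verify command isn't shown; presumably:

```shell
# List services; note the CLUSTER-IP, empty EXTERNAL-IP, and PORT(S) columns
kubectl get services
```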
The next step is to figure out which pods are running our nginx-deployment. The command below will do this for us, outputting each pod's status, age, IP (internal), and node. At this time we are only interested in the node; the IP is a private address and not useful here.
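The pod-listing command isn't shown; `-o wide` is what adds the IP and NODE columns:

```shell
# Wide output includes each pod's internal IP and the node it runs on
kubectl get pods -o wide
```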
STEP 5 - k8s external IP
Let's find where nginx is running and note the name of each pod.
We take the names from the output of the above command and use the command below to find the node and external IP.
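The command itself isn't shown; one way to pull the node out of a pod's details (substitute a real pod name for the placeholder) is:

```shell
# Describe a pod by name and filter for the node it is scheduled on
kubectl describe pod <pod-name> | grep Node
```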
The command below will also pull the IP Address.
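That command isn't reproduced either; a wide node listing shows each node's addresses, including the external IP:

```shell
# INTERNAL-IP and EXTERNAL-IP columns appear in wide output
kubectl get nodes -o wide
```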
With the external IP of the VM running the node, we can then browse to http://10.4.4.234:30575/. You can repeat step 5 to get the name of the second node and find its external IP; the port will be the same on both nodes. You can now easily add a NetScaler, A10, F5, or other load balancer in front of the two nodes and, along with DNS, provide a friendly name for your cluster VIP.
Helpful commands for docker newbies like myself.
# List all docker images you have.
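The command producing the listing below isn't shown; it's the standard:

```shell
# List all local images with repository, tag, ID, age, and size
docker images
```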
REPOSITORY    TAG     IMAGE ID      CREATED        SIZE
nginx         latest  46102226f2fd  12 days ago    109 MB
centos-rl     latest  647c13af08c7  2 weeks ago    302 MB
ubuntu        latest  6a2f32de169d  3 weeks ago    117 MB
centos        latest  a8493f5f50ff  4 weeks ago    192 MB
d4w/nsenter   latest  9e4f13a0901e  7 months ago   83.8 kB
# Pull a new docker image:
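The pull command isn't shown; for example, using one of the images from the listing above:

```shell
# Download an image from Docker Hub (image name is just an example)
docker pull nginx
```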
# List docker containers both active and not actively running
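The command itself isn't shown; presumably:

```shell
# -a includes stopped containers, not just running ones
docker ps -a
```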
CONTAINER ID  IMAGE      COMMAND                 CREATED      STATUS                     PORTS                   NAMES
c8ef567a8bdc  centos-rl  "/bin/bash"             3 hours ago  Exited (0) 4 minutes ago                           centos-rl-tools
7f9f3afa7a3f  nginx      "nginx -g 'daemon ..."  4 hours ago  Up 4 hours                 0.0.0.0:32769->80/tcp   nginx-the-cross.net
# Rename docker container
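The rename command isn't shown; the syntax is `docker rename <old-name> <new-name>` (names below are illustrative):

```shell
# Give an existing container a new name
docker rename centos-rl-tools centos-tools
```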
# Remove a container
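Presumably (container name is illustrative):

```shell
# Remove a stopped container by name or ID
docker rm centos-rl-tools
```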
# Delete image
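Presumably (image name is illustrative):

```shell
# Remove a local image by repository name or image ID
docker rmi centos-rl
```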
# Run docker container on Windows 10 with host volume
# The "run" command should only be used the very first time you run the image. This creates a new container.
# Use "start" for subsequent uses. This starts the container that was previously created. The image remains unmodified. Your volume will still be mapped, along with any other parameters you used with "run" command.
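The run and start commands aren't shown; a sketch of the run/start pattern described above, with the image, names, and host path all illustrative, would be:

```shell
# First time only: create the container, mapping a host folder into it
docker run -it -v C:/Users/me/data:/data --name centos-rl-tools centos-rl /bin/bash

# Subsequent uses: start the previously created container; the volume
# mapping and other run-time parameters are preserved
docker start -ai centos-rl-tools
```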
# Run docker container in detached mode:
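Presumably something like the command that produced the nginx container in the listing above (container name is from that listing):

```shell
# -d runs in the background; -P publishes exposed ports to random host
# ports (e.g. 0.0.0.0:32769->80/tcp)
docker run -d -P --name nginx-the-cross.net nginx
```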
# To access the container:
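Presumably (container name is illustrative):

```shell
# Open an interactive shell inside a running container
docker exec -it nginx-the-cross.net /bin/bash
```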
# Export docker container to tar file.
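Presumably (container and file names are illustrative):

```shell
# Export a container's filesystem to a tar archive
docker export -o centos-rl-tools.tar centos-rl-tools
```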
# Import/Load docker image
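Presumably one of the following, depending on how the archive was created (file and image names are illustrative):

```shell
# Load an image archive created with "docker save"
docker load -i myimage.tar

# Or import a filesystem tarball created with "docker export" as a new image
docker import centos-rl-tools.tar centos-rl:imported
```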