Janakiram MSV


The Kubernetes Way: Scale and Reliability

In the first part of this article, we explored the concepts of pods and services in Kubernetes. Let’s now look at how replication controllers deliver scale and reliability. We will also discuss how to bring persistence to cloud-native applications deployed on Kubernetes.

Replication Controller: Scaling and Managing Microservices

If pods are the units, and deployments and services are the abstraction layers, then what tracks the health of the pods? This is where the replication controller (RC) comes into the picture.

After the pods are deployed, they need to be scaled and tracked. An RC definition carries the baseline configuration: the number of pods that should be available at any given point. Kubernetes maintains this desired state at all times by tracking the pod count, killing or launching pods as needed to meet the baseline.
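As a minimal sketch, an RC manifest that keeps three pods running might look like the following (the names, labels, and container image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc              # illustrative name
spec:
  replicas: 3               # baseline: keep three pods running at all times
  selector:
    app: web                # pods matching this label count toward the baseline
  template:                 # the pod definition the RC inherits
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.9    # illustrative image
        ports:
        - containerPort: 80
```

If a labeled pod dies, the RC notices the count has dropped below `replicas` and launches a replacement; if extra matching pods appear, it kills the surplus.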

The RC can track the health of pods. If a pod becomes inaccessible, it gets killed, and a new pod is launched. Since an RC essentially inherits the definition of a pod, the YAML or JSON manifest may contain the attributes for the restart policy, container probes, and a health check endpoint.
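For example, the pod template inside an RC can carry a restart policy and an HTTP container probe against a health check endpoint (the `/healthz` path and image are illustrative):

```yaml
spec:
  restartPolicy: Always         # relaunch containers that exit
  containers:
  - name: web
    image: nginx:1.9            # illustrative image
    livenessProbe:              # container probe: declares the pod unhealthy on failure
      httpGet:
        path: /healthz          # illustrative health check endpoint
        port: 80
      initialDelaySeconds: 15   # give the container time to start
      timeoutSeconds: 1
```

When the probe fails, the kubelet kills the container and the restart policy brings up a fresh one, keeping the RC's baseline intact.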

Kubernetes supports the auto scaling of pods based on CPU utilization, similar to that of EC2 Auto Scaling or GCE Autoscaler. At runtime, the RC can be manipulated to scale the pods automatically, based on a specific threshold of CPU utilization. The maximum and minimum number of pods may also be specified in the same command.
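A sketch of manipulating an RC at runtime with kubectl, assuming a running cluster and an RC named `web-rc` (the name and thresholds are illustrative):

```shell
# Autoscale between 2 and 10 pods, targeting 80% CPU utilization
kubectl autoscale rc web-rc --min=2 --max=10 --cpu-percent=80

# The RC can also be scaled manually at runtime
kubectl scale rc web-rc --replicas=5
```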

Flat Networking: The Secret Sauce

Networking is one of the most complex challenges of containerization. With plain Docker, the only way to expose a container to the outside world is through port forwarding from the host, which becomes unwieldy as the containers scale. Instead of leaving network configuration and integration to administrators, Kubernetes ships with an integrated networking model that works out of the box.

Each node, service, pod and container gets an IP address. A node’s IP address is assigned by the physical router; combined with an assigned port, it becomes the endpoint for external-facing services. Each Kubernetes service also gets a virtual IP address, although it is not routable from outside the cluster. All pod-to-pod communication happens without a network address translation (NAT) layer, making the network flat and transparent.

This model brings the following advantages:

  • All containers can talk to each other without a NAT.
  • All nodes can talk to all pods and containers in the cluster without a NAT.
  • A container sees itself with the same IP address that other containers use to reach it.
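One way to observe this flat model, assuming a running cluster, is to list the IP addresses Kubernetes has assigned and reach a pod directly (the pod name and IP below are illustrative):

```shell
# Show the IP address assigned to each pod
kubectl get pods -o wide

# Any container can reach another pod directly on its IP, with no NAT in between
kubectl exec web-pod -- curl -s http://10.244.1.7/
```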

The best thing about scaling pods through an RC (or its successor, the ReplicaSet) is that the port mapping is handled by Kubernetes. All pods that belong to a service are exposed through the same port on each node. Even if no pod is scheduled on a specific node, the request automatically gets forwarded to a node that runs one.
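As a sketch, a Service of type NodePort exposes the pods on the same port of every node in the cluster (the name, label, and port number are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc         # illustrative name
spec:
  type: NodePort
  selector:
    app: web            # traffic is load-balanced across all pods with this label
  ports:
  - port: 80            # port on the service's cluster IP
    targetPort: 80      # port the container listens on
    nodePort: 30080     # same port opened on every node, pod or no pod
```

A request to `<any-node-ip>:30080` reaches a healthy pod regardless of which node it lands on.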

Read the entire article at The New Stack.

Janakiram MSV is an analyst, advisor, and architect. Follow him on Twitter, Facebook and LinkedIn.


More Stories By Janakiram MSV

Janakiram MSV heads Cloud Infrastructure Services at Aditi Technologies. He was the founder and CTO of Get Cloud Ready Consulting, a niche cloud migration and cloud operations firm that was recently acquired by Aditi Technologies. In his current role, he leads a highly talented engineering team that focuses on migrating and managing applications deployed on Amazon Web Services and Microsoft Windows Azure Infrastructure Services.
Janakiram is an industry analyst with a deep understanding of cloud services. Through his speaking, writing and analysis, he helps businesses take advantage of emerging technologies. He leverages his experience of engaging with the industry to develop informative and practical research, analysis and authoritative content that informs, influences and guides decision makers. He analyzes market trends, new products and features, announcements, industry happenings and the impact of executive transitions.
Janakiram is one of the first few Microsoft Certified Professionals on Windows Azure in India. Demystifying The Cloud, an eBook authored by Janakiram, was downloaded more than 100,000 times within the first few months. He is the Chief Editor of www.CloudStory.in, a popular portal that covers the latest trends in cloud computing. Janakiram is an analyst with the GigaOM Pro analyst network, where he analyzes the cloud services landscape. He is a guest faculty member at the International Institute of Information Technology, Hyderabad (IIIT-H), where he teaches Big Data and Cloud Computing to students enrolled in the Masters course. As a passionate speaker, he has chaired the cloud computing track at premier events in India.
He has been the keynote speaker at many premier conferences, and his seminars are attended by thousands of architects, developers and IT professionals. His sessions are rated among the best at every conference he participates in.
Janakiram has worked at world-class product companies including Microsoft Corporation, Amazon Web Services and Alcatel-Lucent. Joining as the first employee of Amazon Web Services in India, he was the AWS Technology Evangelist. Prior to that, Janakiram spent 10 years at Microsoft Corporation, where he was involved in selling, marketing and evangelizing the Microsoft Application Platform and Tools.