Running Stateful Applications in Kubernetes: Storage Provisioning and Allocation

In the previous part of this series, we explored how the concept of volumes brings persistence to containers. This article builds on that understanding to introduce persistent volumes and persistent volume claims, which form the backbone of the Kubernetes storage infrastructure.
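To give a sense of what a claim looks like, here is a minimal sketch of a persistent volume claim; the name and storage size are illustrative placeholders rather than values from the article.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim            # hypothetical name, for illustration only
    spec:
      accessModes:
        - ReadWriteOnce           # mounted read-write by a single node
      resources:
        requests:
          storage: 10Gi           # illustrative capacity request

Kubernetes binds such a claim to a matching persistent volume from the pool provisioned by the administrators.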

To appreciate how Kubernetes manages storage pools that provide persistence to applications, we need to understand the architecture and the workflow related to application deployment.

Kubernetes is used by several personas: developers, system administrators, operations, and DevOps teams. Each of these personas interacts with the infrastructure in a distinct way. The system administration team is responsible for configuring the physical infrastructure that runs the Kubernetes cluster. The operations team maintains the cluster by patching, upgrading, and scaling it. DevOps teams use Kubernetes to configure CI/CD pipelines, monitoring, logging, rolling upgrades, and canary deployments. Developers consume the API and the resources exposed by the Kubernetes infrastructure; they are never expected to have visibility into the underlying physical infrastructure that runs the master and nodes.

Developers “ask” for the resources they need to run their applications through a declarative mechanism, typically described in YAML or JSON. The Kubernetes master is responsible for ensuring that the appropriate resources are allocated to match those requests. Before it can do that, administrators need to provision the required compute, storage, and networking capacity.

For example, a developer may ask Kubernetes to schedule a pod that is backed by SSD storage and allocated a certain number of CPU cores and amount of memory. Assuming the infrastructure has the capacity, the Kubernetes master honors the request by choosing the right node(s) to run the pod.
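As a rough sketch, such a request could be expressed in a pod specification along these lines; the pod name, container image, disktype label, and resource figures are assumptions for illustration, and the node selector only works if administrators have labeled SSD-backed nodes accordingly.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod               # hypothetical name, for illustration only
    spec:
      nodeSelector:
        disktype: ssd             # assumes SSD-backed nodes carry this label
      containers:
        - name: web
          image: nginx            # illustrative container image
          resources:
            requests:
              cpu: "2"            # two CPU cores
              memory: 4Gi         # four gibibytes of memory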

To understand this concept, let's look at the relationship between a pod and a node. Nodes are pre-provisioned servers configured by the administration and operations teams. Developers create pods that utilize the compute resources exposed by those nodes.
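For instance, every node advertises its total capacity and the portion that is allocatable to pods, and the scheduler compares these figures against the requests declared in pod specifications. An abbreviated, illustrative excerpt of a node object might look like this (the name and numbers are made up):

    apiVersion: v1
    kind: Node
    metadata:
      name: node-1                # hypothetical node name
    status:
      capacity:                   # total resources on the server
        cpu: "8"
        memory: 32Gi
        pods: "110"
      allocatable:                # capacity minus system reservations, available for scheduling pods
        cpu: 7800m
        memory: 30Gi
        pods: "110"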

This architecture enables a clean separation of concerns among developers, administrators, and operations teams.

Read the entire article at The New Stack.

Janakiram MSV is an analyst, advisor, and architect. Follow him on Twitter, Facebook, and LinkedIn.


More Stories By Janakiram MSV

Janakiram MSV heads Cloud Infrastructure Services at Aditi Technologies. He was the founder and CTO of Get Cloud Ready Consulting, a niche Cloud Migration and Cloud Operations firm that was recently acquired by Aditi Technologies. In his current role, he leads a highly talented engineering team that focuses on migrating and managing applications deployed on Amazon Web Services and Microsoft Windows Azure Infrastructure Services.
Janakiram is an industry analyst with a deep understanding of Cloud services. Through his speaking, writing, and analysis, he helps businesses take advantage of emerging technologies. He draws on his experience of engaging with the industry to develop informative, practical research and authoritative content that informs, influences, and guides decision makers. He analyzes market trends, new products and features, announcements, industry happenings, and the impact of executive transitions.
Janakiram is one of the first few Microsoft Certified Professionals on Windows Azure in India. Demystifying The Cloud, an eBook authored by Janakiram, was downloaded more than 100,000 times within the first few months. He is the Chief Editor of www.CloudStory.in, a popular portal that covers the latest trends in Cloud Computing. Janakiram is an analyst with the GigaOM Pro analyst network, where he analyzes the Cloud Services landscape. He is guest faculty at the International Institute of Information Technology, Hyderabad (IIIT-H), where he teaches Big Data and Cloud Computing to students enrolled in the Masters course. As a passionate speaker, he has chaired the Cloud Computing track at premier events in India.
He has been a keynote speaker at many premier conferences, and his seminars are attended by thousands of architects, developers, and IT professionals. His sessions are rated among the best at every conference in which he participates.
Janakiram has worked at world-class product companies including Microsoft Corporation, Amazon Web Services, and Alcatel-Lucent. Joining as the first employee of Amazon Web Services in India, he served as the AWS Technology Evangelist. Prior to that, Janakiram spent 10 years at Microsoft Corporation, where he was involved in selling, marketing, and evangelizing the Microsoft Application Platform and Tools.