Cloud Native Maturity Model - Level 2
Dante's Cloud Native Maturity Model is a framework that enables organizations to gauge the current maturity of their cloud applications and to define an architectural roadmap so that those applications ultimately take full advantage of cloud computing. In the first two installments of this series we discussed the classic mistake of lifting and shifting an application as-is into the cloud, and why software-defined infrastructure is a prerequisite to further advancement in the cloud.
- Level 0: Lift and Shift
- Level 1: Embrace Software Defined Infrastructure
- Level 2: Liberate Cloud Native Architecture
- Level 3: Transcend to Continuous Deployment
Level 2 - Liberate Cloud Native Architecture
At Level 2, the focus is on the architectural changes that are necessary to operate at cloud scale. In general, these changes include decomposing your applications into their constituent components and refactoring them to leverage the native capabilities of the cloud. The smaller, independent components intrinsically provide better scalability, since they do not compete with other components for resources and they require fewer resources at startup when they need to scale out horizontally in a hurry. It can also be beneficial to take advantage of the value-added capabilities offered in the cloud instead of always rolling a custom solution. These capabilities generally provide lean alternatives with higher scalability at lower cost.
More specifically, when we talk about cloud native architecture, we divide it into multiple categories: microservices, polyglot persistence, serverless client applications, and the event hub.
Microservices are implemented as smaller, decoupled, independent, and lightweight deployment units that run in their own isolated containers. This architectural style helps optimize scalability and flexibility and is ideally suited for the cloud. Volumes have already been written about microservices. In the context of Dante's Cloud Native Maturity Model, the key points about microservices are that they are needed to optimize scalability; that they require software-defined infrastructure so they can be effectively managed; and that they don't always need to be written from scratch. An entire industry segment known as Backend-as-a-Service (BaaS) has emerged with ready-made microservices that can be leveraged quickly and effectively at reasonable cost. In some cases, the only code that needs to be implemented is the automation logic required to configure and deploy a native capability and a client SDK. BaaS can be a lean, cost-effective alternative that is leveraged in the early stages of a project and potentially replaced in later stages, after the need for the service has been vetted and fully understood. Another interesting alternative is AWS Lambda, which runs code on your behalf without the need to provision resources by the hour.
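To make the Lambda point concrete: a Lambda function is just a handler that the platform invokes on demand, so there is no server to provision or pay for between requests. The sketch below is a hypothetical example (the order-validation logic and event shape are our own illustration, not from any specific application), but it follows the standard Python handler signature and can be exercised locally like any function.

```python
# Minimal sketch of an AWS Lambda handler (Python runtime).
# The event shape and the order-validation logic are hypothetical
# illustrations; the handler signature (event, context) is standard.

def lambda_handler(event, context):
    """Validate and enrich an incoming order event."""
    order = event.get("order", {})
    if "id" not in order:
        return {"statusCode": 400, "body": "missing order id"}
    # Business logic runs only while a request is in flight;
    # billing is per invocation rather than per provisioned hour.
    order["status"] = "validated"
    return {"statusCode": 200, "body": order}

# Locally, the handler can be tested like any Python function:
result = lambda_handler({"order": {"id": 42}}, None)
```

Because the handler is a plain function, the same code can be unit-tested outside the cloud and deployed unchanged, which keeps the feedback loop tight while the service is still being vetted.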
Of all the aspects of cloud architecture, persistence has seen some of the most interesting advancements. Traditional databases were unable to scale up to meet the demands of global web applications. This forced the innovations that produced new databases and architectures that embrace eventual consistency and deliver horizontal scalability. An important characteristic of microservices is that each encapsulates its own dedicated datastore. This means each service can choose the persistence model best suited to its needs. These two forces, scalability and specialization, have produced a myriad of NoSQL database categories such as key-value, document, column-family, column-oriented, and graph. Gone are the days where one size fits all. Instead, each database is optimized for the read and write characteristics of its specific use cases. This specialization has led to the emergence of polyglot persistence: the recognition that modern software architectures, with their various kinds of data, require many different types of databases to be effective. A typical cloud native application may include a client-side data store that can work in offline mode and synchronize with the backend and shared devices; a highly available, append-only data store that can process over a million writes per second to keep pace with web events; a data lake that provides a historical record of all raw data for future analytics requirements; a unified search engine for targeted recommendations; multiple petabyte-scale data warehouses provisioned on demand for ad hoc analysis; and various operational stores, each optimized to retrieve a specific kind of data. All of this comes with automated backups, redundancy for high availability, and geo-replication to minimize latency and provide for disaster recovery. The cloud can help control the cost and the learning curve of polyglot persistence through various Database-as-a-Service (DBaaS) offerings.
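The core idea, each service owning a store shaped to its access pattern, can be sketched in a few lines. The service names below are hypothetical, and the store classes are in-memory stand-ins for what would be managed DBaaS offerings (say, a key-value store for session lookups and an append-only store for clickstream events).

```python
# Toy sketch of polyglot persistence: each service encapsulates its own
# datastore, chosen for its access pattern. The services and stores are
# hypothetical; in production each would be backed by a managed database
# rather than a Python container.

class SessionService:
    """Key-value access pattern: fetch one record by key, low latency."""
    def __init__(self):
        self._kv = {}                     # stand-in for a key-value store

    def put(self, session_id, data):
        self._kv[session_id] = data

    def get(self, session_id):
        return self._kv.get(session_id)

class ClickstreamService:
    """Append-only pattern: very high write throughput, sequential reads."""
    def __init__(self):
        self._log = []                    # stand-in for an append-only store

    def record(self, event):
        self._log.append(event)

    def events_since(self, offset):
        return self._log[offset:]

# Each service talks only to its own store; there is no shared database.
sessions = SessionService()
sessions.put("s1", {"user": "ada"})
clicks = ClickstreamService()
clicks.record({"page": "/home", "session": "s1"})
```

The design point is the encapsulation: because no other service reaches into a store directly, a team can swap a key-value store for a document store later without coordinating a schema change across the whole application.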
The distributed and decoupled nature of microservices and polyglot persistence requires globally distributed, event-driven processing to integrate, synchronize, and orchestrate all the components. All the events in the system, such as transactions, click streams, logs, social media feeds, and more, are processed as a real-time stream and fed through the big data refinery and analytics engine to produce the actionable outcomes that facilitate adaptive learning. AWS capabilities such as Kinesis, Lambda, S3, EMR, and Spot Instances provide the backbone for this high-volume processing, with resources provisioned on demand to meet dynamic workloads.
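The event hub pattern itself is simple: producers publish events to a stream, and any number of independent consumers react to them, which is what keeps the microservices and their datastores decoupled. The sketch below is an in-memory stand-in for a managed stream such as Kinesis; the `EventHub` class and the fan-out example are our own illustration.

```python
# Minimal sketch of the event hub pattern: producers publish events to a
# topic and independent consumers each process them. EventHub here is a
# hypothetical in-memory stand-in for a managed stream (e.g. Kinesis).

from collections import defaultdict

class EventHub:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Fan the event out to every consumer of the topic."""
        for handler in self._subscribers[topic]:
            handler(event)

# Example: one order event fans out to two independent consumers,
# a search index and a data lake, without the producer knowing either.
hub = EventHub()
search_index, data_lake = [], []
hub.subscribe("orders", search_index.append)
hub.subscribe("orders", data_lake.append)
hub.publish("orders", {"id": 1, "total": 9.99})
```

Note that the producer only knows the topic name; adding a third consumer (say, a fraud detector) requires no change to the publishing service, which is exactly the decoupling the paragraph above describes.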
Continue reading this series of postings to learn more about how to advance through the levels of the maturity model. The final installment will discuss how continuous deployment in the cloud facilitates agility and adaptive learning.
Engage Dante to help you liberate cloud native architecture and navigate a successful transition to the cloud.