Cloud computing adoption in a large enterprise requires intensive policy decisions from organizational technology leadership. Many complexities must be addressed, such as privacy, intellectual property, interoperability, systemic risk, jurisdictional complexity, data governance, reliability, and loss of direct IT control.

Implementing a cloud adoption task force or steering committee

For a large enterprise, establishing a cloud adoption task force or steering committee is recommended to involve all stakeholders, including leadership, department heads, the CTO/CIO office, and CISO and compliance teams. A cross-functional, multifaceted team helps lead the drive to the cloud across all departments of the organization. This team is instrumental in identifying the organizational goals and the key milestones of the organizational roadmap for cloud adoption.

Defining the goals of the cloud migration strategy

The roadmap for the cloud migration strategy of a large organization depends on multiple factors, such as:

  • Assessing the existing ecosystem.
    • Existing clusters of non-cloud-based applications.
    • Existing clusters of scattered cloud-based applications.
    • Amount of technical debt in the current ecosystem.
  • TCO calculations for the project.
  • Leadership & organizational goals of such migration.
  • Interoperability, integration & consolidation of various internal and external systems.
  • Local regulatory, statutory, and compliance requirements.
  • Organizational technological roadmap.
  • Assessing delivery capabilities.
    • Availability of vendor support for the organization.
    • Technical know-how & knowledge transfer capabilities of the IT teams.
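
The TCO factor above can be made concrete with a simple multi-year model. The sketch below is illustrative only: the cost figures and categories are hypothetical placeholders, not vendor pricing.

```python
# Hypothetical multi-year TCO comparison between on-premise and cloud.
# All figures are illustrative placeholders, not real pricing.

def total_cost(upfront, annual_costs):
    """Sum an upfront investment and a list of per-year running costs."""
    return upfront + sum(annual_costs)

YEARS = 5

# On-premise: large upfront hardware spend, plus recurring running costs.
on_prem = total_cost(
    upfront=500_000,
    annual_costs=[180_000] * YEARS,  # power, cooling, licenses, ops staff
)

# Cloud: smaller upfront migration cost, higher recurring service spend.
cloud = total_cost(
    upfront=120_000,                 # migration and training effort
    annual_costs=[220_000] * YEARS,  # compute, storage, support plans
)

print(f"On-premise 5-year TCO: ${on_prem:,}")
print(f"Cloud 5-year TCO:      ${cloud:,}")
print(f"Difference:            ${on_prem - cloud:,}")
```

The point of such a model is less the exact numbers than forcing each cost category (staffing, refresh cycles, egress, support plans) to be enumerated explicitly during the assessment.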

Establishing controls in cloud governance

Decision-making and process controls, including fund allocation, scheduling, and prioritization of the cloud migration project, would need to be handled by this controlling body so that all stakeholders are aligned on how decisions are made.

This body would also be in charge of revising organizational IT policies and procedures to cover cloud ecosystems, and of establishing new cloud computing standards that address data security, privacy, and portability.

Opportunity Evaluation

Identify the stakeholders and their end goals. Identify opportunities to integrate, optimize, reduce the cost of, and transform existing systems scattered across departments. Also evaluate non-technical opportunities, such as reducing the organizational carbon footprint and creating new career opportunities for employees.

Portfolio Discovery and Planning

Map the existing ecosystem, which is likely a mix of scattered non-cloud-based and multi-cloud environments. Identify internal and external dependencies based on institutional requirements, so the result is a holistic platform of an interconnected and integrated ecosystem. Identifying opportunities for consolidation and de-duplication of systems, data, and processes is an important milestone of the discovery.

Identifying the priority and complexity of each migration, and splitting the transition into a series of micro-migrations, is part of planning and scheduling for better manageability.
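
One simple way to turn priority and complexity into a micro-migration schedule is a weighted score per application. The sketch below is a hypothetical example: the application names, attributes, and weights are made up, and a real assessment would use the data gathered during discovery.

```python
# Hypothetical scoring sketch for ordering micro-migrations:
# higher business value and lower complexity/coupling migrate first.

apps = [
    {"name": "intranet-portal", "value": 3, "complexity": 2, "dependencies": 0},
    {"name": "billing-engine",  "value": 9, "complexity": 8, "dependencies": 4},
    {"name": "hr-reporting",    "value": 6, "complexity": 3, "dependencies": 1},
]

def migration_score(app):
    """Weighted score: reward business value, penalize complexity and coupling."""
    return app["value"] * 2 - app["complexity"] - app["dependencies"]

schedule = sorted(apps, key=migration_score, reverse=True)
for wave, app in enumerate(schedule, start=1):
    print(f"Wave {wave}: {app['name']} (score {migration_score(app)})")
```

The weights are a policy decision for the steering committee; what matters is that each micro-migration is ranked by an agreed, repeatable formula rather than ad hoc judgment.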

This metadata could be collected by hybrid methods, such as migration tools provided by CSPs, third-party migration tools, and manual intervention.

Choosing the cloud platform

Based on existing and future requirements, the platform could be a public cloud, hybrid cloud, private cloud, or a multi-cloud approach. For storing and processing sensitive information, a local private cloud would be the best choice. A hybrid cloud would be the most practical approach for many large enterprises.

Some organizations prefer to use multiple CSPs based on service offerings and workloads, since some CSPs have clear advantages over others in particular product areas.

Another decision is whether to adopt IaaS, PaaS, or SaaS offerings, based on considerations such as the requirement to remain cloud-agnostic. For many large organizations, it will be a mix of all three.

The major public CSP contenders are Microsoft Azure, Amazon Web Services, and Google Cloud Platform, with Oracle Cloud Infrastructure, Alibaba Cloud, and IBM Cloud holding smaller market shares.

Magic Quadrant for Cloud Infrastructure as a Service, Worldwide

https://www.gartner.com/doc/reprints?id=1-1CMAPXNO&ct=190709&st=sb 

While these public CSPs often offer similar functionality and implementations, TCO differences, regional data center availability, compliance with local regulatory requirements, data residency, and local presence play an important role in evaluating a CSP.

The comparison published by https://www.managedsentinel.com/ is a rough feature comparison as of mid-2019. It is not exhaustive, nor should it be taken as the ultimate source of truth, but it provides a decent picture.

Another deciding factor should be the SLAs provided by each CSP for the services you're planning to implement. While all CSPs put enormous effort into maintaining availability and uptime, the guarantees are not the same across CSPs or across the services they provide.

https://status.aws.amazon.com/
https://status.azure.com/en-us/status
https://status.cloud.google.com/

The pages above provide the service status of the three major CSPs, while https://cloudharmony.com/status provides a decent comparison of some common services across them.

Based on the selected CSP and the migration path, additional third-party tools might need to be deployed to evaluate, plan, stage, and automate the migrations. Automated and manual verification steps should follow each micro-migration.

Data residency options of public cloud CSPs

To comply with various regional data residency laws, you might need to pick data centers within your region. While many of the big public cloud CSPs have regional data centers scattered throughout the continents, this needs to be a factor when selecting a CSP. Additionally, depending on the location, the availability of some service offerings may be limited.

https://azure.microsoft.com/en-us/global-infrastructure/locations/
https://aws.amazon.com/about-aws/global-infrastructure/
https://cloud.google.com/about/locations

Migration Strategies

As mentioned before, to manage the size of the migration it is best to split it into a series of micro-migrations and identify a strategy for each from the six Rs below. The strategy selected for each micro-migration depends on the discovery and analysis done in prior steps.

Rehosting (Lift & Shift) 

This is best suited for systems with little technical debt, where the systems can be lifted and shifted directly to the cloud with minimal changes. It is typically suited to IaaS cloud implementations. As a public AWS case study indicates, GE Oil & Gas achieved over 30% savings just by rehosting in the cloud.

Replatforming

A version of lift and shift that includes some modifications without changing the core architecture of the systems. An example of replatforming done for one of our clients was migrating all on-premise SQL Server and Postgres instances to AWS RDS-based PaaS while conducting a lift and shift.
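
In a replatforming like the one described above, the application code typically stays the same and only its connection endpoints change from on-premise hosts to managed database instances. The sketch below illustrates that idea; all hostnames and the DSN are hypothetical.

```python
# Hypothetical sketch of a replatforming step: rewrite database connection
# strings from on-premise hosts to managed RDS endpoints. All hostnames
# below are made up for illustration.

onprem_to_rds = {
    "sqlserver01.corp.local": "erp-db.abc123.us-east-1.rds.amazonaws.com",
    "pg01.corp.local":        "crm-db.abc123.us-east-1.rds.amazonaws.com",
}

def replatform_dsn(dsn: str) -> str:
    """Rewrite a connection string's host to its managed-service equivalent."""
    for old_host, new_host in onprem_to_rds.items():
        if old_host in dsn:
            return dsn.replace(old_host, new_host)
    return dsn  # host not in scope for this micro-migration

dsn = "postgresql://app:secret@pg01.corp.local:5432/crm"
print(replatform_dsn(dsn))
```

Keeping such a mapping in one place also doubles as a checklist of which databases have been cut over and which remain.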

Repurchasing

Involves moving some legacy platforms to SaaS platforms and migrating the data, rather than re-architecting the existing systems. This may result in using a different product for the same purpose. It is usually effective for implementing aggregated HR, CRM, ERP, and financial platforms for multi-tenant entities.

Refactoring/Re-architecting

This is the most expensive and complex strategy: legacy systems are refactored and rewritten to fit the cloud. The end target is to convert existing systems into cloud-native applications that are optimal, scalable, loosely coupled, and well architected.

Based on ICT policies, whether to remain cloud-agnostic or to tie the architecture to vendor-specific PaaS is an important decision. Most of the time, re-architecting resolves existing technical debt and transforms legacy systems into modern, containerized, microservice-based API services.

Retiring

With retiring, the client reaps the benefits of the prior migrations. Obsolete systems are run in parallel with the new cloud-native systems to validate inputs, outputs, and functionality. Once the new system is signed off, the older systems can be retired in phases. With proper consolidation and de-duplication, around 10%–15% of system components should generally be retirable.
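
The parallel-run validation mentioned above can be sketched as feeding the same inputs to both systems and diffing the outputs before sign-off. In this hypothetical example the two "systems" are stand-in functions; in practice they would be calls to the legacy and new services.

```python
# Sketch of a parallel-run validation: the same inputs go to the legacy
# system and the new cloud-native system, and outputs are compared before
# sign-off. Both "systems" here are stand-in functions for illustration.

def legacy_invoice_total(items):
    return sum(qty * price for qty, price in items)

def cloud_invoice_total(items):
    # New implementation; must match legacy behaviour exactly.
    total = 0.0
    for qty, price in items:
        total += qty * price
    return total

test_inputs = [
    [(2, 9.99), (1, 25.00)],
    [(10, 0.50)],
    [],
]

mismatches = [
    inputs for inputs in test_inputs
    if legacy_invoice_total(inputs) != cloud_invoice_total(inputs)
]

print("Mismatches:", mismatches)
print("Safe to retire legacy system:", not mismatches)
```

An empty mismatch list over a representative input set is the kind of evidence that supports the phased sign-off described above.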

Retaining

Retained components include those holding sensitive information that is not advisable to store in a public or hybrid cloud. A private, localized data cloud architected to leverage higher levels of virtualization is advisable. This private cloud would have secure links to the public cloud to streamline information flow between the sub-systems.

Rinse & repeat cycles

A proper knowledge transfer and handoff is important for the local teams so that they can continue the rinse-and-repeat cycles of their cloud journey. Properly inventorying, monitoring, and managing cloud assets is critical for cost optimization, data deduplication, master data management, cost attribution, and various other factors.

It is recommended that a 24/7 cloud operations center be established to ensure smooth coverage and availability of the amalgamated cloud infrastructure. The suite of tools and technologies used for this phase would depend on the hybrid CSPs chosen for the implementation & various monitoring, alerting, graphing/dashboard, and escalation solutions that go along with that.
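
One concrete inventory hygiene check for this operations phase is verifying that every cloud asset carries the tags needed for cost attribution. The sketch below is hypothetical: the asset records and required tag keys are made up, and a real check would pull assets from the CSP's inventory APIs.

```python
# Sketch of an inventory hygiene check: flag cloud assets missing the
# tags required for cost attribution. Asset records and tag keys below
# are hypothetical examples.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

assets = [
    {"id": "vm-001", "tags": {"owner": "erp-team", "cost-center": "CC-17",
                              "environment": "prod"}},
    {"id": "bucket-logs", "tags": {"owner": "platform"}},
]

def untagged_assets(assets):
    """Return (asset id, sorted missing tag keys) for non-compliant assets."""
    report = []
    for asset in assets:
        missing = REQUIRED_TAGS - asset["tags"].keys()
        if missing:
            report.append((asset["id"], sorted(missing)))
    return report

for asset_id, missing in untagged_assets(assets):
    print(f"{asset_id}: missing tags {missing}")
```

Running such a check on a schedule, and feeding violations into the operations center's alerting, keeps cost attribution accurate as the rinse-and-repeat cycles continue.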

CMS together with Bluecorp provides Cloud Management Solutions for enterprises including cloud migrations and managed services teams.

Author : Admin
Published Date June 11, 2020