
What is DevOps? A Tutorial and Training. DevOps Explained



In this session, we will review DevOps, its key practices and tools, and the steps one can take to transition to a DevOps environment. Before I begin, please subscribe to my YouTube channel so you are notified of more learning videos on the latest topics and trends in digital and cloud computing.

Definition of DevOps

So, first let’s get to the definition of DevOps. What is DevOps? Let’s review the concept from a number of dimensions.

  • DevOps refers to a collection of practices and a general philosophy in the area of software development, the overall goal of which is to constantly deliver and deploy high-quality software at high velocity. So, it's not one practice or methodology but usually a collection of various practices and methodologies.
  • DevOps refers to the concept where software developers and Ops staff collaborate throughout the software development and deployment lifecycle to ensure the delivery of quality code to production. It eliminates the silo mentality and the finger pointing that have existed in IT environments for many years. Breaking down the silos between software development and operations has allowed software developers to understand the complications inherent in running the software they develop in an operations environment, making them sensitive to the stability and reliability issues that matter in production. Likewise, operations staff, engineers, and system administrators come to understand the complexities of the software build process, making them less critical of "those software developers." A DevOps culture thus instills teamwork to solve issues and empowers teams to make critical decisions, all the while keeping them focused on the ultimate business outcome: deploying and running quality software that delights customers. As DevOps brings development and operations teams together, its scope usually encompasses both software development and infrastructure management processes.
  • In an ideal environment, DevOps practices integrate with Agile software development methodologies. Here, we need to understand the relationship between Agile software development and DevOps. To put it simply, Agile software development enables the rapid production of quality software. However, even when that's done, there is nothing to ensure that the code is integrated, tested, and deployed rapidly. That's where DevOps practices take over. Agile software development approaches together with DevOps thus enable the rapid design, development, testing, and deployment of quality software products to production, which in turn has a direct bearing on customer experience and satisfaction.
  • DevOps is also synonymous with the many tools used in software delivery and deployment, because a key tenet of DevOps is the automation of the various pipelines that integrate software development, testing, production deployment, and monitoring. We will cover some of these tools and their functionality a little later in this presentation.

 

Business Benefits of DevOps

Next, let's review the overall business benefits of instituting DevOps principles and practices. Most organizations that have successfully implemented DevOps report the following:

  • An accelerated delivery and deployment process – As DevOps brings down organizational silos by getting software development and operations teams to collaborate closely throughout the software development and deployment lifecycle, organizations observe a considerable increase in the velocity of delivering quality software to production. This helps the organization serve its customers faster and test its innovations at a rapid pace. It is a departure from traditional practices, where production staff would become a bottleneck in deploying developed software to production in order to ensure stability of operations. Because DevOps teams work as one team throughout the overall process, any issues related to software development or operations are addressed during the development lifecycle, facilitating fast delivery to production.
  • Higher frequencies of software releases – DevOps practices such as CI and CD, along with automation of the pipelines from software development to deployment, enable organizations to release software to production constantly, or at a minimum keep it ready for deployment. Depending on the size of the organization and the scale of its software development, many have reported that their releases have gone up by 50 to 100 times. For example, at one of the latest DevOps conferences, Netflix reported doing thousands of releases per day. That's an astounding increase over earlier practices, which allowed far fewer releases per week.
  • Automation of repetitive tasks – As discussed in the earlier point, such a high increase in release frequency isn't possible unless the various facets of the overall software delivery and deployment pipeline are fully automated. The steps in this pipeline can include code development, integration, testing, security, validation, deployment, monitoring, etc., a number of which can be automated in an organization that institutes DevOps.
  • Stability, security and reliability of deployed software – Automation can ensure that various policies and best practices are reflected in code or scripts, minimizing human errors, increasing compliance with an organization's policies, and thus ensuring the stability, security, and reliability of the production and operations environment.
  • Better predictability of software release cycles – Due to the automation of the various facets of the overall lifecycle, the organization gets better predictability of when certain business functionality can be deployed into production, and can plan accordingly.
  • Fewer errors in delivered and deployed code – Automation and the constant practices of testing and integration ensure that developed code has fewer errors when run in production.

 

Popular DevOps Practices

Now we will cover some of the popular DevOps practices. Although there are a number of DevOps practices that help organizations achieve the overall goal of rapidly delivering quality software to production and enabling its automated monitoring, in the following we will cover the four popular practices that are at the foundation of any organization using DevOps.

Continuous integration refers to the practice of developers continuously integrating their newly developed or modified code with the code baseline checked in by others. The keyword here is 'continuous': any defects that may arise during integration are discovered as early as possible rather than late in the process. So, as multiple developers produce and update code, they are constantly integrating with the main baseline to prevent discovering larger integration problems later. This practice removes potential hurdles from the process and speeds the delivery of software to the production environment. A minimal sketch of the idea follows.
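To make the idea concrete, here is a minimal sketch (not from the original presentation) of the kind of script a CI server runs on every check-in: fetch the integrated baseline and run the automated test suite, failing fast if anything breaks. The repository URL is a hypothetical placeholder, and git and pytest are assumed to be installed.

```python
import subprocess
import sys

REPO_URL = "https://example.com/team/app.git"  # hypothetical repository
WORKDIR = "ci-workspace"

def run(cmd, cwd=None):
    """Run one CI step; fail the whole build on a non-zero exit code."""
    result = subprocess.run(cmd, cwd=cwd)
    if result.returncode != 0:
        sys.exit(f"CI step failed: {' '.join(cmd)}")

# 1. Fetch the latest baseline, including the newly integrated commits.
run(["git", "clone", "--depth", "1", REPO_URL, WORKDIR])
# 2. Run the automated test suite against the integrated code.
run(["python", "-m", "pytest"], cwd=WORKDIR)
print("Integration build passed; the baseline is healthy.")
```

In a real pipeline a tool such as Jenkins would trigger this on every commit, but the essential loop, integrate then verify, is the same.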

Continuous delivery ensures that software is constantly kept ready for release by automating the steps highlighted in continuous integration along with other steps such as unit testing, load testing, integration testing, API reliability testing, etc. This helps developers discover any issues pre-emptively rather than at a later stage. Whether the software actually gets released to production depends on other factors, including prioritization of the various business functions by the product owner. Deployment to production therefore waits for a manual approval trigger. Through the practice of continuous delivery, high-quality software is ready to be deployed quickly to production, reducing the risk of suboptimal code being released while ensuring speed and fast time to market.

Continuous deployment is similar to the continuous delivery process except that delivery is automated all the way to production and not merely to a staging environment. In general, unless you are confident in automated deployment, this practice is not recommended. Usually, someone does a final manual check of other dependencies before code is deployed to production. Most organizations prefer to take the process all the way to continuous delivery and then wait for a manual check and validation of other dependencies. However, depending on your business situation and your process and environment maturity, you can consider instituting this practice with care. The sketch below contrasts the two approaches.
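The following sketch is illustrative only; the stage functions are hypothetical placeholders. The pipeline is identical in both practices, except that in continuous delivery the production step waits for a manual approval, while in continuous deployment it proceeds automatically.

```python
AUTO_DEPLOY = False  # False = continuous delivery, True = continuous deployment

def deploy_to_staging(build_id: str) -> None:
    print(f"Deploying {build_id} to staging")  # placeholder stage

def deploy_to_production(build_id: str) -> None:
    print(f"Deploying {build_id} to production")  # placeholder stage

def release(build_id: str) -> None:
    deploy_to_staging(build_id)
    if AUTO_DEPLOY:
        # Continuous deployment: no human in the loop past this point.
        deploy_to_production(build_id)
    else:
        # Continuous delivery: the build is ready, but waits for approval.
        if input(f"Release {build_id} to production? [y/N] ").strip().lower() == "y":
            deploy_to_production(build_id)

release("build-1042")
```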

Infrastructure as code facilitates the configuration of infrastructure components such as servers through code. In traditional environments, engineers manually provision and configure servers and apply patches to the multiple servers in various environments such as dev, test, pre-production, and production. As technological advances in cloud computing have allowed engineers to interface with infrastructure through APIs and code, they can now provision and configure servers using software, simplifying and accelerating the entire process. A minimal provisioning sketch follows.
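As a hedged illustration of infrastructure as code, the sketch below provisions a tagged server with boto3, the AWS SDK for Python. The AMI ID and tag values are placeholders, and valid AWS credentials are assumed to be configured in the environment.

```python
import boto3

# Connect to the EC2 service in a chosen region.
ec2 = boto3.resource("ec2", region_name="us-east-1")

# Provision a server through code rather than a manual console session.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "dev"}],
    }],
)
print("Launched:", instances[0].id)
```

Because the configuration lives in code, the same script can be versioned, reviewed, and replayed across dev, test, pre-production, and production environments.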

 

So, when we look at these practices of team collaboration and lean processes that fuse development, testing, deployment, and monitoring, we see that they have come from the Agile and Lean approaches that started a few years ago. If you are an IT or technology executive, this should provide a cue: regardless of your IT maturity, ensure that your teams understand the fundamentals of Agile and Lean, as that knowledge can help them formulate your organization's processes.

DevOps Tools

Now we will look at DevOps tools. These tools span a number of areas including the building and compiling of software, testing, configuration management, application deployment, monitoring, version control, and others. Other tools are used in the areas of continuous integration, continuous delivery, and continuous deployment. These tools, together with the emergence of virtualization, allow organizations to deploy digital services quickly to the business.

Basically, within DevOps, tools are needed for a wide variety of activities and functions, some of which are the following:

  1. Building and provisioning of servers
  2. Virtual infrastructure provisioning – This refers to using APIs and other tools to provision other parts of the infrastructure, either in your own cloud environment or in public cloud environments such as Amazon's AWS and others.
  3. Building code – These tools compile code into executable components.
  4. Maintain source code repositories
  5. Configuration management – These tools facilitate configuration of development environment and servers.
  6. Testing and automation – These tools automate the running of unit, integration, and other test suites across the pipeline.
  7. Version control – This ensures that all code history is maintained in the repositories using version numbers. With numerous developers checking code in and out, tools can track previous histories using automated versioning. Also, if there is ever a need to revert to the previous version of functioning software in a production environment, these tools allow that very easily (see the rollback sketch after this list).
  8. Pipeline orchestration – These tools orchestrate the entire pipeline, from the time software is ready all the way to deployment. Other tools provide complete visibility from beginning to end.
  9. And so on.
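As a small illustration of the rollback capability mentioned under version control above, the sketch below reverts the latest commit on a release branch with plain git commands; the branch name and repository path are hypothetical.

```python
import subprocess

def rollback_last_release(repo_path: str, branch: str = "production") -> None:
    """Undo the most recent commit on the branch with a new revert commit,
    preserving full history rather than rewriting it."""
    subprocess.run(["git", "checkout", branch], cwd=repo_path, check=True)
    subprocess.run(["git", "revert", "--no-edit", "HEAD"], cwd=repo_path, check=True)
    subprocess.run(["git", "push", "origin", branch], cwd=repo_path, check=True)

# Example usage (path is hypothetical):
# rollback_last_release("/srv/app-repo")
```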

 

Over the past few years, a number of tools have surfaced, with features too numerous to mention, but here we will cover some of them along with their primary features. With time, these tools mature and incorporate additional functionality. Here is an overview of some of those tools.

  • Jenkins – One of the most common and popular tools; it addresses various facets of both continuous integration and continuous delivery.
  • Vagrant helps DevOps teams create and configure lightweight development environments. It falls under Infrastructure as Code and essentially lets developers create a single file per project describing the type of machine they want, the software that needs to be installed, and the access rules for the machine. Vagrant then uses that file to provision development environments.
  • Splunk – This tool provides operational intelligence to the teams and is based on data analytics.
  • Nagios – Monitors the infrastructure components such as applications, services, operating systems, network protocols, system metrics, and network infrastructure.
  • Chef is another popular tool that turns infrastructure into code so that users can easily and quickly adapt to changing business needs.
  • Docker – An open, integrated tool that allows DevOps teams to build, ship, and run distributed containerized applications.
  • Artifactory is a universal code repository manager that supports software packages created in any language or technology.
  • JIRA – This is one of the very popular tools used by Agile teams. This tool is used by DevOps teams for issue and project tracking.
  • ProductionMap is another popular tool with advanced orchestration and development features. It enables teams to develop and execute complex automation across large numbers of servers and hybrid technologies.
  • Ansible is a DevOps tool for automating your entire application lifecycle. Ansible is designed for collaboration and makes it much easier for DevOps teams to scale automation, manage complex deployments, and speed productivity.

 

If you work with public cloud frameworks such as Amazon's AWS or Microsoft's Azure, then you will have to integrate with their specific tools and solutions. For example, in the AWS world, you have access to the following (there are others as well, but we will cover the key ones here; a short sketch of driving these services from code follows the list):

  • AWS CodePipeline – Addresses both the continuous integration and continuous delivery practices and, when configured properly according to your workflows, allows a complete, smooth DevOps pipeline.
  • AWS CodeBuild – As the name suggests, this tool is used to build software from checked-in repositories and to perform testing. It also ensures that one doesn't have to worry about server provisioning, etc., as that is taken care of in the background.
  • AWS CodeDeploy – This AWS service automates the deployment of code, including to production, on AWS server instances.
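As a hedged sketch of how these services can be driven from code, the snippet below starts a CodePipeline run and creates a CodeDeploy deployment with boto3. The pipeline, application, deployment group, and S3 locations are hypothetical names used for illustration.

```python
import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")
codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Kick off a run of an existing pipeline (name is a placeholder).
execution = codepipeline.start_pipeline_execution(name="my-app-pipeline")
print("Pipeline execution:", execution["pipelineExecutionId"])

# Deploy a packaged revision from S3 to a deployment group of instances.
deployment = codedeploy.create_deployment(
    applicationName="my-app",            # placeholder application
    deploymentGroupName="production",    # placeholder deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-app-releases",  # placeholder bucket
            "key": "releases/app.zip",
            "bundleType": "zip",
        },
    },
)
print("Deployment started:", deployment["deploymentId"])
```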

 

DevOps Transitioning

Finally, we will discuss some of the steps that organizations can take to institute DevOps in their environments.

  • Start small – Start with a small project and take it through a continuous integration and delivery pipeline and then to deployment. Essentially, get your teams to understand the technicalities of instituting a DevOps pipeline, from planning and development of code to deployment, monitoring, and collecting feedback.
  • Focus on the cultural aspects – In parallel, start to focus on the cultural aspects. That's very important. DevOps is not merely about getting a bunch of tools and making them work. If the cultural aspects discussed earlier, such as collaboration and bringing down silos, are not taken care of, the effort won't yield fruitful results.
  • Define a workflow specific to your environment – Next, as you gradually mature and start to piece together various tools to support the development and deployment processes in your environment, define a specific workflow or workflows appropriate for your software environment. For example, you may have multiple and hybrid development environments involving Docker, Kubernetes, legacy applications, public cloud environments, and more. Ensure that your defined workflows support all those scenarios.
  • Select the right tools to define your workflow – Depending on the workflows you define, ensure that you pick the right tools that integrate tightly to form an integrated DevOps pipeline. That is essential for maximum automation, which in the end will help you achieve high-velocity development, delivery, and deployment.
  • Establish business-level metrics and measure maturity over time – Finally, institute the right metrics so that you can measure your organization's maturity over time in terms of delivering more software, the quality of deployed software, and so on (a small sketch of two such metrics follows this list).
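As an illustrative sketch (the release log below is made-up sample data), two commonly tracked metrics, deployment frequency and change failure rate, can be computed from a simple record of releases:

```python
from datetime import date

# Each record: (deployment date, whether it caused a production incident).
releases = [
    (date(2024, 1, 2), False),
    (date(2024, 1, 3), True),
    (date(2024, 1, 5), False),
    (date(2024, 1, 5), False),
]

span_days = (max(d for d, _ in releases) - min(d for d, _ in releases)).days or 1
deploys_per_day = len(releases) / span_days
change_failure_rate = sum(1 for _, failed in releases if failed) / len(releases)

print(f"Deployments per day: {deploys_per_day:.2f}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

Tracked over time, rising deployment frequency with a flat or falling failure rate is a simple signal of growing DevOps maturity.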

 

With this we come to the conclusion of this presentation. To learn more on other topics, subscribe to my YouTube channel, where I post a number of best practices related to digital transformation.


Six Cloud Migration Strategies (Based on Gartner and Amazon Methodologies)


This post discusses six cloud migration strategies, based on Gartner and Amazon methodologies.

In today's episode I will cover the six fundamental migration strategies that organizations have at their disposal when migrating to the cloud. These strategies are based on Gartner's research and on the work Amazon has done in helping its customers migrate to the cloud. Both Gartner and Amazon discuss them extensively on their blogs and websites. As a technology executive, if you are still in the early phases of your cloud migration journey, a review of these strategies can help you develop the right mental models, which in turn can guide your own digital transformation journey.

So, let’s get started.

One of the key phases that every organization goes through when considering migrating its legacy systems to the cloud is that of a discovery process. In this phase, the organization essentially takes a detailed inventory of its systems and then decides one by one on the effort and cost required to do the migration. This step is usually done by keeping the overall business case and objectives of the migration in perspective. For each of the applications and systems in its inventory, the organization may decide on a specific migration strategy or approach. We will discuss those strategies next.

Re-hosting – The first strategy is re-hosting. Also referred to as lift and shift, it involves migrating a system or application as-is to the new cloud environment. The focus is to make as few changes to the underlying system as possible. During the discovery process of the migration planning exercise, systems that qualify for such a migration are usually considered quick wins, as they can be migrated with minimal cost and effort. However, as the migration is a simple lift and shift, such a system isn't expected to utilize cloud-native features and thus isn't optimized to run in a cloud environment. Thus, depending on the system, it may even be more expensive to run the migrated system on the cloud. These issues should be considered before categorizing a system for this type of migration.

Refactoring – Refactoring is the second migration strategy and falls on the other extreme of the migration effort, because it requires a complete change and reengineering of the system or application logic to make full use of cloud features. When complete, however, the application is fully optimized to utilize cloud-native features. So, even though the cost and effort required for this migration can be quite high, in the long run this approach can be efficient and cost effective because the application is reengineered to make use of cloud-native features. A typical example of refactoring is changing a mainframe-based monolithic application into a microservices-based architecture. When categorizing an application for refactoring, the business should perform a detailed business case analysis to justify the cost, effort, and potential business impact, and to ensure that other alternatives are considered as well.

Replatforming – This type of migration is similar to re-hosting but requires a few changes to the application. Amazon's AWS team refers to this approach as lift-tinker-and-shift. Even though it closely resembles re-hosting, it's categorized differently simply because it requires some changes. For example, in such migrations an organization may plug its application into a new database system on the cloud, or change its web server from a proprietary product such as WebLogic to Apache Tomcat, an open-source web server. For planning purposes it's important to categorize it as such. Obviously, if a system or application is going to be modified, even slightly, it may need to be put through more thorough re-testing processes.

Repurchasing – This migration strategy entails essentially switching the legacy application in favor of a new but similar application on the cloud. Migrating to a SaaS based system would be an example of such a migration where an organization may decide to migrate from its legacy financial system to a SaaS based financial ERP system.

Retire – The fifth strategy is about retiring systems and applications that an organization no longer needs. During the discovery process, an organization may find applications as part of its inventory that are no longer actively used or have limited use. In such cases, those types of applications may be considered for retirement and users of those systems (if any) can be provided other alternatives.

Retain – In some cases, the organization may decide not to touch certain applications and systems and to postpone their migration to a later time. This may be because the applications are too critical to touch at that point in time, or because they require a more thorough business case analysis. Either way, it's normal for organizations to leave some applications and systems untouched during their cloud migration efforts. However, in certain cases such as a data center migration, organizations may not have a choice and will have to consider one of the strategies described earlier.

To conclude, although the strategies I have covered address most common cloud migration scenarios, as a technology executive you can devise other categories based on your business needs. Defining these migration categories and their criteria upfront can be a major and helpful step in migrating one's legacy systems to the cloud.

Hope this session was useful. Again, to ensure that you don’t miss any future episodes, do subscribe to this channel.

— End

Public and Private Cloud Environments – Pros and Cons


If you are debating the pros and cons of a public cloud versus a private cloud solution, the following sheds some light on the problem.

Here are the pros of a public cloud environment (and associated cons of a private cloud)

  • There are still Capex costs associated with setting up a private cloud environment. You will also need staff to maintain that environment.
  • If your applications experience numerous peaks during the year, you will still have to plan to ensure availability of enough compute and storage capacity. Besides, as this is your equipment, there are costs associated with the infrastructure even if your applications are not using that infrastructure.
  • Unlike private clouds, public clouds provide unlimited elasticity without you worrying about reaching your limits. Resources are therefore available to your applications in a public environment on an on-demand basis.
  • If you are planning to use diverse cloud services (Big Data, etc.), your staff will have to have a vast array of skills.
  • You will have to plan for your own Disaster Recovery solutions. That may mean maintaining private cloud environments in different regions (assuming you operate in multiple geographical regions).

Here are the pros of a private cloud environment (and related cons of a public cloud environment)

An enterprise may consider the use of private cloud in the following situations:

  • Although cloud providers have gotten very good at securing their clients' environments, it's possible that, due to regulatory and other reasons, the security provided by public cloud environments is not sufficient for your needs and you want to completely isolate your environment. In those cases, a private cloud environment is the better way forward.
  • You want to have more control over your environment. In many cases, this may not be an issue, as public cloud providers can provide you a lot of control of your environments.

— End

Major dimensions of an enterprise’s cloud migration strategy


Formulating an enterprise's cloud strategy involves looking at the problem from a number of dimensions and then asking relevant questions related to each dimension. Doing so requires an extensive understanding of an organization's current infrastructure, application architecture, business requirements, and overall business and IT strategy. This post reviews some of the key dimensions related to formulating an enterprise's cloud strategy.

Usually an organization that is looking to get a good handle on its cloud strategy already has a number of cloud-related initiatives live or in the pipeline. For example, the sales and marketing group may already be using cloud solutions from salesforce.com, or certain LOBs (Lines of Business) may already be experimenting in a "Shadow IT" setting, and so on. Seeing the different groups and departments of the organization pursuing their own agendas, the CFO or the CIO usually jumps in to define an enterprise-wide unified cloud strategy to manage and control spending and to guide the enterprise through the cloud migration journey.

Crafting a cloud strategy in light of the dimensions delineated later in this post necessitates that an organization think through the types of cloud services it will be using. This classification usually involves the following four layers of the cloud:

  1. IaaS – This refers to services from a compute, storage, and network perspective.
  2. PaaS – This refers to platform services such as application development and integration, middleware, analytics, etc.
  3. SaaS – This refers to applications that are hosted and maintained by the cloud services provider. Examples include salesforce.com applications, or Oracle enterprise applications that are hosted on the Oracle cloud.
  4. DaaS – This refers to the data that enterprises can leverage to advance their business outcomes. A number of DaaS offerings have made this a reality. Examples include data profiles of customers belonging to different industry segments and in different markets, data insights related to certain class of customers, etc.

Below are some of the major dimensions that should guide an enterprise’s cloud strategy.


Choice of Vendors

This dimension requires an analysis of whether an organization has a preferred-vendor strategy for moving to the cloud. The market has numerous cloud providers, with Amazon, Microsoft, Oracle, Salesforce.com, and Google leading the pack. The answer to this question usually involves analyzing the following factors:

  • An understanding of the enterprise's current use of cloud services and vendors. For example, if your organization has already invested in a specific vendor who in turn is helping you deliver specific business outcomes, then you may have an inclination to continue along the same path.
  • Desired and expected use of cloud services. For example, if an enterprise knows it will need certain cloud services (e.g. big data or IoT solutions) that certain vendors are better at delivering based on the organization's requirements, that may drive the relevant vendor decisions.
  • Some organizations have invested in legacy enterprise applications whose vendors now offer cloud versions of those applications. Oracle is an example: it now offers its enterprise applications (e.g. for HR and HCM) in a cloud environment. In such cases, the application and platform layers integrate more seamlessly, driving organizations to opt for those vendors.

Public Cloud vs Private Cloud

Private cloud refers to the network that resides behind an organization's firewall. This means that the organization is usually responsible for the complete management and maintenance of all aspects of the cloud, hence the term 'private cloud'. In a public cloud, an organization's infrastructure, data, and/or applications are managed and controlled by the cloud service provider. Although an organization's data is separate and secure, the hosting is still in a shared environment. The biggest difference between the two, therefore, is the extent of control that an organization has over its cloud environment. When formulating a cloud strategy, an organization must therefore decide not only which application workloads will be migrated to the cloud but also whether the migration will be to a private or public cloud.

Innovation Initiatives

The many innovation initiatives that an organization has in its pipeline can have a major impact on the organization’s cloud strategy. As mentioned earlier, for example, if an organization is planning to do ventures in the areas of big data and analytics, IoT, and other such innovations, this can accordingly shape an organization’s cloud strategy.

Application Workload Analysis

This requires an analysis of an enterprise's application architecture, reviewing the various applications and the plans for their migration to the cloud. Each application must be analyzed in terms of its migration complexity and feasibility. This analysis will bring to light whether some applications are candidates for a simple rehost or "lift and shift" approach, or whether they need to be completely re-architected before they are migrated to the new cloud environment.

Business Priorities and Roadmaps

A cloud strategy must incorporate the various LOBs' ongoing business plans and priorities to get the right buy-in from all the relevant stakeholders. An enterprise's cloud strategy is usually hatched in IT or the CFO's office; in the absence of one, businesses proceed with their own plans. When formulating an enterprise-wide cloud strategy, it's therefore vital to discuss with LOBs their business requirements, urgency, and roadmaps, if any. Since the IT department normally has its own cloud initiatives in the pipeline, those should be factored in as well.

Shadow IT

As mentioned earlier, in the absence of a cloud strategy, many LOBs and other departments have their own shadow IT initiatives where they test and experiment with their specific product and service initiatives. A cloud strategy must therefore address the requirements of those departments and bring them under a unified enterprise cloud strategy.

Data Center Strategy

Cloud services are forcing user organizations to rethink their data center strategies as well. Industry pundits are already predicting a dramatic reduction in organizations' data center footprints over the next few years. This thinking should also be factored in as an organization decides on its cloud strategy.

Designing the new Cloud ecosystem

Whether you know it or not, the application workloads in your current computing environments have an ecosystem of their own. For example, your application workloads have certain levels of security, are monitored to a certain degree in your data center, interface with other systems and applications, and are surrounded by other related services. Therefore, as you start to devise your cloud strategy, you should be aware of (and design) the new ecosystem that will exist in the cloud environment. Your overall cloud migration strategy should be devised based on the new ecosystem of services that your application workloads will require to run in the new environment.

Bringing it all together

Getting answers for each dimension requires interviews and data collected through other means, with input from the following:

  1. Interviews with LOB executives and their users
  2. Interviews with the CIO and other IT executives
  3. Interviews with the application architecture group
  4. Review of the IT and enterprise architecture
  5. Review of an enterprise’s strategy

Information obtained through these documents and interview sessions can thus provide a first baseline for a relevant cloud strategy. The strategy must then be validated with the key stakeholders before obtaining final consensus and publishing it for all.

— End


Reference Cloud Architectures and Services


 

Hybrid Architecture

This configuration allows enterprises to connect their private cloud network with a public cloud network. Enterprises can use their private cloud network to host enterprise applications and databases and can connect to the public cloud for other services. Connections can be direct or via a VPN over the Internet.

 

Content Management Systems Architecture

This architecture is suited for use cases that are heavy on serving content. Because the architecture is content-heavy, extensive caching is used, including static content caching and database query caching.

 

E-Commerce Architecture

The primary components of this architecture typically include:

  • Web / Application servers
  • Database servers
  • File servers
  • CDN
  • Payment gateways


IaaS Services Architecture

The following are typical services that organizations contract and manage in their IaaS Cloud Computing portfolio.

  • Servers
  • Storage
  • Virtualization
  • Operating Systems
  • Network services

PaaS Services Architecture

  • Database management
  • Business analytics engine
  • User management
  • Identity and Access management

 

SaaS services architecture and portfolio

  • Financials applications (e.g. Oracle or Workday)
  • Sales and Marketing (e.g. Salesforce.com)
  • Product Lifecycle Management
  • Project and Portfolio Management
  • Procurement
  • Service management
  • Supply Chain and Logistics

 

Strategic and Management issues Related to Multi-Cloud Environments


As a number of enterprises have now been living in cloud environments for a few years, many have transitioned to using multiple cloud services concurrently. For example, your organization may have deployed certain application workloads in a Microsoft Azure environment while another workload runs on OpenStack. Similarly, another business unit of your organization may be using IaaS services from Amazon AWS. The reasons that an enterprise ends up in multiple cloud environments vary. Some of them are listed below:

  1. An enterprise may use IaaS services from one cloud vendor while using SaaS services from another vendor.
  2. Multiple LOBs may use different cloud service providers independently for their own business needs.
  3. One workload may be deployed across multiple cloud service providers – also called an active-active configuration.
  4. One workload may be deployed on one cloud service provider with a backup on another cloud service provider (active-passive).

Different Lines of Business (LOBs) or business units may contract their own cloud services for their specific business needs. Alternatively, an enterprise may strategically opt to use multiple cloud services for redundancy, to reduce its risk: a cloud service provider may go out of business or experience technical malfunctions in its cloud environment. Both Amazon's AWS and Microsoft's Azure platforms, for example, have experienced downtime in the past due to hardware failures.

Managing the Security, Governance, and Systems Management Issues

Managing a multi-cloud environment can introduce security, governance, and overall systems management complexity that an enterprise must handle. As the whole idea of moving to the cloud is to reduce systems management complexity, it's important that you, as a CIO or senior manager, do not introduce more complexity to your organization when you end up with a multi-cloud environment.

The following are some tips for managing a multi-cloud environment in an enterprise.

  1. Ensure you have a complete view of the network and application architecture as it relates to the cloud services (a minimal inventory sketch follows this list). This architectural view should show all the application workloads in use, how they are distributed across the various CSPs, the users utilizing those workloads, the internal employees using the various infrastructure services of the cloud, and so on.
  2. Use the complete view of the network to assess any security and compliance issues that the overall cloud services can cause.
  3. As moving to a cloud is meant to reduce cost and complexity, work with your CSPs to automate any tasks that can be done easily to minimize errors and increase efficiency.
  4. Attempt to standardize the governance of the overall cloud architecture run by different vendors by having uniform policies and standards.
  5. Recently, a number of multi-cloud management systems have surfaced to unify the management of clouds from different vendors. However, as these tools are still in their infancy, they support only the big cloud platforms and may not work across all the clouds that your organization manages. As new products and features are released constantly, it's best to check the current tools in the market and their relevance to your cloud environments.
  6. Securing a cloud environment involves securing applications, databases, infrastructure, and network access, along with strong staff access policies. In a multi-cloud environment, these issues multiply. Ensure that you do a detailed, multi-level assessment and comprehensive security planning. It's important to remember that while you may be outsourcing operations to an external partner, as a CIO or senior IT executive you are still responsible to your stakeholders for the operations, security, and privacy of your processes and data.
  7. Depending on the complexity of your cloud environments, you may consider establishing a governance office with defined management and governance functions. This will help you create uniform standards by which to govern methodically across the various cloud service providers. The internal governance function will also serve as the main point of contact for your internal management and users of your organization whose services and applications are running on those cloud environments.
  8. Ensure that you have the right staff with the right skillset to manage the multi-cloud environment. A common perception that enterprise managers develop early on is that, because they are outsourcing most functions to an external provider, they have no need for internal technical and other skilled resources. This is untrue, and it's important to consider this facet as you transition to a multi-cloud environment that could be based on multiple technologies and vendor platforms.
  9. Enterprises should be wary of integration issues in multi-cloud environments. Each cloud vendor operates on its own technology stack, and standards don't yet exist to provide application and platform portability across varied cloud environments.
  10. When selecting multi-cloud management tools (e.g. RightScale Cloud Management), ensure that they provide features designed to alleviate the issues and problems specific to your business and enterprise. When evaluated appropriately and selected, multi-cloud management tools can provide visibility across public and private clouds, bare metal servers, and other servers. The management software provides a governance layer that can help enterprises self-manage clouds from an integrated console and can use analytics to give deeper insight into multi-cloud operations, bringing efficiency to the process.
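As a minimal sketch of tip 1 above, the snippet below builds a consolidated inventory of EC2 instances across AWS regions with boto3. Entries from other providers' SDKs (Azure, Google Cloud, etc.) would be appended to the same structure; valid AWS credentials are assumed to be configured.

```python
import boto3

inventory = []
regions = [
    r["RegionName"]
    for r in boto3.client("ec2", region_name="us-east-1").describe_regions()["Regions"]
]
for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            inventory.append({
                "provider": "aws",  # other clouds would add their own entries
                "region": region,
                "id": instance["InstanceId"],
                "state": instance["State"]["Name"],
            })

print(f"{len(inventory)} instances across {len({i['region'] for i in inventory})} regions")
```

A consolidated structure like this is the raw material for the security, compliance, and governance assessments described in the tips above.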

 

— End