
Data Center Migration Success: Optimization

Published on June 22, 2021 by

Andrew Sweeney

This is part 4 of a 4-part series on how to set up your data center migration for success. Here we go over the final stage: optimization.

If you’re looking for the other steps in this journey, check out our other blog posts in the series:
Part 1: Data Discovery
Part 2: Planning
Part 3: Execution

Congratulations! You’ve survived your data center migration project. You’ve managed the discovery, planning and execution stages and everything is up and running. But if you think your work is done, think again.

Data has become the most valuable asset of an enterprise and protecting that data, as well as enterprise end user and customer access, is a primary concern of any IT infrastructure manager. It’s estimated that data center downtime can cost US$9,000 per minute, so even once you have executed your data center migration, you need to focus on monitoring your assets and environment to optimize performance and safeguard against issues.

 

Reasons for Data Center Outages

Any data center outage can cause issues: it can impact employee or customer access, halt important business functions, and lead to a loss of revenue. Outages happen for many reasons, including:

  • Uninterruptible Power Supply (UPS) battery failure
  • Distributed denial-of-service (DDoS) or other cyber attacks
  • Environmental and weather issues
  • Hardware or software failure
  • Human error

In fact, while human error is cited as only one of the reasons, it's a big contributor to many of the others: it has been found that 70% of failures can be attributed to human error. Whether that's mishandling equipment such as batteries or hardware, skipping steps in the maintenance process, failing to protect equipment correctly, or even failing to connect backup power, if you are monitoring and optimizing your on-prem data center you need to make sure everyone follows processes to the letter.

If you have moved to a public cloud service provider, you will have little or no control over resources. You should, however, continue to monitor activities to ensure performance is maintained in line with your service level agreement(s).

Monitor Your Data Center

To mitigate risks and optimize your resources, you should monitor your data center facilities and assets across your entire environment as an ongoing program, focusing on:

  • Status of your UPS batteries, to pick up on any issues before they cause downtime.
  • Maintaining heating, ventilation and air-conditioning (HVAC) levels and being aware of changes that can affect hardware performance.
  • Understanding application usage and ensuring your software is current to maintain security.
  • Network traffic and service activity, to understand and mitigate any fluctuations that may result from a security breach. Even though CSPs have stringent security in place, unusual fluctuations can still signify a breach.
  • Having a mitigation plan in place, ensuring personnel know the process to correct issues and keep downtime to a minimum, whatever the cause of the outage.
  • The risk of natural disasters in your data center location and having a plan to protect assets.
  • Ensuring georedundancy by backing up data in more than one physical data center or with more than one CSP, so that issues in one facility or with one CSP will not affect your end users and customers.
  • IT asset lifecycle management (ITALM), to ensure you know when any of your physical assets are nearing end-of-life (EOL) or coming to the end of their lease.
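The ITALM point above is the kind of check that is easy to automate. As a minimal sketch, the snippet below flags assets whose EOL or lease-end date falls within a planning horizon; the inventory records, field names, and 90-day horizon are all illustrative assumptions, not the schema of any real ITAM tool.

```python
from datetime import date, timedelta

# Hypothetical inventory records: asset name plus EOL (or lease-end) date.
# Real ITAM/ITALM systems expose far richer data than this.
INVENTORY = [
    {"asset": "ups-battery-01", "eol": date(2021, 8, 1)},
    {"asset": "rack-switch-07", "eol": date(2023, 1, 15)},
    {"asset": "san-array-02",   "eol": date(2021, 7, 10)},
]

def assets_nearing_eol(inventory, today, horizon_days=90):
    """Return asset names whose EOL/lease-end falls within the horizon."""
    cutoff = today + timedelta(days=horizon_days)
    return sorted(item["asset"] for item in inventory if item["eol"] <= cutoff)

print(assets_nearing_eol(INVENTORY, date(2021, 6, 22)))
# with a 90-day horizon, both assets with 2021 dates are flagged
```

Running a report like this on a schedule, rather than checking spreadsheets by hand, means refresh and lease decisions are driven by data instead of memory.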

By continually monitoring your data center environments and incorporating that information into your wider IT asset management (ITAM) program, you can also benefit from:

  • Optimized performance.
  • Cost savings, in terms of right-sizing software licenses and removing support for data, applications or assets that you no longer use.
  • Increased end user and enterprise customer satisfaction.

You need to be able to quickly react to changes and issues to limit their impact and report your findings, actions, and recommendations to key project stakeholders. That means regularly pulling together information across your physical and virtual environments and spending time communicating with teams at your data center locations as well as your cloud service provider.

Given that human error is one of the biggest contributors to data center failure, limiting the number of manual processes you rely on, such as data collection and analysis, could dramatically reduce issues.
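To make the manual-process point concrete, here is a minimal sketch of replacing hand collation with an automated check: readings from multiple sites are aggregated and any site breaching a threshold is flagged. The site names, temperature values, and the 27 °C HVAC ceiling are assumed for illustration only.

```python
# Assumed safe operating ceiling for HVAC intake temperature (illustrative).
HVAC_MAX_TEMP_C = 27.0

# Hypothetical per-site temperature readings collected over a monitoring window.
readings = {
    "dc-east": [22.5, 23.1, 24.0],
    "dc-west": [26.8, 27.4, 28.1],  # trending above the ceiling
}

def collate_alerts(readings, ceiling):
    """Aggregate per-site readings; return sites whose peak exceeds the ceiling."""
    return {
        site: max(temps)
        for site, temps in readings.items()
        if max(temps) > ceiling
    }

print(collate_alerts(readings, HVAC_MAX_TEMP_C))
# only dc-west exceeds the ceiling, with a peak of 28.1
```

A script like this never skips a site or mistypes a number, which is exactly the class of human error the statistic above points to.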

Leverage Intelligent Automation 

ReadyWorks automates 50% or more of manual tasks across a data center migration project and beyond, including data collation, analysis, planning, execution, and reporting. By connecting to and orchestrating your tools, databases, and systems, ReadyWorks simplifies the monitoring and optimization process by:

  • Automatically collating data to give you a centralized, real-time view of data center assets and their interdependencies.
  • Providing access to dashboards and reports so that you can:
    • See hardware nearing end-of-life or end-of-lease.
    • Right-size software licenses and costs for your application usage.
    • Orchestrate tools and leverage automated runbooks to move workloads from one cloud or CSP to another, giving you greater agility.
    • Provide reports to key stakeholders in customizable views to meet their needs.
    • Incorporate the data into other IT transformation programs.
  • Automating tool, team and system tasks triggered by outage events, so that you can implement mitigation plans that limit the impact of outages. 

Schedule a demo to see how ReadyWorks can help you optimize your data center performance as an ongoing process.