Now that companies have moved beyond a cloud-first approach, many are realizing that not all workloads need to move to the cloud right away – or at all. Lately there has been a lot of talk about repatriating workloads back on-prem, or from public to private clouds.
A 2019 Gartner study called repatriation the exception rather than the rule, finding that only 4% of organizations were moving workloads back from the public cloud. But IBM’s CTO for Asia-Pacific has said cloud repatriation is real, and attributed it to two failures: 1) not turning off resources that are no longer being used, and 2) not understanding the data consumption rates of applications.
Clearly companies are rethinking which workloads should remain on-prem, but how many? What is the right blend of cloud and on-prem, and based on what criteria?
In 2018, IDC published a report noting the accelerated pace of repatriation in a multi-cloud world: while companies were increasing public cloud spend, they were also increasing investments in on-premises private cloud solutions to better address security, cost, latency, and control requirements. A year later, IDC reported that a whopping 80% of companies planned to repatriate 50% of their public cloud workloads. The top reasons remained the same, but it was unclear how many of these organizations actually acted on their plans.
Recently, in its 2021 State of Hybrid Cloud Survey, Virtana found that 72% of participating organizations had moved applications back on-prem after migrating them to the public cloud.
What’s driving the repatriations?
Dropbox offers the most significant and best-known example of a company that repatriated workloads to reduce costs; it left AWS to host its workloads on its own infrastructure, saving roughly $75M over two years. Dropbox is in the storage business, so owning its own infrastructure made sense.
But each organization must weigh the many factors driving its own repatriation, and the decision is rarely driven by a single factor such as cost. In fact, one-third of the respondents in the Virtana study cited two or more factors as driving their decision to move back on-prem.
The Virtana data revealed that several distinct issues, not just cost, were driving repatriation.
In 2018, Morgan Stanley released a financial report projecting revenue growth for hardware even while acknowledging a massive enterprise shift to running applications in the cloud. The projection may seem contradictory, but the researchers found that the slowdown in hardware sales wasn’t because companies were choosing cloud over on-prem; rather, companies were deferring hardware purchases as they struggled to figure out what percentage of workloads to migrate, which workloads to migrate, and where to migrate them.
And today the struggle continues, as companies rethink their approach to cloud migration, realize that not everything needs to go to the cloud, and adjust some of the decisions they made early in their journey. Making better, more informed decisions requires a clearer understanding of the value the cloud offers your organization and of how you can best deliver the expected business outcomes.
Andreessen Horowitz wrote a blog post that simplifies the issue: if infrastructure is core to a company’s business, it may be more cost effective for that company to run workloads on its own infrastructure. For everyone else, the decision is less clear, and is more about finding the optimal mix of on-prem and public cloud hosting.
The post also cited two supporting data points from industry experts.
Bottom line, IT is the engine that drives innovation for the enterprise, and it is an environment of constant change. Planning and executing strategic projects such as a cloud migration – for both the short term and the long term – while in the midst of rapidly changing business requirements, security threats, and privacy concerns can be daunting and risky.
To succeed, IT needs to understand the requirements of each workload before asking where it should run. Machine learning (ML) workloads are a good example: depending on what they do, they have significantly different compute needs and therefore different cost models. ML workloads that run constantly and consume significant compute can be run on-prem, since the always-on cost in the cloud would be much higher. But ML workloads used to train algorithms don’t run all the time, and the public cloud’s pay-as-you-go infrastructure is ideal for supporting them.
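The always-on versus bursty distinction above can be sketched with a toy break-even calculation. All prices, hardware lifetimes, and helper names below are illustrative assumptions, not real vendor rates; the point is only that a flat amortized on-prem cost wins for 24/7 workloads, while pay-as-you-go wins for intermittent ones.

```python
# Hypothetical cost sketch: amortized on-prem hardware vs. on-demand cloud
# pricing, as a function of how many hours per month a workload actually runs.
# All figures are made-up illustrative numbers, not real vendor rates.

def monthly_cost_on_prem(server_capex: float, lifetime_months: int,
                         monthly_opex: float) -> float:
    """Amortized monthly cost of owned hardware: capex spread over its
    useful life, plus power/cooling/admin opex. Paid whether or not the
    workload is running."""
    return server_capex / lifetime_months + monthly_opex

def monthly_cost_cloud(hourly_rate: float, hours_used: float) -> float:
    """Pay-as-you-go cloud cost: you pay only for the hours you run."""
    return hourly_rate * hours_used

# Illustrative assumptions: a $20,000 server amortized over 36 months with
# $300/month of opex, versus a comparable cloud instance at $3.50/hour.
on_prem   = monthly_cost_on_prem(20_000, 36, 300)  # flat, ~$856/month
always_on = monthly_cost_cloud(3.50, 730)          # 24/7 inference-style load
bursty    = monthly_cost_cloud(3.50, 120)          # occasional training runs

print(f"on-prem (flat):        ${on_prem:,.0f}/month")
print(f"cloud, always-on:      ${always_on:,.0f}/month")
print(f"cloud, bursty (120h):  ${bursty:,.0f}/month")
```

Under these assumed numbers the constantly running workload is far cheaper on owned hardware, while the intermittent training workload is far cheaper in the cloud; the crossover point shifts with the actual rates, utilization, and discounts an organization can negotiate.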
When IT has a deep understanding of its environment and is armed with actionable data about its assets and infrastructure, the team can make decisions with confidence, successfully navigate the cloud journey, deliver business outcomes, and avoid getting sidetracked by trends or hype.
Want to learn more? Here are some blogs we’ve previously shared that may help you make better decisions and deliver outcomes that align with strategic business goals.