Some cloud-based AI systems are making their way back to on-premises data centers

As a concept, AI is very old. My first job out of college nearly 40 years ago was working with an AI development system based on Lisp. Many concepts from that era are still in use today. What has changed is cost: creating, deploying, and operating AI systems for commercial purposes is now roughly a thousand times less expensive.

Cloud computing has revolutionized artificial intelligence and machine learning, not because cloud providers invented these technologies, but because the cloud made them accessible to everyone. However, I and others see a shift in thinking about where to host AI/ML processing and the data that goes with it. Over the past few years, using public cloud providers has been the straightforward default. These days, that default is being called into question. Why?

Cost, of course. Many companies have built game-changing AI/ML systems in the cloud, and when the bill arrives at the end of the month, they quickly realize that hosting AI/ML systems, including terabytes or petabytes of data, is expensive. Furthermore, data egress fees (what you pay to move data from your cloud provider to your own data center or to another cloud provider) significantly inflate that bill.
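To see how egress fees add up, here is a minimal back-of-the-envelope sketch. The per-GB rate and data volume below are hypothetical assumptions for illustration, not any provider's actual pricing:

```python
# Rough estimate of monthly egress cost. The default rate is a
# hypothetical placeholder; real egress pricing varies by provider,
# region, and volume tier.

def monthly_egress_cost(terabytes_moved: float, rate_per_gb: float = 0.09) -> float:
    """Estimate the monthly bill for moving data out of a cloud provider."""
    return terabytes_moved * 1024 * rate_per_gb

# Pulling 50 TB of model and training data out each month at $0.09/GB:
cost = monthly_egress_cost(50)
print(f"${cost:,.2f} per month")  # roughly $4,608 per month
```

Plug in your own provider's published egress rates and actual transfer volumes to see where this line item on your bill is headed.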

Companies are looking at other, more cost-effective options, including managed service providers and co-location providers (colos), or even moving these systems into the old server room down the hall. This last group is returning to “owned platforms” largely for two reasons.

First, the cost of traditional computing and storage equipment has fallen dramatically in the past five years or so. If you’ve never used anything but cloud-based systems, let me explain. We used to go to rooms called data centers where we could physically touch our computing equipment – equipment that we had to buy outright before we could use it. I’m only half kidding.

When it comes to rent versus buy, many companies are finding that traditional approaches, even with the burden of maintaining your own hardware and software, are actually much cheaper than their ever-increasing cloud bills.
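The rent-versus-buy question boils down to a break-even calculation: how many months of cloud bills does it take to pay off the hardware? The dollar figures in this sketch are made-up assumptions for illustration only:

```python
# Simple rent-vs-buy break-even sketch. All dollar figures are
# hypothetical assumptions, not vendor quotes.

def breakeven_month(cloud_monthly, hardware_upfront, onprem_monthly):
    """Return the month in which cumulative on-prem spend (up-front
    hardware plus ongoing operations) drops below cumulative cloud
    spend, or None if on-prem never catches up."""
    if onprem_monthly >= cloud_monthly:
        return None  # on-prem running costs alone exceed the cloud bill
    month = 0
    cloud_total, onprem_total = 0.0, hardware_upfront
    while onprem_total >= cloud_total:
        month += 1
        cloud_total += cloud_monthly
        onprem_total += onprem_monthly
    return month

# Hypothetical: a $40K/month cloud bill vs. $350K of hardware
# plus $12K/month for power, space, and staff time.
print(breakeven_month(40_000, 350_000, 12_000))  # prints 13
```

In this invented scenario the hardware pays for itself in just over a year; your own numbers, including depreciation and staffing, will obviously differ.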

Second, many organizations experience latency in the cloud. The slowdown occurs because most of them consume cloud-based systems over the open internet, and the multitenant model means you share processors and storage systems with many other companies at the same time. Episodic latency can translate into several thousand dollars in lost revenue per year, depending on what you do with your cloud-based AI/ML system.
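That revenue claim is easy to sanity-check with simple arithmetic. Every figure below is a hypothetical assumption, chosen only to show the shape of the calculation:

```python
# Hypothetical illustration of how episodic latency adds up.
# All figures are assumptions, not measurements.

requests_per_day = 100_000          # AI/ML inference calls per day
slow_fraction = 0.01                # 1% of calls hit episodic latency
revenue_lost_per_slow_call = 0.02   # $0.02 average impact per slow call

annual_loss = requests_per_day * slow_fraction * revenue_lost_per_slow_call * 365
print(f"${annual_loss:,.0f} lost per year")  # several thousand dollars
```

Even with these modest assumptions the loss lands in the thousands of dollars per year; a higher-traffic or more latency-sensitive system scales the number accordingly.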

Many artificial intelligence/machine learning systems available from cloud service providers are also available on traditional systems. Provided you’re not locked into an AI/ML system that runs only on a single cloud provider, migrating from a cloud provider to an on-premises server can be cheap and fast, more of a lift-and-shift process.

What is the bottom line here? Cloud computing will continue to grow; traditional computing systems, where we own and maintain the hardware, will not grow to the same extent, and that trend will not slow down. However, some systems, especially AI/ML systems that consume large amounts of data and processing and happen to be latency sensitive, will not be cost-effective in the cloud. The same may be true of some larger analytics applications such as data lakes and data lakehouses.

Some companies can save half the annual cost of hosting on a public cloud provider by bringing their AI/ML systems back on-premises. That business case is too compelling to ignore, and not many will.

Cloud computing prices may drop to accommodate workloads that are prohibitively expensive to run on public cloud providers. Until then, many of those workloads may simply not be built there in the first place, which I suspect is happening now. It no longer always makes sense to default to the cloud for AI/ML.

Copyright © 2022 IDG Communications, Inc.