Managing your Public Cloud AI Projects

A venture capital firm recently made the case that excessive cloud expenditure is causing public companies to forgo billions of dollars' worth of potential market value. The firm advises repatriating cloud resources into a hybrid approach instead. A shift like this can improve a business's bottom line and free up capital to concentrate on new products and growth.

Whether enterprises will follow this advice is uncertain, but one thing is clear: CIOs are demanding greater agility and performance from the infrastructure that supports their businesses. This is especially true when they deploy sophisticated, computationally expensive artificial intelligence and machine learning (AI/ML) technologies to improve their capacity to make data-driven decisions in real time.

To that end, the public cloud has been essential in ushering AI into the mainstream. However, the very factors that made the public cloud a perfect testing ground for AI (elastic pricing and the ease of flexing up or down, among others) are also inhibiting AI from reaching its full potential.

Here are a few considerations for businesses aiming to maximize AI's advantages in their environments.

The cloud is not a one-size-fits-all solution for AI

Data is the fuel that powers AI insights and the lifeblood of the modern business. Because many AI workloads must continuously ingest massive, ever-growing volumes of data, it's critical that infrastructure can handle these demands in a high-performance, cost-effective manner.

IT leaders must weigh a number of variables when determining the best approach to AI at scale. The first is deciding which model is best suited to the particular requirements of modern AI applications: colocation, public cloud, or a hybrid mix.

While the public cloud has been crucial in bringing AI to market, it still presents some difficulties. These include:

  • Vendor lock-in: Most cloud-based services carry some lock-in risk, but many of the cloud-based AI services on the market today are especially platform-specific, each with its own unique quirks and partner integrations. As a result, many businesses consolidate their AI workloads with a single vendor, which makes it difficult and expensive to switch vendors later.
  • Elastic pricing: Because you pay only for what you use, the public cloud is an attractive choice for organizations, especially those seeking to limit their CapEx spending, and using public cloud services sparingly often makes financial sense in the near term. Organizations with little visibility into their cloud consumption, however, frequently discover that they are consuming it by the bucket. At that point, it becomes a stifling tax on innovation.
  • Egress fees: Clients pay nothing to send data into the cloud, but to get that data back out they must pay egress fees, which can quickly add up. Disaster recovery systems, for example, are often deployed across geographic regions to ensure resilience, which means that data must be continuously copied across availability zones or to alternative systems. IT leaders are realizing that the more data they push into the public cloud, the more likely they are to be painted into a financial corner.
  • Data sovereignty: Choosing the best cloud provider requires careful consideration of the sensitivity and localization of the data. As a number of new state-mandated data privacy regulations come into force, it will also be crucial to ensure that all data used for AI in public cloud environments complies with current data privacy standards.
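To make the egress-fee concern concrete, here is a minimal sketch estimating the monthly cost of continuously replicating changed data across regions for disaster recovery. The per-GB rate is a hypothetical placeholder for illustration, not any provider's actual price.

```python
# Rough monthly egress cost for cross-region DR replication.
# The rate below is an assumed placeholder, not a quoted provider price.
EGRESS_RATE_PER_GB = 0.09  # assumed $/GB for inter-region transfer

def monthly_egress_cost(daily_change_gb: float, days: int = 30) -> float:
    """Cost of replicating `daily_change_gb` of changed data per day."""
    return daily_change_gb * days * EGRESS_RATE_PER_GB

# A workload that churns 500 GB of data per day:
print(f"${monthly_egress_cost(500):,.2f}")  # 500 GB/day x 30 days x $0.09/GB
```

Even at a modest rate, continuous replication of a churny dataset turns into a recurring four-figure monthly line item, which is why egress tends to surprise teams that only budgeted for ingest.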

Three issues to consider before implementing AI on the cloud

The economies of scale that public cloud providers bring to the table make them a natural testing ground for today's largest enterprise AI projects. Before investing heavily in the public cloud, IT leaders should consider the following three questions to determine whether it is really their best option.

When does the public cloud cease to be economically viable?

Because you pay only for what you use, public cloud services like AWS and Azure let users scale their AI workloads quickly and affordably. These costs, however, are not always predictable, especially since AI workloads tend to multiply in volume as they voraciously consume more data from many sources to build and improve models. At a small scale, "paying by the drip" is quicker, simpler, and cheaper, but it doesn't take long for those drips to accumulate into buckets and push you into a more expensive pricing tier.
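As an illustration of how drips become buckets, the toy model below tracks pay-per-use spend for a dataset that grows month over month and finds when it overtakes a flat-rate dedicated alternative. All rates and growth figures are invented for illustration only.

```python
# Toy model: monthly pay-per-use spend for a growing AI dataset vs. a
# flat-rate dedicated alternative. All figures are illustrative assumptions.
PER_GB_MONTH = 0.023   # assumed on-demand storage rate, $/GB-month
FLAT_MONTHLY = 2000.0  # assumed fixed monthly cost of dedicated capacity

def months_until_flat_wins(start_gb: float, growth: float) -> int:
    """First month in which pay-per-use spend exceeds the flat rate."""
    data_gb, month = start_gb, 1
    while data_gb * PER_GB_MONTH <= FLAT_MONTHLY:
        data_gb *= 1 + growth  # dataset grows each month
        month += 1
    return month

# 20 TB of data growing 10% per month:
print(months_until_flat_wins(20_000, 0.10))  # -> 17
```

Under these made-up numbers, pay-per-use wins for over a year and then quietly crosses over, which is exactly the kind of drift that goes unnoticed without visibility into consumption.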

Long-term contracts with volume discounts can help reduce the cost of those buckets, but such multi-year commitments rarely make economic sense. For those who want the ease and cost predictability of an OpEx consumption model combined with the dependability of dedicated infrastructure, the rise of AI Compute-as-a-Service outside the public cloud offers options.

Should all AI workloads be treated similarly?

It's critical to keep in mind that AI isn't a zero-sum endeavor; there is often room for both dedicated infrastructure and cloud, or something in between (hybrid). Begin by examining the characteristics of your applications and data, and take the time upfront to understand the precise technological needs of each workload in your environment as well as its anticipated business impact. Then choose an architectural model that lets you tailor the IT resource delivery model to each stage of your AI development journey.
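One way to operationalize that upfront assessment is a simple rubric that maps each workload's characteristics to a delivery model. The traits, thresholds, and categories below are hypothetical illustrations of the idea, not a prescriptive framework.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    is_spiky: bool           # bursty, unpredictable demand
    monthly_data_tb: float   # data ingested per month
    sovereignty_bound: bool  # subject to data-residency rules

def recommend_placement(w: Workload) -> str:
    """Map workload traits to a delivery model (illustrative rules only)."""
    if w.sovereignty_bound:
        return "dedicated"     # keep regulated data on controlled infrastructure
    if w.is_spiky and w.monthly_data_tb < 10:
        return "public cloud"  # elasticity pays off at small, bursty scale
    return "hybrid"            # steady, data-heavy work splits the difference

print(recommend_placement(Workload("model-training", False, 50, False)))  # -> hybrid
```

The point is not these particular rules but the discipline: every workload gets classified before it gets placed, rather than defaulting everything to one destination.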

Which cloud model will allow you to scale your AI deployment?

In AI model training, fresh data must be continually fed into the compute stack to enhance the predictive capabilities of the applications it supports. As a result, the proximity of compute to data repositories has grown in significance as a selection factor. Of course, not every workload demands a dedicated, constant, high-bandwidth connection; for those that do, however, unacceptable network latency can seriously limit their capabilities. Beyond speed concerns, a growing number of data protection laws specify how and when specific data can be accessed and used, and these rules must also be taken into account when choosing a cloud model.
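A quick feasibility check can show whether a given link can even sustain a continuous training-data feed before latency tuning matters. The utilization factor and data volumes below are assumed figures for illustration, not measured values.

```python
# Feasibility check: can a network link keep fresh training data flowing?
# The default utilization factor is an assumption, not a measured value.

def link_is_sufficient(daily_data_gb: float, link_gbps: float,
                       utilization: float = 0.7) -> bool:
    """True if a day's worth of data fits through the link within a day
    at the assumed sustainable utilization."""
    usable_gbps = link_gbps * utilization
    seconds_needed = daily_data_gb * 8 / usable_gbps  # GB -> gigabits
    return seconds_needed <= 24 * 3600

# 5 TB of new training data per day over a 1 Gbps link at 70% utilization:
print(link_is_sufficient(5_000, 1.0))   # -> True
# Double the daily volume and the same link can no longer keep up:
print(link_is_sufficient(10_000, 1.0))  # -> False
```

When the check fails, the options are a fatter pipe or moving compute closer to the data, which is precisely why proximity has become a selection factor.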

AI has become widely used thanks in large part to the public cloud. However, that does not mean every AI application should run in the public cloud. Allocating the necessary time and resources at the beginning of a project to choose the appropriate cloud model significantly reduces the risk of the AI project failing.
