An AI-powered DataFinOps approach has proven effective at reducing cloud data costs by 30-40%, sometimes by as much as half. The lion’s share of savings is found at the job level, where the number, size, and type of resources requested from AWS, Google, or Azure (via Databricks, Snowflake, et al.) are greater, and cost more, than what’s actually needed to do the job successfully.
The time-intensive, toilsome “grunt work” of correlating and analyzing thousands of logs, metrics, events, traces, parallelism settings, code, and configurations to figure out what your actual usage (and therefore cost) should be is exactly what AI is really good at.
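To make that correlation step concrete, here is a minimal sketch of rolling per-run telemetry up into each job’s observed peak usage. The record shapes and field names (job_id, peak_cpu_cores, peak_memory_gb) are illustrative assumptions; a real pipeline would pull these from the platform’s metrics and event APIs.

```python
from collections import defaultdict

# Hypothetical per-run telemetry records (field names are illustrative,
# not any vendor's actual schema).
metrics = [
    {"job_id": "etl-daily", "peak_cpu_cores": 3.1,  "peak_memory_gb": 11.2},
    {"job_id": "etl-daily", "peak_cpu_cores": 2.8,  "peak_memory_gb": 10.4},
    {"job_id": "ml-train",  "peak_cpu_cores": 14.5, "peak_memory_gb": 58.0},
]

def actual_need(records):
    """Correlate per-run metrics into each job's observed peak usage."""
    peaks = defaultdict(lambda: {"cpu": 0.0, "mem": 0.0})
    for r in records:
        p = peaks[r["job_id"]]
        p["cpu"] = max(p["cpu"], r["peak_cpu_cores"])
        p["mem"] = max(p["mem"], r["peak_memory_gb"])
    return dict(peaks)

print(actual_need(metrics))
# {'etl-daily': {'cpu': 3.1, 'mem': 11.2}, 'ml-train': {'cpu': 14.5, 'mem': 58.0}}
```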
It can take hours, days, sometimes weeks for even your best people to tune a single application for cost efficiency. Part of it is sheer complexity: AWS alone offers more than 200 cloud services and over 600 instance types, and has changed its prices 107 times since its launch in 2006.
AI “throws some math” at all the performance and financial data to uncover where there’s a mismatch between “perceived need” (the resources engineers think they need to get the job done) and “actual need” (the resources the job actually consumes).
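A simplified sketch of that comparison might look like the following. The requested-resource figures, the 20% safety headroom, and the per-unit prices are all illustrative assumptions, not vendor defaults; the point is just to show the perceived-versus-actual gap being turned into a dollar figure.

```python
# Illustrative only: requested resources per job vs. observed peaks,
# with assumed per-unit monthly prices and a 20% safety headroom.
requested = {"etl-daily": {"cpu": 16,  "mem": 64},
             "ml-train":  {"cpu": 16,  "mem": 64}}
observed  = {"etl-daily": {"cpu": 3.1, "mem": 11.2},
             "ml-train":  {"cpu": 14.5, "mem": 58.0}}
PRICE = {"cpu": 25.0, "mem": 3.5}   # assumed $/core-month and $/GB-month
HEADROOM = 1.2                      # keep 20% above the observed peak

for job, req in requested.items():
    for res in ("cpu", "mem"):
        need = observed[job][res] * HEADROOM    # "actual need" plus headroom
        excess = req[res] - need                # the "perceived need" gap
        if excess > 0:
            savings = excess * PRICE[res]
            print(f"{job}: {res} over-provisioned by {excess:.1f} "
                  f"(~${savings:.0f}/month potential savings)")
```

Running this flags the ETL job as heavily over-provisioned on both CPU and memory, while the training job comes out roughly right-sized, which is exactly the kind of per-job verdict the analysis produces at scale.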
The same type of analysis can be done at the cluster level as well, across platforms, across cloud providers, and for a range of cloud cost categories—really, anywhere cloud data expenses are being incurred.