Confidential Computing Use Cases and Benefits
GPU-accelerated confidential computing has far-reaching implications for AI in enterprise environments. It also addresses privacy concerns that apply to analyzing sensitive data in the public cloud. This is especially important for organizations seeking to gain insights from multi-party data while maintaining maximum privacy.
Another key benefit of Microsoft’s confidential computing offering is that it requires no code changes on the customer side, allowing for seamless adoption. “The confidential computing environments that we build don’t require customers to change a single line of code,” says Bhatia. “You can redeploy from a non-confidential environment to a confidential environment. It’s as simple as selecting a specific VM size that supports confidential computing capabilities.”
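As a rough sketch of what that redeployment might look like with the Azure CLI — the resource group, VM name, image, and size below are illustrative placeholders, not a prescribed configuration; available confidential GPU VM sizes vary by region:

```shell
# Hypothetical example: redeploying to a confidential VM size.
# Only the size and security settings change; the application code
# running inside the VM is untouched.
az vm create \
  --resource-group my-rg \
  --name my-confidential-vm \
  --image Ubuntu2204 \
  --size Standard_NCC40ads_H100_v5 \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --enable-secure-boot true \
  --enable-vtpm true
```

The point of the sketch is that confidentiality is selected at the infrastructure layer (VM size and security type) rather than in application code, which is what makes the migration “as simple as selecting a specific VM size.”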
Industries and use cases that can benefit from advances in confidential computing include:
- Governments and sovereign entities handling sensitive data and intellectual property.
- Healthcare organizations using AI to discover new drugs while maintaining doctor-patient confidentiality.
- Banks and financial firms detecting fraud and money laundering through shared analytics, without revealing sensitive customer information.
- Manufacturers optimizing their supply chains by securely sharing data with partners.
Bhatia also says confidential computing can help facilitate data “clean rooms” for secure analytics in contexts like advertising. “We see a lot of sensitivity around use cases like advertising and how customer data is processed and shared with third parties,” he says. “So in these multi-party computation scenarios, or ‘data clean rooms,’ multiple parties may be merging on a dataset, and no single party has access to the combined dataset. Only authorized code has access.”
The current state and anticipated future of confidential computing
While large language models (LLMs) have gained traction in recent months, companies are seeing early success with a smaller-scale approach: small language models (SLMs), which are more efficient and less resource-intensive for many use cases. “We’re seeing some targeted SLM models that can run on early-released GPUs,” Bhatia says.
This is just the beginning. Microsoft envisions a future that supports larger models and expanded AI scenarios. This progress will see enterprise AI move from a boardroom buzzword to an everyday reality that drives business outcomes. “Starting with SLMs, we’re adding the ability to run larger models using multiple GPUs and multi-node communication,” says Bhatia. “Over time, [the ultimate goal] is to be able to run the largest models the world can produce in a confidential environment.”
Making this happen requires a collaborative effort. Partnerships between major companies like Microsoft and NVIDIA have already facilitated significant progress, and more are coming. Organizations like the Confidential Computing Consortium will also play a key role in advancing the foundational technologies needed to make widespread and secure use of enterprise AI a reality.
“We’re seeing a lot of the important pieces fall into place now,” Bhatia says. “Today, nobody questions why something uses HTTPS. That’s the world we’re moving toward with confidential computing, but it’s not going to happen overnight. It’s definitely a journey, and it’s a journey that NVIDIA and Microsoft are committed to.”
Microsoft Azure customers can start this journey today with Azure Confidential VMs with NVIDIA H100 GPUs. Learn more here.
This content was created by Insights, MIT Technology Review’s custom content division. It was not written by the editorial staff of MIT Technology Review.