Privacy is a key differentiator for Apple Intelligence, the company’s own AI initiative, and Apple is taking a three-step approach to protecting personal data.
But Apple says we don’t have to take its word for it: the company is taking “extraordinary steps” to ensure that its privacy protections can be fully and independently verified by third-party security researchers.
Apple Intelligence Privacy Starts Here
Apple applies a three-level hierarchy when executing AI features (a code sketch of this flow follows the list):
- As much processing as possible happens on-device, without your data being sent to any servers.
- If external processing power is needed, Apple’s own servers are the next resort.
- If those can’t handle the request, you are asked for permission to use ChatGPT.
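To make that hierarchy concrete, here is a minimal, purely illustrative Swift sketch of the decision flow; the type and property names (AIExecutionTarget, AIRequest, fitsOnDevice, and so on) are hypothetical and not part of any Apple API.

```swift
import Foundation

// Hypothetical targets for the three-level hierarchy described above.
enum AIExecutionTarget {
    case onDevice               // preferred: data never leaves the device
    case privateCloudCompute    // Apple's own servers (PCC)
    case chatGPT                // third party, only with explicit user consent
}

// Hypothetical description of a request and what can handle it.
struct AIRequest {
    let fitsOnDevice: Bool          // can the on-device model handle it?
    let fitsPCC: Bool               // can Apple's own servers handle it?
    let userApprovedChatGPT: Bool   // has the user granted permission?
}

// Route a request down the hierarchy; third-party processing never happens silently.
func route(_ request: AIRequest) -> AIExecutionTarget? {
    if request.fitsOnDevice { return .onDevice }
    if request.fitsPCC { return .privateCloudCompute }
    return request.userApprovedChatGPT ? .chatGPT : nil
}
```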
Apple’s own AI servers have five protections
When a task does go to Apple’s servers, it is handled by an approach the company calls Private Cloud Compute (PCC), a cloud-based AI system built around five safeguards.
Stateless computation on personal data
All personal data sent to PCC is end-to-end encrypted, so not even Apple has access to it. But the company goes further, using an approach called ‘stateless computation’: once processing is complete, your personal data is completely deleted from Apple’s systems. The moment processing finishes, it’s as if the data never existed in the first place.
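As a rough illustration of what stateless computation means for a server-side handler, the Swift sketch below keeps the decrypted request only in local scope and wipes the working buffer before it is released; handleRequest, decrypt, and runModel are hypothetical placeholders rather than real PCC code.

```swift
import Foundation

// Illustrative handler: the request is decrypted, processed entirely in
// memory, and nothing is logged or persisted once the response is returned.
func handleRequest(_ encryptedPayload: Data,
                   decrypt: (Data) -> Data,
                   runModel: (Data) -> Data) -> Data {
    var plaintext = decrypt(encryptedPayload)   // exists only inside this call
    defer {
        // Best-effort wipe of the working copy before the buffer goes away.
        plaintext.resetBytes(in: 0..<plaintext.count)
    }
    return runModel(plaintext)                  // response is computed before the wipe runs
}
```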
Enforceable guarantees
Apple doesn’t rely on a privacy policy; instead, the techniques it uses are not technically capable of leaking personal information. For example, Apple avoids certain load-balancing and troubleshooting techniques because, in some contexts, they could capture user data. Independent security researchers can confirm this.
No privileged runtime access
Another potential security hole in cloud servers is the ability of field engineers to elevate their privileges or bypass protections in order to resolve issues. PCC simply doesn’t include functionality that could be used in this way.
Non-targetability
Even if an attacker had physical access to Apple PCC facilities, there would be no technical means to target individual users’ data.
Verifiable transparency
But it’s the fifth safeguard that goes beyond anything Apple has done before.
Apple says that, in principle, the ‘enforceable guarantees’ step already allows independent security researchers to verify the company’s claims: they can see for themselves what capabilities PCC does and doesn’t have, and so determine what an attacker could and couldn’t achieve.
But the company wants to go further, making its software completely transparent:
When we launch Private Cloud Compute, we will take the extraordinary step of making software images of every production build of PCC publicly available for security research. This promise, too, is an enforceable guarantee: user devices will only be willing to send data to PCC nodes that can cryptographically prove they are running publicly listed software (…)
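In other words, the client enforces the guarantee. A simplified Swift sketch of that rule appears below: a device refuses to talk to a PCC node unless the software measurement the node attests to also appears in the published transparency log. Real attestation is hardware-backed and far more involved; NodeAttestation and publishedMeasurements are hypothetical names.

```swift
import Foundation

// Hypothetical record of what a PCC node claims (and proves) it is running.
struct NodeAttestation {
    let attestedMeasurement: Data   // hash of the software image the node attests to
}

// A device only sends data to nodes whose measurement is publicly listed.
func shouldSendData(to node: NodeAttestation,
                    publishedMeasurements: Set<Data>) -> Bool {
    publishedMeasurements.contains(node.attestedMeasurement)
}
```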
Apple continues: every production Private Cloud Compute software image will be published for independent binary inspection, including the OS, applications, and all relevant executables, which researchers can verify against the measurements in the transparency log (…)
For the first time on an Apple platform, the PCC image will include the sepOS firmware and the iBoot bootloader in plaintext, making it easier than ever for researchers to study these critical components.
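On the researcher side, verification could look something like the sketch below: hash a downloaded PCC software image and check the digest against a measurement published in the transparency log. The real measurements cover a structured boot chain rather than a single flat file, so this is illustrative only.

```swift
import CryptoKit
import Foundation

// Compare the SHA-256 digest of a downloaded image with a logged measurement
// (supplied here as a hex string). Illustrative; not Apple's actual format.
func imageMatchesLog(imageURL: URL, loggedMeasurementHex: String) throws -> Bool {
    let imageData = try Data(contentsOf: imageURL)
    let digest = SHA256.hash(data: imageData)
    let digestHex = digest.map { String(format: "%02x", $0) }.joined()
    return digestHex == loggedMeasurementHex.lowercased()
}
```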
Apple’s security blog post goes into much more detail, and security researchers will no doubt welcome the opportunity to test all of the company’s claims.
Photo by Matthew Henry on Unsplash