Apple, Microsoft, and Google are heralding a new era of what they call artificially intelligent smartphones and computers. They say these devices will automate tasks like editing photos and wishing a friend a happy birthday.
But to make this happen, these companies need more data.
In this new paradigm, your Windows computer takes a screenshot of everything you do every few seconds. Your iPhone stitches together information from across the apps you use. And your Android phone can listen to calls in real time and warn you about scams.
Are you willing to share this information?
These changes have significant implications for our privacy. To deliver new personalized services, companies and their devices need more constant, more intimate access to our data than ever before. In the past, the way we used apps and pulled up files and photos on our phones and computers was relatively siloed. Security experts say AI needs an overview to connect the dots between what we do across apps, websites, and communications.
“Is it safe to give this information to this company?” said Cliff Steinhauer, executive director of the National Cybersecurity Alliance, a nonprofit focused on cybersecurity, speaking about the companies’ AI strategies.
All of this is happening because OpenAI’s ChatGPT shook up the tech industry nearly two years ago. Since then, Apple, Google, Microsoft, and others have overhauled their product strategies, investing billions of dollars in new services under the umbrella term AI. Central to these strategies is the belief that new kinds of computing interfaces, which continually study what you are doing in order to offer assistance, will become indispensable.
Experts say the biggest potential security risk stems from a subtle shift in the way our new devices work. AI can automate complex tasks, such as removing unwanted objects from photos, that sometimes require more computing power than a phone can supply. This means more of your personal data may have to leave your phone to be processed elsewhere.
The information is transmitted to the so-called cloud, a network of servers that process the requests. Once information reaches the cloud, it can be viewed by others, including company employees, malicious actors, and government agencies. And while some of our data has always been stored in the cloud, our most personal, intimate data (photos, messages, and emails) that was once for our eyes only may now be connected to and analyzed by company servers.
Tech companies say they have gone to great lengths to protect people’s data.
It’s important to understand what happens to your information when you use AI tools, so we asked the companies about their data practices and interviewed security experts. Personally, I plan to wait and see whether the technology works well enough before deciding whether sharing my data is worth it.
Here’s what you need to know:
Apple Intelligence
Apple recently announced Apple Intelligence, its suite of AI services and its first foray into the AI race.
The new AI service will be built into the fastest iPhones, iPads, and Macs starting this fall. People can use it to automatically remove unwanted objects from photos, create summaries of web articles, and write responses to text messages and emails. Apple is also overhauling its voice assistant, Siri, to enhance its conversational capabilities and provide access to data across apps.
Introducing Apple Intelligence at Apple’s conference this month, Craig Federighi, the company’s senior vice president of software engineering, showed how it could work. In one demo, Mr. Federighi pulled up an email from a colleague asking him to push a meeting back, but he was supposed to see a play that night in which his daughter was starring. His phone then pulled up his calendar, a document with details about the play, and a maps app to predict whether he would be late to the play if he agreed to the later meeting.
Apple said it is working to process most AI data directly on phones and computers, which would prevent others, including Apple, from accessing the information. But for operations that must be pushed to servers, Apple says it has developed safeguards, including scrambling the data through encryption and deleting it immediately.
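Apple has not published code for these safeguards, but the pattern it describes, preferring on-device processing and scrambling anything that must travel to a server, can be sketched roughly as follows. This is a toy illustration, not Apple’s implementation: the capability check, the helper names, and the use of the third-party cryptography package are all assumptions.

```python
# Toy sketch (not Apple's code) of the routing pattern described above:
# handle a request on-device when possible, and encrypt anything that
# must be sent to a server. Requires: pip install cryptography
from cryptography.fernet import Fernet


def run_on_device(request: str) -> str:
    # Placeholder for a small model that fits on the phone.
    return f"on-device answer to: {request!r}"


def device_can_handle(request: str) -> bool:
    # Hypothetical capability check, e.g. based on task size.
    return len(request) < 200


def send_to_server(encrypted: bytes) -> None:
    # Placeholder for the network call; in this sketch the payload is
    # already scrambled before it ever leaves the device.
    pass


def handle(request: str) -> str:
    if device_can_handle(request):
        return run_on_device(request)  # data never leaves the phone
    key = Fernet.generate_key()        # per-request session key
    token = Fernet(key).encrypt(request.encode())
    send_to_server(token)              # only ciphertext is transmitted
    return "sent to server (encrypted)"


print(handle("summarize this short note"))
```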
Apple said it has also taken steps to prevent its own employees from accessing the data, and that it will allow security researchers to audit its technology to verify that it is keeping its promises.
But Apple isn’t clear about what new Siri requests might be sent to the company’s servers, said Matthew Green, a security researcher and associate professor of computer science at Johns Hopkins University who was briefed by Apple on the new technology. Anything that leaves the device is inherently less secure, he said.
Microsoft’s AI laptop
Microsoft is bringing AI to the good old laptop.
Last week, it began shipping Windows computers called Copilot+ PCs, which start at $1,000. The computers contain a new type of chip and other gear that Microsoft says will keep your data private and secure. The PCs can generate images and rewrite documents, among other new AI-powered features.
The company also introduced Recall, a new system that helps users quickly find documents and files they have worked on, emails they have read, or websites they have browsed. Microsoft compares Recall to having a photographic memory built into your PC.
To use it, you can type a casual phrase such as, “I’m thinking of a video call I had with Joe recently, when he was holding an ‘I Love New York’ coffee mug.” The computer will then find the video call containing those details.
To make that work, Recall takes screenshots of what you do on your computer every five seconds and compiles those images into a searchable database. Because the snapshots are stored and analyzed directly on the PC, Microsoft said it does not review the data or use it to improve its AI.
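Microsoft has not released Recall’s internals, but the mechanism described above, periodic screenshots compiled into a local searchable database, can be sketched in a few lines. The choices below are assumptions, not anything Microsoft has confirmed: the mss library for screen capture, pytesseract for reading text out of the snapshots, and SQLite’s built-in FTS5 full-text index for search.

```python
# Toy sketch of a Recall-like local capture-and-index loop. This is NOT
# Microsoft's code; everything stays in a local SQLite full-text index.
# Requires: pip install mss pytesseract pillow (plus the Tesseract binary)
import sqlite3
import time

import mss
import pytesseract
from PIL import Image

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(path, text)")

with mss.mss() as screen:
    for _ in range(3):                    # a real tool would loop indefinitely
        path = f"snap_{int(time.time())}.png"
        screen.shot(output=path)          # capture the primary monitor
        text = pytesseract.image_to_string(Image.open(path))  # OCR the pixels
        db.execute("INSERT INTO snaps VALUES (?, ?)", (path, text))
        db.commit()
        time.sleep(5)                     # every five seconds, per the article

# Later, a search is just a full-text query against the local index:
rows = db.execute(
    "SELECT path, snippet(snaps, 1, '[', ']', '...', 8) FROM snaps "
    "WHERE snaps MATCH ?",
    ("coffee mug",),
)
for path, excerpt in rows:
    print(path, excerpt)
```

The point of the design, as Microsoft describes it, is visible in the sketch: nothing is uploaded, so the privacy question becomes how well that local database is protected.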
Nonetheless, security researchers warned of potential risks: if the data were hacked, anything a user had typed or viewed could be exposed. In response, Microsoft, which had planned to roll out Recall last week, postponed its release indefinitely.
The PCs come equipped with Microsoft’s new Windows 11 operating system, which has multiple layers of security, said David Weston, a company executive who oversees security.
Google AI
Last month, Google also announced a suite of AI services.
One of the biggest reveals was a new AI-based fraud detector for phone calls. The tool listens to phone calls in real time and notifies you if the caller sounds like a potential scammer (for example, if the caller asks for your bank PIN). Google said people will have to activate the fraud detector, which operates entirely on the phone. That means Google will not listen to your calls.
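Google has not said how the detector works internally. As a deliberately crude illustration only, flagging a live on-device transcript could look like the sketch below; the phrase list and function names are hypothetical, and a real detector would presumably use a trained on-device model rather than keyword matching.

```python
# Hypothetical, deliberately crude illustration of on-device scam flagging:
# scan chunks of a live call transcript for risky phrases. Google's actual
# detector is not public and is presumably a trained model, not a word list.
RISKY_PHRASES = [
    "bank pin",
    "wire the money",
    "gift card",
    "verification code",
]


def flag_scam(transcript_chunk: str) -> list[str]:
    """Return any risky phrases found in this chunk of the transcript."""
    lowered = transcript_chunk.lower()
    return [phrase for phrase in RISKY_PHRASES if phrase in lowered]


# Example: a chunk produced by on-device speech-to-text during a call.
chunk = "To secure your account, please read me your bank PIN now."
hits = flag_scam(chunk)
if hits:
    print(f"Warning: this call may be a scam (mentions: {', '.join(hits)})")
```

Because a check like this runs on the handset, the audio itself never has to leave the phone, which is the privacy property Google is emphasizing.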
Google also announced a feature called Ask Photos, which does require information to be sent to the company’s servers. Users can ask questions like “When did my daughter learn to swim?” and Google Photos will surface the first images of their child swimming.
Google said that in rare cases, employees may review Ask Photos conversations and photo data to address abuse or harm, and that the information may also be used to improve the Photos app. In other words, Google could use your question and the photo of your child swimming to help other parents find images of their own children swimming.
Google said its cloud is locked down with security technologies such as encryption and protocols to limit employees’ access to data.
“Our privacy approach applies whether AI capabilities are on-device or in the cloud,” Suzanne Frey, Google’s trust and privacy executive, said in a statement.
But Mr. Green, the security researcher, said Google’s approach to AI privacy seemed relatively opaque.
“I don’t like the idea of my personal photos and personal searches being leaked to the cloud over which I have no control,” he said.