Apple’s latest approach to artificial intelligence has raised questions about privacy, but the company says its methods are designed to keep user data secure. Apple now uses on-device analysis to improve its Apple Intelligence system: devices compare synthetic data against real-world samples stored locally. According to Apple, this information never leaves the device and is not used directly to train its foundation models.
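To make that on-device step concrete, here is a minimal sketch of how a device might score synthetic candidates against local samples, assuming both are represented as embedding vectors. The function names, the use of cosine similarity, and the selection loop are illustrative assumptions, not Apple’s published implementation; only the index of the winning candidate would ever be reported, and only with the differential-privacy noise described next.

```swift
import Foundation

// Illustrative sketch: pick the synthetic candidate that best matches
// the user's local samples. All names and the similarity measure are
// hypothetical; this shows the general idea, not Apple's implementation.

func dot(_ a: [Double], _ b: [Double]) -> Double {
    var sum = 0.0
    for i in 0..<min(a.count, b.count) { sum += a[i] * b[i] }
    return sum
}

func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    dot(a, b) / (sqrt(dot(a, a)) * sqrt(dot(b, b)))
}

// Everything below runs on the device; the raw local samples never leave it.
func bestCandidate(synthetic: [[Double]], localSamples: [[Double]]) -> Int {
    var bestIndex = 0
    var bestScore = -Double.infinity
    for (i, candidate) in synthetic.enumerated() {
        // Average similarity of this candidate to all on-device samples.
        let score = localSamples
            .map { cosineSimilarity(candidate, $0) }
            .reduce(0, +) / Double(localSamples.count)
        if score > bestScore {
            bestScore = score
            bestIndex = i
        }
    }
    return bestIndex
}

// Toy example with 3-dimensional embeddings:
let synthetic = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
let local = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
print("Best candidate index: \(bestCandidate(synthetic: synthetic, localSamples: local))")
```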
The company relies on a privacy technique called differential privacy, which adds random noise to data before it is analyzed. The noise is calibrated so that no individual’s information can be traced back to them, even as Apple learns general trends to improve its AI tools. For users who opt in, only anonymized, aggregated data is used, and Apple says it does not collect or store personal content such as emails or messages.
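As a concrete illustration of the noise step, the sketch below applies the Laplace mechanism, one standard way to implement differential privacy, to a simple usage count before it is reported. The epsilon value, the count query, and every function name here are assumptions chosen for the example; Apple has not published this code.

```swift
import Foundation

// Sketch of the Laplace mechanism. For a count query (sensitivity 1),
// adding noise drawn from Laplace(0, 1/epsilon) makes the reported value
// epsilon-differentially private.

func laplaceNoise(scale: Double) -> Double {
    // A Laplace sample is a signed exponential: draw Exp(scale), flip a coin.
    let u = Double.random(in: Double.ulpOfOne..<1.0)  // avoid log(0)
    let magnitude = -scale * log(u)
    return Bool.random() ? magnitude : -magnitude
}

func privatizedCount(trueCount: Int, epsilon: Double) -> Double {
    // The sensitivity of a count is 1: one user changes it by at most 1,
    // so the noise scale is 1/epsilon.
    Double(trueCount) + laplaceNoise(scale: 1.0 / epsilon)
}

// Example: a device reports a usage count with noise added on-device,
// before anything is transmitted.
print("Reported (noisy) count: \(privatizedCount(trueCount: 42, epsilon: 1.0))")
```

Because any single user shifts the true count by at most one, the noisy reports from many devices still average out to an accurate aggregate while revealing almost nothing about any individual.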
When more complex processing is needed, Apple uses Private Cloud Compute, a system that processes only the necessary data on secure servers and deletes it after the request is completed. Apple also allows independent experts to review its privacy protections, and users can control their participation through privacy settings.
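Apple has published architectural details of Private Cloud Compute but not server code, so the sketch below only illustrates the general process-respond-discard pattern the article describes, with hypothetical types and a toy computation.

```swift
// Generic illustration of stateless, ephemeral request handling; not
// Apple's code. The Request/Response types and the computation are
// hypothetical stand-ins.

struct Request {
    let payload: [Double]   // only the data needed for this one request
}

struct Response {
    let result: Double
}

func handle(_ request: Request) -> Response {
    // Work happens directly on the request's own data; nothing is logged,
    // written to disk, or kept in a long-lived store.
    let mean = request.payload.reduce(0, +) / Double(request.payload.count)
    return Response(result: mean)
    // When this returns, the request's data goes out of scope and is
    // deallocated, so nothing survives the request.
}
```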
Security experts and privacy advocates have noted that Apple’s system differs from those of other tech companies, which often collect and store large amounts of user data for AI training. Apple’s approach is designed to give users the benefits of AI without requiring them to give up control over their personal information. The company says it does not use private personal data or user interactions to train its foundation models, and it applies filters to remove personally identifiable information from public data sources.