Apple is using your emails to train Apple Intelligence, and Apple says it's still private
Apple’s new AI training method learns from your emails, without ever seeing them, marking a bold step in merging smarter tech with its strong privacy stance.


- Apr 16, 2025
- Updated Apr 16, 2025 5:52 PM IST
Apple is introducing a new way to train its AI models that aims to boost performance without compromising user privacy. Detailed in a blog post on Apple’s Machine Learning Research website and first reported by Bloomberg, the approach will begin rolling out in beta versions of iOS 18.5 and macOS 15.5.
Previously, Apple relied on synthetic data, such as artificially generated messages, to train AI features like writing tools and email summaries. While this protected user privacy, the company admits the approach struggled to capture how people actually write and summarise content.
The new method allows Apple to privately compare synthetic data with real user content, without accessing or storing any actual user emails.
Here’s how it works: Apple generates thousands of fake emails covering everyday topics. These are converted into “embeddings”, numerical representations of the content, and sent to a small number of devices that have opted in to Apple’s Device Analytics programme.
On each device, a small sample of recent user emails is privately compared with the synthetic messages to find the closest match. The results never leave the device. Using differential privacy, only anonymised signals about the most frequently selected synthetic messages are sent back to Apple.
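To make the mechanics concrete, here is a minimal device-side sketch in Swift. It assumes cosine similarity over the embeddings and a simple randomised-response step as the differential-privacy mechanism; the function names and the parameter p are illustrative assumptions, not Apple’s actual implementation.

```swift
import Foundation

// Hypothetical sketch only: names and the randomised-response parameter p are
// illustrative assumptions, not taken from Apple's Machine Learning Research post.

typealias Embedding = [Double]

// Cosine similarity between two embedding vectors of equal length.
func cosineSimilarity(_ a: Embedding, _ b: Embedding) -> Double {
    var dot = 0.0, normA = 0.0, normB = 0.0
    for i in a.indices {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// For each local email embedding, pick the index of the closest synthetic message.
// This comparison happens entirely on the device; the raw emails never leave it.
func closestSyntheticIndices(localEmails: [Embedding],
                             syntheticMessages: [Embedding]) -> [Int] {
    localEmails.map { email in
        syntheticMessages.indices.max { i, j in
            cosineSimilarity(email, syntheticMessages[i]) <
                cosineSimilarity(email, syntheticMessages[j])
        }!
    }
}

// Local differential privacy via randomised response: with probability p the
// device reports the true index, otherwise a uniformly random one, so no single
// report reveals which synthetic message actually matched a user's email.
func privatisedReport(trueIndex: Int, messageCount: Int, p: Double = 0.75) -> Int {
    Double.random(in: 0..<1) < p ? trueIndex : Int.random(in: 0..<messageCount)
}
```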
The most frequently matched synthetic messages are then used to refine Apple’s AI models, helping to improve the accuracy of outputs such as email summaries while maintaining user anonymity.
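On the aggregation side, a server receiving many such noisy reports can still estimate which synthetic messages were matched most often, without learning anything reliable from any single device. The sketch below assumes the same randomised-response scheme as above; Apple describes only anonymised, aggregated signals, so the debiasing formula here is a generic local-DP illustration rather than Apple’s method.

```swift
// Hypothetical aggregation sketch under the same randomised-response assumption.
// Tallies the noisy indices from many opted-in devices and debiases the counts.
func estimatePopularity(reports: [Int], messageCount: Int, p: Double = 0.75) -> [Double] {
    var observed = [Double](repeating: 0, count: messageCount)
    for index in reports {
        observed[index] += 1
    }
    let n = Double(reports.count)
    let k = Double(messageCount)
    // Under randomised response, E[observed_i] = p * true_i + (1 - p) * n / k,
    // so invert that relationship to estimate how often each synthetic message
    // was genuinely the closest match across devices.
    return observed.map { ($0 - (1 - p) * n / k) / p }
}
```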
While it’s unclear whether this will help Apple catch up with AI leaders such as OpenAI’s ChatGPT or Google’s Gemini 2.0, the privacy-first approach gives Apple a distinct position in the AI race.