Apple Workshop on Privacy-Preserving Machine Learning & AI 2026
At Apple, we believe privacy is a fundamental human right. As AI capabilities increase and become more integrated into people’s daily lives, advancing research in privacy-preserving techniques is increasingly important to ensure privacy is protected while users enjoy innovative AI experiences. Apple’s fundamental research has consistently pushed the state of the art in this domain, and earlier this year we hosted the Workshop on Privacy-Preserving Machine Learning & AI. This two-day event brought together Apple researchers and members of the broader research community to discuss the latest in privacy-preserving ML and AI, focusing on three key areas: Private Learning and Statistics, Foundation Models and Privacy, and Attacks and Security.

Presentations and discussions at the workshop explored advances and open questions in privacy and ML, including federated learning, statistical learning, trust models, attacks, privacy accounting, and the unique challenges presented by foundation models. These research areas ground innovation in rigorous privacy and security evaluation, bridging theoretical frameworks with real-world applications. In this post, we share recordings of selected talks and a recap of the publications discussed at the workshop.

Featured Talks

Published Work Presented at the Workshop

Adaptive Methods Are Preferable in High Privacy Settings: An SDE Perspective
by Enea Monzio Compagnoni (University of Basel), Alessandro Stanghellini (University of Basel), Rustem Islamov (University of Basel), Aurelien Lucchi (University of Basel), and Anastasiia Koloskova (University of Zurich)

Captured by Captions: On Memorization and Its Mitigation in CLIP Models
by Wenhao Wang (CISPA), Adam Dziedzic (CISPA), Grace C. Kim (Georgia Institute of Technology), Michael Backes (CISPA), and Franziska Boenisch (CISPA)

Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem
by Apple researchers

Concurrent Composition for Differentially Private Continual Mechanisms
by Monika Henzinger (Institute of Science and Technology, Austria), Roodabeh Safavi (Institute of Science and Technology, Austria), and Salil Vadhan (Harvard University)

Contextual Agent Security: A Policy for Every Purpose
by Lillian Tsai (Google) and Eugene Bagdasarian (Google)

Cram Less to Fit More: Training Data Pruning Improves Fact Memorization
by Jiayuan Ye, Vitaly Feldman, and Kunal Talwar

Demystifying Foreground-Background Memorization in Diffusion Models
by Jimmy Z. Di (University of Waterloo), Yiwei Lu (University of Ottawa), Yaoliang Yu (University of Waterloo), Gautam Kamath (University of Waterloo), Adam Dziedzic (CISPA), and Franziska Boenisch (CISPA)

Efficient and Privacy-Preserving Soft Prompt Transfer for LLMs
by Xun Wang (CISPA), Jing Xu (CISPA), Franziska Boenisch (CISPA), Michael Backes (CISPA), Christopher A. Choquette-Choo (Google DeepMind), and Adam Dziedzic (CISPA)

Efficient Privacy Loss Accounting for Subsampling and Random Allocation
by Vitaly Feldman and Moshe Shenfeld (Hebrew University of Jerusalem; work done while at Apple)

Eyes Off My Data: Exploring Differentially Private Federated Statistics to Support Algorithmic Bias Assessments Across Demographic Groups
by Partnership on AI Staff

Finding NeMo: Localizing Neurons Responsible for Memorization in Diffusion Models
by Dominik Hintersdorf (German Research Center for Artificial Intelligence (DFKI), Technical University of Darmstadt), Lukas Struppek (German Research Center for Artificial Intelligence (DFKI), Technical University of Darmstadt), Kristian Kersting (German Research Center for Artificial Intelligence (DFKI), Technical University of Darmstadt, Hessian Center for AI), Adam Dziedzic (CISPA), and…

