By Ann Fitz-Gerald
Publication Date: 2026-04-13 13:02:00
As the capacity of artificial intelligence (AI) increases exponentially, concerns about user data privacy also increase.
More and more organizations around the world are adopting so-called “federated learning,” which enables AI models to be trained without centralizing sensitive data. This allows hospitals, banks and government agencies to collaborate while keeping data stored locally – an approach widely regarded as a major step forward in data protection.
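To make the idea concrete, here is a minimal sketch of federated averaging, the basic mechanism behind federated learning: each site trains on its own data and shares only model parameters, never the raw records. The sites, data and model here are purely illustrative, not from any real deployment.

```python
# Toy model: predict y = w * x. Each site holds private (x, y) pairs.

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a site's private data (mean squared error)."""
    grad = sum(2 * (weights * x - y) * x for x, y in local_data) / len(local_data)
    return weights - lr * grad

def federated_round(global_weights, sites):
    """Each site updates locally; the server averages the resulting weights."""
    updates = [local_update(global_weights, data) for data in sites]
    return sum(updates) / len(updates)  # raw data never leaves a site

# Three hypothetical sites (e.g. hospitals), each with y = 2x data
sites = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 2))  # approaches 2.0, the true slope
```

The server only ever sees the averaged weights, which is why this architecture is attractive to institutions that cannot share records directly.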
Federated unlearning builds on this approach: it promises that a user’s data can later be removed from a trained AI system. For example, a hospital could ask its AI system to forget a particular patient’s information.
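One naive way to picture federated unlearning: if the server keeps each site's model update from a training round, it can re-aggregate without the site that asked to be forgotten. Real unlearning schemes are far more involved; the site names and numbers below are purely illustrative.

```python
def aggregate(updates):
    """Average the model updates from all participating sites."""
    return sum(updates) / len(updates)

# Hypothetical per-site updates retained from one training round
updates = {"hospital_a": 1.0, "hospital_b": 3.0, "hospital_c": 2.0}

global_model = aggregate(list(updates.values()))
print(global_model)  # 2.0

# "Unlearn" hospital_b: rebuild the aggregate without its contribution
remaining = [u for site, u in updates.items() if site != "hospital_b"]
forgotten_model = aggregate(remaining)
print(forgotten_model)  # 1.5
```

Whether the forgotten site's influence is truly gone after such a step is exactly the kind of question the research discussed here probes.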
In the European Union, this is enshrined as the “right to be forgotten”. Similar data-deletion rights exist around the world, though with varying legal force and technical interpretations.
But what if the request to forget cannot itself be trusted? Our research shows that while federated unlearning appears to be a natural extension of data rights, it is also…

