Federated Learning as an Enabler for Collaborative Security between Not Fully Trusting Distributed Parties
Abstract
Literature shows that trust typically relies on knowledge about the communication partner. Federated learning is an approach for collaboratively improving machine learning models. It allows collaborators to share machine learning models without revealing secrets, as only the abstract models, and not the data used to create them, are shared. Federated learning thereby provides a mechanism to build trust without revealing secrets, such as the specifics of local industrial systems. A fundamental challenge, however, is determining how much trust each contributor deserves when collaboratively optimizing the joint models. Assigning equal trust to every contribution makes it easy for a model to diverge from its optimum, whether caused by errors, bad observations, or cyberattacks. Trust also depends on how much an aggregated model contributes to a party's objectives; for example, a model trained for an OT system is typically useless for monitoring IT systems. This paper outlines first directions for integrating heterogeneous distributed data sources using federated learning methods. As an extended abstract, it presents current research directions and open issues from a cyber-analyst's perspective.
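The trust-weighting problem raised in the abstract can be illustrated with a minimal sketch. This is not the paper's method, only an assumed setup: each party's model update is a flat NumPy parameter vector, each party carries a hypothetical trust score, and aggregation is a trust-weighted average. With equal scores this reduces to plain federated averaging, where a single bad or malicious update can pull the joint model off its optimum.

```python
# Minimal sketch (assumed setup, not the paper's method) of
# trust-weighted model aggregation in federated learning.
import numpy as np

def aggregate(updates: list, trust: list) -> np.ndarray:
    """Weighted average of parameter vectors.

    With equal trust scores this reduces to plain federated
    averaging; a poisoned or erroneous update then skews the joint
    model, which is the divergence risk described in the abstract.
    """
    weights = np.asarray(trust, dtype=float)
    weights /= weights.sum()  # normalize trust scores to sum to 1
    return sum(w * u for w, u in zip(weights, updates))

# Hypothetical example: three parties, one submitting an outlier update.
updates = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([9.0, -5.0])]
equal    = aggregate(updates, [1.0, 1.0, 1.0])  # outlier skews the result
weighted = aggregate(updates, [1.0, 1.0, 0.1])  # outlier down-weighted
```

How the trust scores themselves should be derived, given contributors that are not fully trusting of one another, is precisely one of the open issues the paper identifies.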