Web-based services are a foundation and driver of the ongoing digitalization of our society. They also play a critical role as management and distribution systems for the digital identity information of billions of Internet users. Misuse of digital identities must be reliably prevented to avoid substantial damage to individual users as well as large-scale companies. With the advent of novel and effective artificial intelligence (AI) techniques, interest in their application to IT security has grown continuously, which in turn has inspired several AI-based IT security systems. However, current approaches mainly focus on attack detection and mitigation in isolated domains (e.g., the application or network domain), while attacks against web-based services often build up over time and leave traces throughout the entire system. Consequently, AI-based cross-domain attack detection and mitigation constitute a highly innovative approach to enhancing the capabilities of security systems.
The partners of the BMBF project KIWI contribute to the development and practical testing of AI-supported security management in complex web infrastructures. The project focuses on the application of federated machine learning techniques to merge security-relevant information collected from AI-based detectors co-located in different domains. By combining domain-specific information into an overarching picture of the threat landscape, the federated approach of KIWI makes it possible to draw conclusions about the presence, absence, and nature of attacks on complex web infrastructures. Due to the sensitive nature of digital identities, one important goal of KIWI is to uphold the data sovereignty of all involved domain operators and organizational units during the cross-domain information exchange. Therefore, methods are investigated to restrict the flow of information to the necessary and legally permitted data. Furthermore, since the quality of training data is of vital importance for the effectiveness of AI models, KIWI will also devise a framework for rigorous Data Governance. In the event that models are trained on deliberately corrupted or unsuitable data, it is the framework's obligation to detect the corruption and re-train affected models to ensure the reliable operation of the federated system.
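To illustrate the kind of merging step a federated approach relies on, the sketch below shows a minimal sample-weighted parameter aggregation in the style of the well-known FedAvg algorithm. All names (the domains, the `federated_average` helper, the toy parameter vectors, and the sample counts) are hypothetical assumptions for illustration; the source does not specify which federated aggregation scheme KIWI uses.

```python
import numpy as np

def federated_average(domain_weights, sample_counts):
    """Aggregate model parameters from several domain detectors into one
    global model, weighting each domain by its number of local training
    samples (FedAvg-style aggregation). No raw local data is exchanged,
    only model parameters -- one way to support data sovereignty."""
    total = sum(sample_counts)
    stacked = np.stack(domain_weights)              # (n_domains, n_params)
    coeffs = np.array(sample_counts, float) / total  # per-domain weight
    return coeffs @ stacked                          # weighted average

# Three hypothetical domain detectors (e.g., application, network, and
# identity-management domain) report local parameters and sample counts.
w_app = np.array([0.2, 0.8])
w_net = np.array([0.4, 0.6])
w_idm = np.array([0.3, 0.7])

global_w = federated_average([w_app, w_net, w_idm], [100, 300, 100])
# → array([0.34, 0.66]); the network domain dominates due to its 300 samples
```

In a full system, each domain would train locally for a few epochs, send only its updated parameters to the aggregator, and receive the merged global model back, which is what allows domain-specific evidence to be combined without exposing the underlying data.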