Project Description
Artificial intelligence (AI) is at the core of modern applications. With the advent of massive datasets, the last two decades were dedicated to improving mathematical modeling and training processes; more recently, the focus has shifted towards security, privacy, and trust in AI-assisted solutions. Among a number of viable approaches, federated machine learning (FedML) has proven to be a suitable approach for privacy-preserving machine learning. In this project, our focus will be on security- and privacy-enhancing techniques for federated machine learning. The project addresses security and privacy concerns in federated machine learning arising from model-poisoning and information-leakage attacks. The approach centers on developing new theories and methodologies to achieve two main aims: secure aggregation of local models under poisoning attacks (Aim 1), and private distributed aggregation of local models (Aim 2).
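To make Aim 1 concrete, the following is a minimal sketch (not the project's actual method) contrasting plain federated averaging with a coordinate-wise median, one well-known robust aggregator studied in the model-poisoning literature. All function names and the toy client updates are illustrative.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: the elementwise mean of client updates.

    A single malicious client can skew this arbitrarily.
    """
    return np.mean(np.asarray(updates, dtype=float), axis=0)

def median_aggregate(updates):
    """Coordinate-wise median: a simple poisoning-robust aggregator.

    Tolerates a minority of arbitrarily corrupted client updates.
    """
    return np.median(np.asarray(updates, dtype=float), axis=0)

# Three honest clients roughly agree; one poisoned client sends an
# extreme update to drag the global model away from the consensus.
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
poisoned = honest + [np.array([100.0, -100.0])]

print(fedavg(poisoned))            # mean is pulled far off by the attacker
print(median_aggregate(poisoned))  # median stays near the honest consensus
```

Running the toy example shows the mean dragged to extreme values while the median remains close to the honest clients' updates, which is the intuition behind robust aggregation under poisoning attacks.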
The project will run in close collaboration between Assoc. Prof. Salman Toor from the Division of Scientific Computing as the main supervisor and Assoc. Prof. André Teixeira from the Division of Systems and Control as co-supervisor.
The successful candidates will be integrated into the newly established Graduate School in Cybersecurity at the Department of Information Technology. The Graduate School provides an environment where students and researchers in cybersecurity and related areas work together through a core PhD-level curriculum of joint courses and research activities. Regular seminars are planned, with presentations from the school's participants and by invited speakers. Active participation in the Graduate School's activities is expected.
Announcement and application submission link: https://www.uu.se/en/about-uu/join-us/details/positionId=507539
Submission deadline: 27 May 2022