Machine learning (ML) is currently the most developed and the most promising subfield of artificial intelligence for industrial and government infrastructures. By providing new opportunities to solve decision-making problems intelligently and automatically, artificial intelligence (AI) is applied in almost all sectors of our economy.
While the benefits of AI are significant and undeniable, the development of AI also induces new threats and challenges, identified in the ENISA AI Threat Landscape.
Machine learning algorithms are used to give machines the ability to learn from data in order to solve tasks without being explicitly programmed to do so. However, such algorithms need extremely large volumes of data to learn, and this reliance on data exposes them to specific cyber threats.
The Securing Machine Learning Algorithms report presents a taxonomy of ML techniques and core functionalities. The report also includes a mapping of the threats targeting ML techniques and the vulnerabilities of ML algorithms. It provides a list of relevant security controls recommended to enhance cybersecurity in systems relying on ML techniques. One of the challenges highlighted is how to select the security controls to apply without jeopardising the expected level of performance.
The mitigation controls for ML-specific attacks outlined in the report should, in general, be deployed throughout the entire lifecycle of systems and applications that make use of ML.
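One such ML-specific threat is data poisoning, in which an attacker tampers with the training data rather than the deployed system. The following self-contained sketch is purely illustrative (it is not taken from the report, and the classifier, data and numbers are invented for the example): an attacker who can inject mislabelled training points shifts the decision boundary of a toy nearest-centroid classifier and degrades its accuracy on clean data.

```python
# Illustrative sketch only (not from the ENISA report): a data-poisoning
# attack against a toy nearest-centroid classifier. The attacker injects
# far-away points with a false label, dragging one class centroid out of
# place so that clean test points are misclassified.
import random

random.seed(0)

def make_data(n=200):
    # Two 1-D Gaussian clusters: class 0 centred at 0.0, class 1 at 4.0.
    xs, ys = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        xs.append(random.gauss(4.0 * label, 1.0))
        ys.append(label)
    return xs, ys

def train(xs, ys):
    # "Training" = computing the mean (centroid) of each class.
    c0 = [x for x, y in zip(xs, ys) if y == 0]
    c1 = [x for x, y in zip(xs, ys) if y == 1]
    return sum(c0) / len(c0), sum(c1) / len(c1)

def accuracy(model, xs, ys):
    # Predict the class whose centroid is nearest; return fraction correct.
    m0, m1 = model
    preds = [0 if abs(x - m0) < abs(x - m1) else 1 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

train_x, train_y = make_data()
test_x, test_y = make_data()
clean_model = train(train_x, train_y)

# Poisoning: inject 100 points at x = 12.0 falsely labelled as class 0,
# pulling the class-0 centroid past the class-1 centroid.
poisoned_x = train_x + [12.0] * 100
poisoned_y = train_y + [0] * 100
poisoned_model = train(poisoned_x, poisoned_y)

print(f"accuracy with clean training data:    {accuracy(clean_model, test_x, test_y):.2f}")
print(f"accuracy with poisoned training data: {accuracy(poisoned_model, test_x, test_y):.2f}")
```

Security controls that vet the provenance and integrity of training data before it reaches the learning pipeline address exactly this attack surface, which is why such controls must cover the data-collection and training stages of the lifecycle, not only the deployed model.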
Machine Learning Algorithms Taxonomy
Based on desk research and interviews with the experts of the ENISA AI ad-hoc working group, a total of 40 of the most commonly used ML algorithms were identified. The taxonomy developed is based on the analysis of these algorithms.
The non-exhaustive taxonomy devised is intended to support the process of identifying which specific threats target ML algorithms, what the associated vulnerabilities are, and which security controls are needed to address those vulnerabilities.
The report is intended for the following audiences:
- Public/government: EU institutions & agencies, regulatory bodies of Member States, supervisory authorities in data protection, military and intelligence agencies, the law enforcement community, international organisations and national cybersecurity authorities;
- Industry at large, including small & medium enterprises (SMEs) using AI solutions, and operators of essential services;
- The AI technical, academic and research community, AI cybersecurity experts and AI experts such as designers, developers, ML experts, data scientists, etc.;
- Standardisation bodies.
The EU Agency for Cybersecurity plays an increasingly important role in the assessment of Artificial Intelligence (AI) by providing key input for future policies. To this end, the Agency takes part in an open dialogue with the European Commission and EU institutions on AI cybersecurity and related regulatory initiatives.
The Agency set up the ENISA Ad Hoc Working Group on Cybersecurity for Artificial Intelligence last year. The working group supports ENISA in the process of building knowledge on AI cybersecurity. Members of the group come from the European Commission Directorate-General Communications Networks, Content and Technology (DG CONNECT), the European Commission Joint Research Centre (JRC), Europol, the European Defence Agency (EDA), the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security and Justice (eu-LISA), the European Telecommunications Standards Institute (ETSI), as well as academics and industry experts.
Further Information:
ENISA Report - Securing Machine Learning Algorithms – December 2021
ENISA Report - Artificial Intelligence Cybersecurity Challenges
For questions related to the press and interviews, please contact press(at)enisa.europa.eu
Stay updated - subscribe to RSS feeds of both ENISA news items & press releases!