Artificial Intelligence (AI) is an emerging concept that facilitates intelligent, automated decision-making, and is thus becoming a prerequisite for the deployment of IoT and Industry 4.0 scenarios, as well as other application areas. While undoubtedly beneficial, AI and its application to automated decision-making, especially in safety-critical deployments such as autonomous vehicles, may open new avenues for manipulation and attack, while also creating new privacy challenges.

When considering security in the context of AI, the duality of this interplay needs to be highlighted. On the one hand, AI can be exploited to manipulate expected outcomes; on the other hand, AI techniques can be used to support security operations, or even to augment adversarial attacks. Before using AI as a tool to support cybersecurity, it is essential to understand what needs to be secured and to develop specific security measures that ensure AI itself is secure and trustworthy.

ENISA is actively working on mapping the AI cybersecurity ecosystem and providing security recommendations for the challenges ahead.

Ad Hoc Working Group

ENISA is in the process of establishing an ad hoc Working Group on AI Security.

You can find the terms of reference and the application form on the page dedicated to the ad hoc Working Group call.
