Source: European Parliament
To strengthen the EU’s capacity to detect, prevent, and combat the use of artificial intelligence (AI) and emerging technologies by organised crime groups, the Commission is taking a multi-pronged approach.
The AI Act[1] requires providers of high-risk AI systems to implement risk management and mitigation measures, with comparable rules applying to general-purpose AI models.
The Horizon Europe programme funds research to equip law enforcement authorities with tools to combat AI-enabled crime. The Commission and the EU Agency for Law Enforcement Cooperation (Europol) work with digital businesses, such as technology and communication companies, to put in place more effective mechanisms for detecting and responding to the criminal abuse of AI technologies.
Furthermore, in line with ProtectEU, the European Internal Security Strategy[2], and in response to the recommendations of the High-Level Group on access to data, the Commission presented a Roadmap[3] setting out the way forward to ensure that law enforcement authorities in the EU have lawful and effective access to data.
The measures in this Roadmap will support better detection, prevention, and investigation of digital crime and the misuse of emerging technologies, including the misuse of AI by criminals and their use of emerging technologies to conceal their digital footprints.
- [1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence.
- [2] Communication on ProtectEU: a European Internal Security Strategy, COM(2025) 148 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52025DC0148.
- [3] Communication on a Roadmap for lawful and effective access to data for law enforcement, COM(2025) 349 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52025DC0349.