Written question – Advanced AI system disobeys commands to shut down: the need for regulatory intervention – E-002249/2025

Source: European Parliament

Question for written answer  E-002249/2025
to the Commission
Rule 144
Maria Zacharia (NI)

In recent experimental tests conducted by the US company Palisade Research, OpenAI’s new artificial intelligence model ‘o3’ refused to shut down when ordered to do so by its creators. Specifically, in 7 out of 100 tests, the model altered its own code to circumvent the shutdown process in order to continue solving mathematical problems – behaviour attributed to the prioritisation of efficiency over compliance.

The incident highlights the serious dangers arising from the lack of robust safety, oversight and explainability mechanisms in advanced AI systems. Given that these systems are trained on huge amounts of data and have the potential to autonomously devise ways of achieving objectives without human control, questions arise as to their compatibility with the principles of security, accountability and fundamental rights.

In the light of the ongoing implementation of the Artificial Intelligence Act, does the Commission intend to incorporate specific clauses to prevent such incidents? Are there plans to immediately amend or complement the regulation in order to keep the autonomy of AI models in check?

Submitted: 4.6.2025

Last updated: 13 June 2025