The first edition of the "AI Days" initiative, organized by the Directorate General for Parliamentary Partnerships for Democracy (DG PART) in collaboration with several European Parliament committees (AFCO, EMPL, IMCO and LIBE), explored in depth the regulatory, ethical and strategic aspects of artificial intelligence governance, with particular attention to the balance between democratic security, innovation and multilateral cooperation.
In a context of global competition, the race for AI makes it increasingly urgent for Europe to consolidate a strategic vision grounded in rights and democratic principles: the AI Act grew out of a deep awareness of the risks that technologies such as deepfakes and automated surveillance pose to democratic transparency.
In this scenario, the European regulatory system has shown a capacity for constant adaptation, as demonstrated by Parliament's vote on the Digital Omnibus on AI, and has announced strategic calls worth more than 200 million euros for the creation of data centers and gigafactories, regarded as key infrastructure for European digital sovereignty.

In recent hours, an agreement was reached in the trilogue, the negotiation between the European institutions on the topic. The main points:
- by 2 August 2027, all European countries will have to establish their own regulatory sandboxes for artificial intelligence, that is, controlled and protected environments where innovative companies can test technologically advanced products, services or business models;
- the obligation to watermark AI-generated content, that is, to apply techniques that allow such content to be identified and traced, has been postponed to 2 December 2026.
AI and human rights: the importance of a multilateral approach
Artificial intelligence is not just a technological issue, but a transformation capable of reshaping economies and political institutions. Given the transnational nature of AI, no country can address its effects alone: joint investments and a multilateral approach are needed to govern its impact.
Without a shared framework, AI risks widening inequalities and further concentrating power, threatening the security and integrity of institutions. Europe plays a central reference role in balancing regulation and innovation, as evidenced by the EU's decisive contribution to the definition of common rules on AI governance within the framework of the Global Digital Compact promoted by the United Nations.

Companies, technological sovereignty and democracy: a focus
In 2025, the European Commission had already launched the AI Continent Action Plan, an industrial action plan aimed at making the European Union a global leader in the development and adoption of artificial intelligence. It is not a law but a strategic plan, intended to build an integrated ecosystem combining innovation, industrial competitiveness, technological sovereignty and respect for fundamental human rights.
The plan rests on an architecture of skills, infrastructure and regulatory simplification, which has already materialized in the creation of 19 "AI factories" built around supercomputers: the focus will now shift to expanding access to these resources for SMEs and start-ups, so as to foster an open and competitive technological ecosystem.
Harmonizing often fragmented rules and reducing bureaucratic burdens can be strategic for companies, especially in a phase of intense global competition. Overall, however, the challenge in the use of new technologies, and of AI in particular, will be to make the rules genuinely applicable and consistent with production needs: a crucial step to support the technological development of companies and strengthen the competitiveness of Italian industry in the international context.
Image source: Pexels