Artificial intelligence: US contradictions on regulation

The US is going through one of the most confusing political periods in its history, and that uncertainty now extends to the effects of the Trump administration’s decisions on the economic policies so dear to Silicon Valley.

This week’s news was the rejection of the federal moratorium that would have prevented individual US states from regulating artificial intelligence.

In recent years, the increasingly frenetic race towards digital innovation has pushed numerous American states towards autonomous regulation of artificial intelligence: California, Vermont and Illinois had already introduced bills to regulate its use. Big tech companies, perceiving this as an impediment to innovation, took action: through Incompas, the organisation that represents many of these tech giants, they made a formal request for a ten-year freeze on state-level AI regulation in the US. The aim was to prevent individual states from adopting AI rules over the next decade that differed from, or were more stringent than, federal regulations. The lobbying initiative certainly had its rationale: legislative fragmentation would undoubtedly have weighed on the growth of some tech companies and on investment across different states.

The association invested significant resources in promoting the moratorium among members of the Senate after the proposal had cleared the House of Representatives, where approval by a large majority marked a first turning point for Silicon Valley.

But the US Senate has now approved the One Big Beautiful Bill Act, the Trump administration’s budget reform, without one of its most controversial measures: the ten-year moratorium that would have prevented individual states from introducing new laws regulating AI was rejected by the Senate with 99 votes against and only one in favour.

Originally planned to last ten years, then reduced to five, the moratorium would have blocked the application of existing and future regulations on AI systems, including those against deepfakes, which are often used for political disinformation or non-consensual pornography.

It was an amendment by Tennessee Republican Senator Marsha Blackburn that dealt the blow to Silicon Valley, which had been counting on its lobbying investments. The moratorium would not only have rendered these protections unenforceable for years; it could also have blocked the application of civil rights laws that currently prohibit algorithmic discrimination, which occurs when automated systems unfairly penalise certain groups of people on the basis of race, gender or income. And it would have overridden consumer privacy laws.

During the same period, the New York State Senate approved the Raise Act, a measure designed to prevent the most advanced AI models, such as those developed by big tech companies, from contributing to disastrous scenarios causing more than 100 deaths or over one billion dollars in damage.

Essentially, the measure obliges developers of so-called frontier AI models (the most powerful models, such as GPT-4 and its successors) to demonstrate that their public release does not pose catastrophic risks. Under this legislation, any AI model that could be used, for example, to design chemical weapons, sabotage critical infrastructure, manipulate information on a massive scale or destabilise financial systems must be treated as a high-risk technology, subject to mandatory audits, reporting and specific responsibilities.

These events seem to contradict the mainstream propaganda in our country, which has until now portrayed Europe as a regulator rather than an innovator. The EU was the first in the world to adopt legislation on artificial intelligence; events in the United States have now overturned that reading.

However, caution is needed: while it is true that the EU was the first political body to legislate on artificial intelligence, its steps have lately seemed faltering. First came Von der Leyen’s competitiveness compass, which focuses heavily on investment in artificial intelligence; then Rearm Europe and the concept of dual use (goods and infrastructure built for civilian purposes that can be converted to non-civilian use): some companies that once extolled the ecological benefits of their work are now turning their attention to defence and artificial intelligence. Dual use is in fact emerging in parallel with Rearm Europe. We explored this topic in depth in this article.

Yet the very European AI sector that could be refinanced thanks to the substantial investments of Rearm Europe now seems to be taking a step backwards. The European Commission did not simply stand by and watch the New York Raise Act: immediately after its approval, it launched a public consultation to gather feedback on the regulation of high-risk artificial intelligence systems. Like the Raise Act, the consultation focuses on two categories of systems: those relevant to product safety under harmonised EU legislation, and those that could have significant impacts on people’s health, safety or fundamental rights. The consultation will run for six weeks, until 18 July 2025, and could delay the implementation of some provisions of the AI Act. In short, European and American AI policy seem to be following strikingly similar legislative paths.

Now that the US moratorium has been rejected, individual states will be free to legislate, especially on online safety, image rights and child protection. We could see something similar in Europe with the transposition of the AI Act by individual Member States. Italy is set to be the first European country to transpose the AI Act, which has now been in force for a year, following the Senate’s approval of draft law 1146/24 at first reading last March. It should be remembered, however, that the European Commission, when examining the Italian bill, had highlighted discrepancies with the AI Act, such as divergent definitions, restrictions on low-risk systems and risks of regulatory overlap. There is therefore a risk of excessive regulatory fragmentation. (Photo by Steve Johnson on Unsplash)

