The EU AI Act, poised to be a landmark piece of AI legislation, is currently facing uncertainty. The debate centers on how to regulate ‘foundation’ models – massive AI models like GPT-4, Claude, and Llama.
Recently, the French, German, and Italian governments proposed limited regulation for these foundation models. Many argue that this move reflects heavy lobbying from Big Tech and from open-source companies like Mistral, which is advised by Cédric O, France’s former digital minister. Critics view it as a “power grab” that could weaken the EU AI Act.
On the other side, supporters of regulating foundation models within the EU AI Act are pushing back. Just yesterday, the Future of Life Institute, known for the six-month ‘pause’ letter, released another open letter urging the German government not to exempt foundation models. They argue that such an exemption would harm public safety and European businesses. Signatories include prominent AI researchers like Geoffrey Hinton and Yoshua Bengio, along with AI critic Gary Marcus.
French experts, including Bengio, echoed these concerns in a joint op-ed in Le Monde, opposing what they see as attempts by Big Tech to undermine this crucial legislation in its final phase.
So why is the EU AI Act hitting a roadblock now? Two and a half years after the draft rules were proposed, and despite negotiations being in their final stages, disagreements persist. The EU AI Act focuses on high-risk AI systems, transparency requirements for AI that interacts with humans, and AI systems embedded in regulated products. The current phase, called the trilogue, involves EU lawmakers and member states negotiating the final details. The European Commission had hoped to vote on the AI Act by the end of 2023, avoiding potential political fallout from the 2024 European Parliament elections.
Surprisingly, the recent drama at OpenAI – in which CEO Sam Altman was ousted by the company’s non-profit board, only to be reinstated five days later after two other board members were removed and replaced – sheds light on similar controversies playing out within the EU.
According to OpenAI, CEO Sam Altman and president Greg Brockman – two members of the non-profit board – favored pursuing money-making opportunities as a way to fund the development of AGI. However, the three non-employee board members – Adam D’Angelo, Tasha McCauley, and Helen Toner – had greater concerns about AI safety, reportedly going so far as to prefer shutting down OpenAI over releasing a new technology that could pose serious risks. After winning over chief scientist Ilya Sutskever, the board fired Altman.
All three non-employee members, it turns out, had links to the Effective Altruism movement – which brings us back to the lobbying around the EU AI Act. The Future of Life Institute, led by president Max Tegmark, focuses specifically on AI existential risk and is itself connected to effective altruism. Last week, the Wall Street Journal reported that the effective altruism community has spent large sums promoting the idea that AI poses an existential threat.
But Big Tech, including OpenAI, has been actively involved in lobbying as well. It’s important to remember that Sam Altman, the returning CEO of OpenAI, had sent mixed signals about AI regulation for months, especially in the EU. Back in June, a TIME investigation uncovered that while Altman was publicly advocating for global AI regulation, behind the scenes, OpenAI had lobbied to weaken “significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act.” Documents obtained by TIME from the European Commission through freedom of information requests revealed this effort to reduce regulatory constraints on the company.
Gary Marcus recently pointed to the OpenAI situation as a reason for EU regulators to ensure that Big Tech doesn’t get the chance to regulate itself. Supporting the tiered approach backed by the European Parliament for handling risks linked to foundation models in the EU AI Act, Marcus emphasized on X: “The chaos at OpenAI only serves to highlight the obvious: we can’t trust big tech to self-regulate. Reducing critical parts of the EU AI Act to an exercise in self-regulation would have devastating consequences for the world.”
Brando Benifei, one of the European Parliament lawmakers leading negotiations on the laws, highlighted the recent drama around Altman being ousted from OpenAI and subsequently joining Microsoft. He told Reuters last week, “The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders.”
Whether the EU AI Act could actually fail still remains unclear. German consultant Benedikt Kohn, who published an analysis yesterday, noted that while additional negotiation rounds continue, a deal must be made expeditiously: the trilogue scheduled for December 6 is the last under Spain’s Council presidency.
He added that a failure of the EU AI Act would be a bitter blow for all involved, since the European Union has long portrayed itself as a front-runner in attempting to regulate artificial intelligence.