Geneva Science and Diplomacy Anticipator

GESDA is thrilled to co-organise the “Anticipating the future of artificial intelligence and its impact on people and on society” track on Monday 10 May, as part of the 2021 Applied Machine Learning Days, held online. Tickets are already available here.

Speakers and co-organisers Emmanuel Abbé, professor of mathematical data science, and Rüdiger Urbanke, professor of communication theory, both at EPFL in Lausanne, explain in today’s Geneva Solutions newsletter why laws alone are insufficient to regulate AI. Read their full opinion column:

“The European Commission has just announced that it will propose legislation to stimulate innovation in the area of Artificial Intelligence (AI) and to rein in its potential downsides. It is welcome that the EU is initiating this discussion – it is badly needed. But regulations alone will not suffice.

The most recent AI wave has already had a tremendous economic and societal impact. Want to communicate in any of hundreds of languages? AI makes this possible with automated translation systems. Worried about glaucoma? Dr. AI can see you right away and inspect your retina. Caught in bumper-to-bumper traffic? Let AI worry about starting and stopping your vehicle. And we are only beginning to see the impact that AI will have on the basic and natural sciences, from scientific computing to computational biology (think AlphaFold).

But there is also reason to worry. The application of AI to social networks has been blamed for increased polarisation; AI can reinforce bias and use subliminal techniques to manipulate vulnerable consumers; and it can easily be misused to build the ultimate Orwellian surveillance state. We therefore appreciate that the EU is taking a proactive and forward-looking stance. What future do we want to live in, and how do we shape the path ahead of us? How should one deal with the potential downsides without stymieing the enormous promise?

AI thus presents a tremendous opportunity, but it also carries potentially high risks. And Europe is falling behind on the AI innovation curve. All this has led the EU to consider AI regulation.

The aim of the new regulation is twofold: first, to create legal certainty for companies in the AI space, thereby stimulating innovation; and second, to protect people from the negative consequences of the widespread use of AI.

The approach taken by the European Commission is to place AI applications into four risk categories: “unacceptable”, “high”, “limited”, and “minimal”. With this classification, the EC intends to ban applications that carry an “unacceptable” risk, regulate applications with “high” or “limited” risk, and leave the “minimal” category unregulated.

Self-regulation by companies alone will not suffice. The commercial lure of exploiting some of the darker sides of AI is simply too strong, likely leading to a “race to the bottom” if left unchecked. So how shall we proceed?

It might be helpful to compare two extreme approaches: (1) describe all possible applications and scenarios of AI a priori and decide how to regulate them, and (2) adopt no regulations, but provide efficient and credible mechanisms for legal recourse so that abuses can be addressed and corrected.

The first approach leads to legal certainty, one of the EU’s desired and intended outcomes, whereas the second does not. But it is highly questionable whether the first is feasible. Perhaps ironically, 30 years ago AI itself followed this “logic-based” approach: systems were built by collecting large tables of “if … then …” scenarios. That approach proved unsuccessful. There are simply too many scenarios, most of them as yet unexplored and unknown. And the legislative process moves slowly, whereas AI moves quickly and in unforeseen ways.
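To make the scaling problem concrete, here is a minimal, purely illustrative sketch of such a rule-based “if … then …” system. All rules, field names and risk labels below are invented for this example; they are not taken from any real system or from the text of the EU proposal.

```python
# Toy illustration of the "logic-based" approach: hand-written
# if-then rules mapping an application to a risk verdict.
# All rules and labels are invented for illustration only; they are
# not drawn from any real system or from the EU proposal.

RULES = [
    # (condition, verdict) -- every scenario must be enumerated by hand
    (lambda app: app.get("purpose") == "mass surveillance", "unacceptable"),
    (lambda app: app.get("domain") == "medical diagnosis", "high"),
    (lambda app: app.get("domain") == "chatbot", "limited"),
]

def classify(app: dict) -> str:
    """Return the verdict of the first matching rule, else 'minimal'."""
    for condition, verdict in RULES:
        if condition(app):
            return verdict
    return "minimal"  # anything not foreseen falls through by default

# A scenario nobody wrote a rule for is silently treated as "minimal":
print(classify({"domain": "medical diagnosis"}))         # -> high
print(classify({"domain": "deepfake video generator"}))  # -> minimal
```

The failure mode is visible immediately: every genuinely new application falls through to the default, and keeping the rule table complete would mean amending it at the speed at which AI itself evolves.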

So shall we just give up? On the contrary. A possible way forward is to limit laws to the most obvious and egregious problems. Nobody is in favour of 24/7 surveillance, whether by governments or by companies. But in order to address the quickly changing landscape of challenges and opportunities, it is more useful to establish a catalogue of robust and general principles, together with credible and efficient mechanisms of legal recourse. This could be strengthened by requiring transparency and by using reputation ratings and social influence to focus the use of AI on human well-being and respect for fundamental rights. We have no expertise in law. But we have observed the evolution of AI over the years, and we have been surprised again and again by how quickly and unpredictably things change. A static law is, in our view, no match for AI. We need to unleash equally powerful and flexible forces to keep rapidly evolving technologies in check.

To further discuss the opportunities, risks and regulatory challenges posed by AI, we will moderate a panel at the virtual Applied Machine Learning Days 2021 on 10 May. The event is organised jointly with the Geneva Science and Diplomacy Anticipator (GESDA), a Swiss foundation dedicated to anticipating future science developments and their impact on people, society and the planet. The panel brings together international experts including Eric Horvitz, chief scientific officer at Microsoft; Ken Natsume, deputy director-general, WIPO; Michael I. Jordan, University of California, Berkeley; Jeannette Wing, Columbia University; and Nanjira Sambuli, policy analyst and GESDA Diplomacy Moderator.”

Image: Mike MacKenzie via www.vpnsrus.com (CC BY 2.0)