Geneva Science and Diplomacy Anticipator

It is widely recognized that the economic and political impacts of machine learning techniques will be profound. Three world-leading experts from civil society, industry and international governance, gathered by GESDA at the Applied Machine Learning Days EPFL 2021 conference, underlined the importance of ensuring that future AI technology is deployed in a way that benefits the many and not only the few, by anticipating its impact on society and global governance. This will require flexible regulation, the anticipation of emerging societal issues, and the active inclusion of the voices of the many who will be affected.

Artificial intelligence has the potential to improve individual lives and human-machine collaboration, pointed out Nanjira Sambuli, an independent policy analyst, member of the UN Secretary-General's High-Level Panel on Digital Cooperation and GESDA Diplomacy Moderator. “But it can take us on divergent paths. It can contribute to greater good or to greater harm – it will depend on how coherent and inclusive its development is. The data used by machine learning is always a simplification of reality, and it embeds not only facts but also opinions. One of the biggest risks with the digital world is that it might exclude some populations further, making them even more invisible.” In a debate led by Emmanuel Abbé, professor of mathematical data sciences at EPFL and GESDA Academic Expert, she underlined the divide between the actors who currently design the technology, mostly within private companies, and the people all over the world who will use it or be affected by it.

Open the conversation

Ken-Ichiro Natsume, Assistant Director General at the World Intellectual Property Organization (WIPO), reported hearing growing concern that technology might actually increase rather than decrease the gap between rich and poor. For Nanjira Sambuli, it is crucial to include much more diverse voices in the conversation about the development of AI and to start formulating different scenarios to anticipate the benefits and harms the new technology could bring.

“The rise of automation has the potential to bring what makes us human to the forefront,” said Eric Horvitz, Chief Scientific Officer at Microsoft. He also pointed at known threats such as massive government surveillance of citizens, information manipulation and cyberattacks, but also at much subtler risks, such as adding new layers of administration that run automatically without human intervention, not because it would add benefits but mainly “because it’s possible, and easy, to do.”

The three experts agreed that some regulation of AI systems will probably be necessary. “What matters is that it does not slow down innovation but rather fosters it by creating the appropriate environment,” noted Ken-Ichiro Natsume. A legal framework requiring the documentation and sharing of data related to the use of artificial intelligence could act as a catalyst helping the field move forward, added Eric Horvitz: “For instance, the industry could learn a lot from the systematic analysis of incidents involving semi-autonomous cars, but this information is currently proprietary, and therefore not shared.” While uniform rules valid across the globe would be desirable, it is not realistic to expect them soon, stressed Ken-Ichiro Natsume: “Regulation struggles to follow the rapid pace of progress in AI. We need to follow a pragmatic route, prioritizing soft law, guidelines and the exchange of best practices.” This echoes a view put forward in a recent column in Geneva Solutions by EPFL professors Rüdiger Urbanke and Emmanuel Abbé, co-organizers of this AMLD session with GESDA: trying to regulate every possible use and abuse of AI might be unfeasible. According to them, it would be “more useful to establish a catalogue of robust and general principles together with credible and efficient mechanisms for legal recourse.”

Another concern raised in the discussion was that governance issues related to AI are often dealt with in silos (AI for transport, AI for health, etc.), while the fundamental issues go undiscussed for lack of appropriate fora.

Don’t forget the low-hanging fruit

While robotics and self-driving cars capture our imagination, they tend to distract from low-hanging fruit with huge potential impact on society, according to Eric Horvitz: “Take human errors in hospitals, which were the third leading cause of death in the US before the Covid-19 pandemic. Systematically documenting them and using this data to train machine learning algorithms could massively improve how health care institutions and professionals learn from their mistakes.”

But the fear of losing one’s job to AI can create resistance against innovation, added Nanjira Sambuli: “We need to develop convincing narratives that underline the added value new technology can bring to society. We need to create trust. It is important that public institutions also drive innovation in AI, not only actors from the private sector.”

She stressed the danger of blindly trusting metrics when implementing technology, as they tend to overlook societal impact: “Not everything that matters gets measured, and not everything that is measured matters. The world is anything but a series of 0s and 1s.”

For Eric Horvitz, the implementation of AI technology cannot be left unchecked. Still, he sees reasons for optimism: “I see many people showing a lot of interest in getting it right. Take education: more and more computer science curricula include socio-technical content, and not as mere add-ons. This gives me hope that we will see the rise of a new cohort of computer science leaders who are aware of the societal issues created by the technology they help design. AI has the potential to assist and augment human intelligence and creativity, through a symphony of intelligence.”

(Text by Daniel Saraga for GESDA)