Day 3: Geneva Science Diplomacy Week 2025

Participants visited the Geneva Centre for Security Policy, taking a deep dive into the benefits of strategic anticipation, the subtleties of polarity thinking, and the gripping nature of the latest global and emerging risks.

Spending the day with the GCSP at Geneva’s Maison de la Paix, participants focused on strategic anticipation as an action-oriented skillset. Emily Munro, Head of Strategic Anticipation and Senior Advisor, Research & Policy Advice at GCSP, said the idea is to move away from situations “where reaction is the norm” and toward harnessing insights from foresight for decision-making today. The cohort broke into small groups for a science diplomacy exercise built around several future scenarios of global governance, applying foresight to both future-oriented discussion and present action and learning to differentiate trends from weak signals.

“We are thinking about tomorrow with today in mind, taking some distance to have some free thinking about how we want the future to unfold. We are not predicting or eliminating surprise or eliminating the work of crisis managers,” Munro said. “What we want to do is use foresight, use the skills we have, to assess the opportunities and challenges.”

Photos by Marc Bader/GESDA

Christina Orisich, Managing Director of the European Centre for Executive Development and GCSP Associate Fellow, introduced the concept of polarity thinking: the ability to hold two or more opposing ideas in mind and see the value in each. She said it is more conducive to problem-solving than simply weighing risks against opportunities, because it sets aside the “either/or” mentality in favor of a “both/and” lens. Not all challenges are “either/or” problems to be solved, which are time-limited, have an end point, and seem to have a “right” answer; some are “both/and” polarities to be leveraged, which are ongoing, interdependent and have no end point.

At first she envisaged becoming a diplomat, but she took a different path that included the private sector, international organizations, and work on mainstreaming leadership development into international security education. After the March 2016 Brussels attacks, she said, she joined GCSP, determined to advance peace, security and international cooperation, and to help create a brighter future for the world. “Being curious about the other, showing empathy, and the ability to understand the other is what’s important for international cooperation and solving interconnected, complex challenges,” she concluded, drawing loud applause. “And if you work with people from other cultures, ecosystems and backgrounds, like here with scientists and diplomats, it is important to really understand their specific perspectives, their needs, their values, their fears.”

Jean-Marc Rickli, Head of Global and Emerging Risks for GCSP and Director of the Polymath Initiative, and Tobias Knappe, a GCSP research and project officer, conducted a group exercise around the world’s efforts to create global governance for lethal autonomous weapons systems (LAWS). Participants were divided into two working groups: policy makers role playing as scientists, and scientists role playing as policy makers. The groups discussed options for dealing with LAWS, including regulations or an outright ban, and the ethics of people being killed by machines that are making decisions beyond the realm of human intervention.

The group of scientists role-playing as policymakers at first opted to study the materials provided and broke into small group discussions. After Knappe declined a request to facilitate the group, several of the participants with backgrounds in business and policy took the reins and brought the group into a collective discussion. “We don’t like violence, we don’t like the concept of lethal weapons; but we do appreciate dual laws, thinking about technology and the application and impact of technology,” participant Erika Fille Legara, Managing Director and Chief AI & Data Officer of the Philippine Center for AI Research, later summed up on behalf of the group. “Now, if it’s not a complete ban, our proposal is to look at a tiered regulatory act.”

The group of policymakers role-playing as scientists at first went off in the opposite direction, discussing policy options and political considerations. Eventually they came around to the scientific considerations as Connie Bolte, Secretariat and Multilateral Affairs Coordinator for the Belmont Forum, kept notes on a whiteboard. “Ethical considerations, future usage – the discussion began with people concerned about their lane of science, how a ban might affect their funding, then we got into regulation,” she later summed up for the group. “Could we help develop fail-safes? That topic came up quite a bit, as well as discussions around who is to blame if the LAWS are ‘erroneous.’ Not the scientists, right?”

Rickli, who also leads the GCSP delegation at the UN Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems, gave a chilling overview of “exponential thinking”: the exponential growth of emerging AI-assisted technologies and their security implications and applications, such as killer robots, deepfake videos and the potential for new bioattacks. When technologies grow exponentially, he emphasized, the associated risks can grow in a similar fashion, from drones to AI and facial recognition used in mass surveillance and social media algorithms. The quality and quantity of deepfake videos, first used by the porn industry, have surged. “The world’s pace of change is exponential,” he said, explaining that our blind spot comes from having lived in a linear world while today’s changes are exponential. “And that is a major challenge for the type of discussion we’re having today. The technology on drones moved in 10 years from massive drones driven by pilots to much smaller drones increasingly flying in swarms. For governance, this rapid pace of evolution is a major issue.”

Rickli said artificial intelligence is good at making sense of a lot of data, and this is increasingly used on the battlefield. He pointed to numerous instances of government surveillance and big data collection infringing on citizens’ privacy, including the use of new commercially available smart glasses with facial recognition software, as well as the quest for a DNA data drive that could store massive amounts of information in a volume about the size of a sugar cube. Loitering munitions and weaponized drones are changing the battlefields in Europe and the Middle East, and with the addition of AI their accuracy has skyrocketed.

The use of weaponized drones in swarms is an emerging threat, and in 2024 we witnessed the first robots in combat in Ukraine. “We are starting to witness the creation of machines that have the capabilities of humans or even better,” he said, explaining that research into swarming is increasing. “Super swarms imply a high level of autonomy and decreasing human control,” he said.

Algorithms and AI manipulations are being used in ways that have led chatbots to rank higher than physicians in surveys of medical quality and empathy, which shows that machines are learning to mimic empathy. “It means we are on the verge of having machines that can fool human beings,” Rickli said. “The issue we’re facing now is that increasingly the data out there is fake data that is being used to train a frontier model. In addition, fake data can be used to push a certain narrative.” He warned that the new disinformation techniques relying on AI threaten trust in democracy, because a society that can no longer form opinions based on collectively agreed-upon established facts is increasingly vulnerable to polarizing opinions. Lastly, the use of AI to passively read people’s thoughts means that “we have entered officially the era of mindreading. What is the next step? Mind writing. Increasingly, we’re heading to technology that will be able to capture our minds,” he said. “Cognitive warfare is about controlling what and how people think in order to control what they do. It goes beyond information warfare.”

Story by John Heilprin
Photos by Michael Chiribau/UNITAR
