
Healthy Skepticism Instead of Blind Trust

The young leaders surveyed for this year’s Voices of the Leaders of Tomorrow Report call on their own generation to do more to enable the ethical and inclusive use of new technologies.

From gene modification to nanotechnology and geoengineering, new technologies are associated with increasingly high hopes as well as dangers. This ambivalence is exemplified by two recently published books. On the one hand, in “How to Avoid a Climate Disaster” (2021), Bill Gates pragmatically lays out the most promising technological fixes for the climate crisis. Interestingly, and maybe unintentionally, the metaphor of “Spaceship Earth,” often employed by climate activists to highlight planetary boundaries, may also stimulate the quest for technocentric solutions.

On the other hand, Pulitzer Prize-winning author Elizabeth Kolbert takes a critical stance in “Under a White Sky” (2021), whose title refers to the potential alteration of the spectrum of light by injecting sulphur dioxide into the atmosphere to offset global warming. In her “book about people trying to solve problems created by people trying to solve problems,” she provides many examples of how technological interventions to counter human-made damage to nature have caused new problems, necessitating new fixes. Overreliance on technological fixes may thus only buy us a little time while creating a constant need for further fixes.

That even young, tech-savvy people are skeptical of techno-optimism can be seen in the responses to Elon Musk’s recent announcement on Twitter of a $100 million prize for the best carbon capture technology (Clifford, 2021). Many came up with the same simple solution and posted a picture: a tree.

This distrust of technology as a problem solver for all human challenges is meaningful and important, but so is the discussion about how to use its capabilities. Relying solely on new technologies to fix things, instead of leaving our comfort zone and changing our mindset and behavior, may prove an illusion, but ignoring technological potential is not an alternative either. How do the Leaders of Tomorrow surveyed for this year’s Voices of the Leaders of Tomorrow Report – a collaboration of the Nuremberg Institute for Market Decisions (NIM) and the St. Gallen Symposium – judge the role of technology?

A majority of 62% of the Leaders of Tomorrow believe that new technologies have the potential to solve at least some of humanity’s pressing problems. However, most are not completely convinced, but only cautiously optimistic. A total of 45% “tend to agree,” while only 17% “completely agree” with the statement “New technologies will soon be able to solve many of humanity’s pressing problems.”

While for some Leaders of Tomorrow, hope clearly prevails, others express serious concerns. “As we’ve seen in the pandemic, the only limit to innovation is dedication,” says Kiera O’Brien, a policy entrepreneur from the United States. “Similar advances in medicine hold a lot of promise, as do clean and green technologies for decarbonizing our economy,” she adds. For Seiya Kato, an M&A advisor from Japan, the question of technology’s contribution to solving humanity’s most pressing problems is more ambiguous: “It is interesting to see new technologies solve a social issue. However, at the same time, they tend to create new challenges. For example, the internet solved many problems and increased the efficiency of how people communicate, but created new problems like cybersecurity. Then, new technologies come into play and try to solve cybersecurity issues. The consequence would be that there will always be a problem, and this sequence never ends.”

Criticism of Their Own Generation

The Leaders of Tomorrow are critical of their own generation when it comes to new technologies. As with social media before, they see clear shortcomings in their own generation’s approach toward new technologies. Once again, their peers draw the most criticism for their handling of fake news: a total of 75% either fully or partially agree with the statement “My generation does not do enough to combat the effects of fake facts amplified by new technologies.” A total of 66% confirm a lack of commitment to ethical standards in new technologies (“My generation does not put enough emphasis on ethical standards in new technologies”), and 59% criticize a naive and overly trusting attitude toward artificial intelligence (“My generation is not critical enough of new technologies such as artificial intelligence”).

Measures to Enhance Digital Trust

The potential and actual impact of technologies on society depends not only on their capabilities but also on the level of acceptance. So how can “digital trust” be built and expanded? It is a matter of two dimensions: on the one hand, trust in the effectiveness and functioning of the technology itself, and on the other hand, trust in the norms and rules under which the technologies are used. To put it more concretely: even if all machine processes work well, people might suspect that they are at the mercy of some uncontrolled power and unpredictable masterminds, and will not trust the applications.

To get the Leaders of Tomorrow’s perspective on how to enhance confidence in technology, we provided them with a list of initiatives and (potential) legislation that might (or might not) encourage trust in new technologies. We wanted to know how urgent and effective they consider each of these to be in boosting trust in tech.

Transparency is once again the most important criterion when it comes to trust building. In the context of technology, this means providing easy access to information about how one’s data is used. A total of 49% of the Leaders of Tomorrow find this to be extremely urgent, and a further 33% see it as necessary. The second pillar, rated almost as highly, is education – in the sense of providing a better understanding of the underlying processes of new technologies. The measure “Enhancing education on emerging technologies to make people aware of their benefits and risks” is considered very urgent by 48% and necessary by 34%. In contrast, least important, in terms of both urgency and necessity, are the dismantling of powerful big tech companies and a mandatory commitment of programmers to act only for the common good. The other four proposed measures rank somewhere in between. In descending order of rated urgency, these are:

• Independent supervisory authorities for regulating Big Tech (rated urgent by 35% and necessary by 34%)
• Global agreements on rules (28% and 34%)
• Involving marginalized groups in AI design to prevent biases (27% and 32%)
• The extension of systems to identify potential social biases of AI (22% and 34%)

To put it in a nutshell: to strengthen trust in technology, the Leaders of Tomorrow put the most emphasis on empowering individual responsibility, transparency, and supervision.

Perceived Trustworthiness of AI

We already talked about two basic components of trust: competence and goodwill. Both components are relevant in private relationships as well as for trust in institutions and organizations – in other words, in all relationships in which people are involved on both sides. But trust in technology is different from trust in people. Technology has no consciousness and no emotions – neither good nor bad. Ideally, it is just reliable and objective. Theoretically, this lack of feelings could be trust-promoting, as machines do not possess an inherent inclination for moral evaluation, rivalry, vanity, or revenge.

In practice, however, matters are more complicated. Examples abound of discriminatory algorithms – against minorities, against ethnic groups, against women. For example, the algorithms behind Apple’s newly launched credit card sparked an inquiry in 2019 (Vigdor, 2019): the system had offered men much higher credit limits than women, even when they were married and shared all their bank accounts. And in 2020, Twitter had to apologize for racial bias in its image-cropping algorithm, which is supposed to select the most interesting part of an image; users had discovered that it systematically preferred white over black faces (Hern, 2020). Evidently, algorithms can learn prejudices from humans.
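How a system can inherit such prejudice is easy to demonstrate. The following minimal sketch in Python is purely illustrative – the lending scenario, thresholds, and variable names are invented for this example, not taken from the cases above. It trains a simple model on historically biased approval decisions:

```python
# Illustrative sketch (synthetic data, hypothetical scenario): a model
# trained on biased historical decisions reproduces the bias, even
# without access to the protected attribute itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying repayment ability.
group = rng.integers(0, 2, size=n)
ability = rng.normal(0.0, 1.0, size=n)

# A proxy feature (think: zip code) that correlates with group membership.
proxy = group + rng.normal(0.0, 0.3, size=n)

# Historical approvals were biased: group 1 faced a stricter threshold.
past_approval = (ability > np.where(group == 1, 0.8, -0.8)).astype(int)

# The protected attribute is NOT among the features ...
X = np.column_stack([ability, proxy])
model = LogisticRegression(max_iter=1000).fit(X, past_approval)

# ... yet among applicants of comparable ability, approval rates differ,
# because the proxy lets the model reconstruct the old double standard.
pred = model.predict(X)
comparable = np.abs(ability) < 0.2
for g in (0, 1):
    rate = pred[comparable & (group == g)].mean()
    print(f"group {g}: approval rate {rate:.1%}")
```

Nothing in this code discriminates explicitly; the skew enters through the biased labels and a correlated proxy feature, which is exactly why outcomes like the cases above can be so hard to detect.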

With all these arguments and examples in mind, it is clear that dealing with AI is a complex and emotionally charged issue. At the same time, AI is developing rapidly and is capable of more and more tasks; it has already replaced human work and decision-making in many economic sectors and could replace human beings in many more. In which domains do the Leaders of Tomorrow trust AI’s capabilities, and in which do they want to keep relying on humans?

A majority of the Leaders of Tomorrow (61%) would, as passengers, rather rely on AI than on a human driver. However, this is the only majority in favor of AI in this survey’s “human versus machine” contest. There is parity, at least, on the topic of law enforcement: at least half of the Leaders of Tomorrow would hand automatic monitoring and punishment of violations (e.g., in traffic) over to AI. For all other listed tasks and responsibilities, humans are preferred, albeit with varying degrees of preference.

These results are also supported by recent experimental evidence. In a series of experiments, Berkeley Dietvorst and his colleagues (Dietvorst, Simmons & Massey, 2015) observed that people tend to trust human judgment over algorithms, leading them to coin the term algorithm aversion. In particular, people lose confidence in algorithms once they observe them making a mistake: even when an algorithm still consistently beats human judgment, they then prefer to go with their gut. It seems that when it comes to AI, perfection is expected and errors are not forgiven.

The Leaders of Tomorrow express the lowest level of trust in AI in the area of psychotherapy. Humans are also trusted much more when it comes to jurisdiction. Recruitment is the third domain with little approval; according to the Leaders of Tomorrow, it should preferably not be handed over to AI. What do the domains with the greatest AI skepticism have in common? All are traditionally characterized by direct, personal interaction and a high need for empathy, which sometimes (for better or worse) requires an intuitive expertise that goes beyond the objective data points provided. Evidently, many doubt that AI has the capabilities required for these tasks.

And indeed, the so-called algorithm aversion seems to be task-dependent (Castelo, Bos & Lehmann, 2019). People seem especially reluctant to trust algorithms for tasks that require intuition and empathy (e.g., in one experiment people trusted algorithms more for financial guidance than for dating advice). This finding suggests that in people’s perception, AI may still lack the social and emotional intelligence relevant in domains where the need for such qualities is high and where there are therefore no straightforward criteria for evaluation. The Leaders of Tomorrow seem to share this view, as some comments show.

However, confidence in AI may well increase in the near future. Once AI manages to bridge the uncanny valley – imperfect resemblance to humans leading to eerie feelings and rejection – our relationship with this technology may change. Research has shown that anthropomorphism, the attribution of human characteristics to a non-human agent, can predict responsibility and trust placed on the agent as well as increase social influence by the agent (Waytz, Cacioppo & Epley, 2010). Thus, human appearance and behavior, such as responsive movements and natural voice, of AI interfaces may increase our trust and help overcome barriers to adoption, broadening the domains for which applications are embraced.

From a practical point of view, different levels of acceptance of AI will require different measures to increase trust. While driverless cars and automated law enforcement appear to be ready for testing prototypes with exemplary character, AI-based medical diagnoses and chatbots in service management may need more research to improve user experience. For AI-based recruitment, jurisdiction, and psychotherapy, on the other hand, much deeper research will certainly be needed to understand the reasons for barriers and ways to overcome them.

But whatever becomes possible in the future, AI will not be able to fully replace face-to-face personal interaction for psychological well-being, social calibration, and human trust – and it is questionable whether this is something to strive for in the first place. So instead of just optimizing the human-likeness of machines for interaction, we must not neglect fostering humanity and community between people.

Read the full Voices of the Leaders of Tomorrow Report here for all findings and detailed analysis.
