Why do you not trust your government? Only 45% of citizens in OECD countries trusted their government in 2019. If that figure still holds, some of you reading this article may be reluctant to trust your government to respond to crises such as economic inequality, racial injustice, climate change, and a global pandemic. One has to wonder: how will government responses to the Covid-19 pandemic influence public trust in 2020?
Study after study has shown that a government’s values – responsiveness, integrity, and the fairness and openness of its institutions – are drivers of public trust in government. That more than half of citizens question their trust in governments is a telling metric of the delivery of public services, or of how that delivery is communicated. Often, entrenched processes make it difficult for institutions to deliver the services – and above all, the security – demanded by their citizens. From contact tracing in Switzerland, South Korea, and Australia to ensuring social distancing in Belgium and Singapore, the increased use of artificial intelligence in government responses to Covid-19 raises the question of whether AI can be a vehicle through which to rebuild trust in government – or to tear it down completely.
In a recent study I conducted during confinement earlier this year, I found that security, public-sector, industry, non-profit, and academic professionals all agreed that not using AI risks placing their nation at a competitive disadvantage to others. Whether the technology is to be used offensively, defensively – or, paradoxically, as a classic deterrence mechanism – citizens seem to agree that AI comprises an important set of technologies that nations need to build to protect their systems, and themselves. The problem, however, arises with the misuse of these emerging technologies, which can be used to disrupt our current systems.
Data is the fuel that drives AI algorithms. With policy instruments left to retroactively catch up with the economic incentives behind the development of these technologies, the privacy of our citizens is constantly undermined in the name of delivering better products for them. In developing solutions to better protect our citizens – through managing cyber risk, contact tracing, and innovations in healthcare – we are at the same time undermining their privacy protections. We are misidentifying marginalised populations in the training datasets fed into AI-driven systems and skewing outcomes. We are sharing our data with companies that sell this precious and deeply personal commodity to those who run influence campaigns, build conspiracy theories, and sow disinformation.
A few days ago, my friend bought a toaster oven. She was excited – she said it was a state-of-the-art toaster oven, with several built-in capabilities to deliver what she was promised would be perfectly toasted bread. It was innovative – it had various settings for the different ingredients you planned to put on your sandwich, so that the end result wouldn’t be too soggy. It was supposed to make my friend’s life easier – she often made sandwiches to go, and this was supposed to require minimal effort. A few days into taking the toaster for a spin, my friend complained to me, “I will never trust this brand again. They were supposed to deliver good products, but my bread burns too quickly. I also saw somewhere that this toaster is not very good for your health – something about oils in the toaster that can be poisonous over time.”
“Did you read the manual?” I asked her. I thought it might be a technical problem, something to do with the various fancy settings. “Where did you read about the oil issue?”
“I was frustrated and was looking up problems with this toaster online. I saw a few reviews by people who were frustrated too. Funny – I only saw positive things when I first looked it up. I didn’t read the manual. Too long and boring!”
What do we do when the manual for our AI products is too long and boring – or simply not there?
AI-driven technologies, like most other products, have assistive capacities that are already revolutionising our ways of life. However, it is very easy to erode our trust in these technologies if the default is to deliver convoluted, opaque statements about their purpose and functionality. The ways in which our data is protected should be clearly communicated, with human biases kept in mind: we don’t like to read the fine print. Companies should not take advantage of this fact, because doing so erodes our trust in the automated products governments use to keep citizens safe, fuel innovation, and maximise productivity.
The classic 5Ws – who is doing what, where, when, and why – should not require a doctoral dissertation to unpack when it comes to these technologies. Regulations to protect citizen data in the aftermath of electoral influence campaigns should make use of nudges for good: a choice architecture whose default is not to share everything, but to share only the basics. Citizens should not have to opt in to learn where their data is going – if that is even possible – but should be informed by default, opting out only if they do not wish to know. The default matters – it can determine whether you, as a swing voter, are targeted by far-right extremist groups, and whether you see fact or fiction.
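To make the idea of a “share the basics only” default concrete, here is a minimal sketch in Python. The names and settings are entirely hypothetical – they do not correspond to any real regulation, product, or API – but they illustrate a choice architecture in which nothing beyond the essentials is shared unless the citizen actively opts in.

```python
from dataclasses import dataclass

# Hypothetical illustration of a privacy-preserving choice architecture:
# the default shares only what is needed to deliver the service, and any
# additional data flow requires an explicit, affirmative opt-in.

@dataclass
class ConsentSettings:
    share_basic_service_data: bool = True   # minimum needed to provide the service
    share_location_history: bool = False    # off unless the citizen opts in
    share_with_third_parties: bool = False   # off unless the citizen opts in
    plain_language_notice: bool = True      # explanation shown by default

    def opt_in(self, setting: str) -> None:
        """Enable an optional data flow only after an explicit choice."""
        if not hasattr(self, setting):
            raise ValueError(f"Unknown setting: {setting}")
        setattr(self, setting, True)


# Silence means minimal sharing: the default object is the "nudge for good".
default_consent = ConsentSettings()
```

The design choice being sketched is simply that inaction favours the citizen: anything beyond the basics requires a visible, deliberate step rather than being buried in the fine print.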
So, why do we not trust our governments? Why do we sometimes trust our public services more than the governments that enact and fund them? Is it because government is difficult to understand – a legislative black box that keeps the non-expert out, much like AI algorithms and technologies themselves? Humans are averse to what they find difficult to understand. Classic studies have demonstrated how a lack of information can lead to rumours, and to exploitation by those who turn this information deficit – and the confusion, instinctive dislike, and gossip it breeds – to their advantage. In the same way, governments that poorly communicate their use of AI, where the data is going, and how purpose-limited that data is, invite rampant speculation and an erosion of trust in the very initiatives they build to serve the interests of their own citizens. Trust is built on transparency and information. The harder the toaster manual is to read, the more likely it is to be thrown away.