

Shaping the Development of Trustworthy AI

As many of the artificial intelligence systems developed so far carry significant risks, the EU has emerged as a key player in defining and shaping trustworthy AI.

Trust in technology is a precarious topic. While the broad goal is to encourage the development and deployment of trustworthy technology, trust in the sense most people use the word must be earned by living up to expectations and commitments over time. Whether technology itself can be trustworthy is a nuanced question, and scholars’ answers differ starkly.

A more apt question may be whether technology fulfils the requirements we expect of it, and thereby earns its place in our broader societal fabric. Defining those requirements is the core threshold question, and a vital line of research, for those working on artificial intelligence (AI) governance today.

Trustworthy AI has recently become the ne plus ultra: the ‘type’ of AI we, as a society, should aim for. And rightly so, as AI that interacts with us and our environment ought to be deployed in a manner that makes both the process and its outcomes worthy of our trust, broadly speaking. Unlike many other aspirational ‘types’ of AI, such as ‘AI for good’ or ‘beneficial AI’, ‘trustworthy AI’ looks as if it is here to stay. This holds especially in the European Union (EU), where it runs as a common thread through the European Commission’s policy making and is grounded in a clear conceptual definition: an AI system that is “lawful, complying with all applicable laws and regulations; ethical, ensuring adherence to ethical principles and values; and robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm” (AI HLEG, 2019).

The EU is emerging as a key player when it comes to developing a framework and governance mechanisms to encourage the development and deployment of ethical and human-centric AI. Indeed, the Commission’s White Paper on AI, published at the beginning of 2020 to pave the way for a legislative proposal, follows in these footsteps and seeks to elevate trustworthy AI from a conceptual idea into the building block of a legal mechanism.

The current proposal defines ‘high-risk’ AI systems by two cumulative criteria: the system is deployed in a high-risk sector, and it constitutes a high-risk application (for example, an AI system analysing medical imaging in a hospital environment). Such systems are required to fulfil a range of legal obligations, largely grounded in the seven key requirements set out by the European Commission’s High-Level Expert Group on AI in its Ethics Guidelines for Trustworthy AI.
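To make the cumulative logic concrete, the sketch below models the two criteria in Python. It is a minimal, hypothetical illustration: the sector list, field names, and function are assumptions made for exposition, not definitions taken from the White Paper.

```python
from dataclasses import dataclass

# Illustrative only: the White Paper does not enumerate sectors this way.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}

@dataclass
class AISystem:
    sector: str                      # sector of deployment
    application: str                 # intended use
    application_is_high_risk: bool   # e.g. analysing medical imaging

def is_high_risk(system: AISystem) -> bool:
    """Both criteria are cumulative: a high-risk sector AND a high-risk use."""
    return system.sector in HIGH_RISK_SECTORS and system.application_is_high_risk

# The article's example: AI analysing medical imaging in a hospital environment.
imaging = AISystem("healthcare", "medical imaging analysis", True)
print(is_high_risk(imaging))   # True -> the proposed legal obligations apply
```

Under this reading, a system that fails either criterion falls outside the category: an appointment-booking chatbot in the same hospital, for instance, operates in a high-risk sector but is not a high-risk application.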

Recent times have taught individuals and democratic societies alike that the AI developed so far is not worthy of blind trust. We are in the middle of an urgent and crucial window in which to meaningfully shape the current ecosystem: fostering appropriate certification, standardisation, forecasting measures, and regulatory or other suitable governance efforts at the European and international level, without further widening the pacing gap between technology policy and the technology itself.

There need to be clear and agreed-upon boundaries denoting which AI systems fulfil the requirements we as a society have set out for them and which do not. Otherwise, the proverbial silver bullet may quickly turn out to be the emperor in his new clothes.
