APRU is proud to announce the publication of the final synthesis report of the Regulating AI webinar series, jointly organized by the Hong Kong office of the Germany-based Heinrich-Böll-Stiftung (hbs HK) and APRU.
“Regulating AI: Debating Approaches and Perspectives from Asia and Europe” addresses key questions that surround the appropriate regulation of AI including: What constitutes an unacceptable risk? How does AI become explainable? How can data rights be protected without throttling AI’s potential?
The joint synthesis report comes at a critical time, as AI is leaving the lab and rapidly gaining a foothold in our everyday lives. Millions of decisions – many of them invisible – are being driven by AI.
“The project facilitated a fruitful exchange of perspectives from Asia and Europe and allows us to better understand a wide range of emerging approaches to the regulation of AI in different parts of the world,” says Christina Schönleber, APRU’s Chief Strategy Officer and member of the Regulating AI webinar series working group.
Webinar 1, on the theme “Risk-based Approach of AI Regulation,” was moderated by Zora Siebert (hbs Brussels) and featured Toby Walsh (University of New South Wales), Alexandra Geese (Member of European Parliament), and Jiro Kokuryo (Keio University) as speakers. The event highlighted that the EU’s proposed AI Act takes a significant step by attempting to clearly define which types of AI pose unacceptable risks.
Webinar 2, on the theme “Explainable AI,” was moderated by Kal Joffres (Tandemic) and brought in the perspectives of Liz Sonenberg (University of Melbourne), Matthias Kettemann (Hans-Bredow-Institute / HIIG), and Brian Lim (National University of Singapore). Participants agreed that enabling humans to understand why a system makes a particular decision is key to fostering public trust.
Webinar 3, on the theme “Protection of Data Rights for Citizens and Users,” was moderated by Axel Harneit-Sievers (hbs HK), with Sarah Chander (European Digital Rights), M Jae Moon (Yonsei University), and Sankha Som (Tata Consultancy Services) examining the risks arising from both under-regulation and over-regulation.
The synthesis report concludes that while governments are fully capable of banning or restricting entire categories of AI uses, the risks posed by AI are so context-sensitive that regulating them a priori, without regard to context, is a blunt instrument.
The working group furthermore notes that policy discussions on AI have too often focused on individuals’ fundamental rights; they recommend that discussions should be rebalanced for greater consideration of the broader societal impacts of AI.
Finally, the synthesis report warns that policy discussions centred on the risks of AI can sometimes lose sight of the opportunities AI offers for creating a better future.
“AI has the potential to help address human biases in decision-making and deliver a level of explainability that many of today’s institutions cannot, from banks to government agencies,” the working group writes. “The opportunities of AI must be monitored and acted upon as rigorously as the risks.”
Find out more information about Regulating AI here.
Download the report here.