Webinar by Heinrich Böll Stiftung and APRU takes deep dive into Explainable AI
June 1, 2022

On May 25, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU) highlighted that many of the algorithms that power artificial intelligence (AI) are shrouded in opacity, with expert speakers identifying approaches to making AI much more explainable than it is today.

The webinar held under the theme Explainable AI was the second in a joint hbs-APRU series of three webinars on regulating AI. The series comes against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives.

While AI algorithmic designs can enhance the robustness and predictive accuracy of applications, they may embody assumptions, priorities and principles that have not been openly explained to users and operation managers. The proposals of “explainable AI” and “trustworthy AI” are initiatives that seek to foster public trust, informed consent and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups.

“There are many AI success stories, but algorithms are trained on datasets and proxies, and developers too often and unintentionally use datasets with poor representation of the relevant population,” said Liz Sonenberg, Professor of Information Systems at the University of Melbourne, who featured as one of the webinar’s three speakers.

“Explainable AI enables humans to understand why a system decides in a certain way, which is the first step to questioning its fairness,” she added.

Sonenberg explained that the use of AI to advise a judicial decision maker on a criminal defendant’s risk of recidivism, for instance, is a development that should be subject to careful scrutiny. Studies of one such existing AI system suggest that it offers racially biased advice, and while this proposition is contested by others, the concerns raise the important question of how to ensure fairness.

Matthias C. Kettemann, head of the Department for Theory and Future of Law at the University of Innsbruck, pointed out that decisions on AI systems’ explanations should not be left solely to lawyers, technicians, or program designers. Rather, he said, the explanations should be crafted with a holistic approach that investigates what kinds of information people really need.

“People do not need to know all the parameters that shape an AI system’s decision, but they need to know what aspects of the available data influenced those decisions and what can be done about it,” Kettemann said.

“We all have a right to justification if a state or machine influences the way rights and goods are distributed between individuals and societies, and in the next few years, one of the key challenges will be to nurture Explainable AI so that people do not feel powerless against AI-based decisions,” he added.

Brian Lim, Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS), explores in his research how to improve the usability of explainable AI by modeling human factors, and how to apply AI to improve decision making and user engagement towards healthier and safer lifestyles.

Speaking at the webinar, Lim explained that one of the earliest uses of Explainable AI is to identify problems in the available data. Then, he said, the user can investigate whether the AI reasons in a way that follows the standards and conventions of the domain concerned.

“Decisions in the medical domain, for instance, are important because they are a matter of life and death, and the AI should be like the doctors who understand the underlying biological processes and causal mechanisms,” Lim said.

“Explainable AI can help people to interpret their data and situation to find reasonable, justifiable and defensible answers,” he added.

The final webinar will be held on June 15 under the theme Protection of Data Rights for Citizens and Users. The event will address the challenges regulators face in striking a balance between the data rights of citizens and the rights of enterprises and states to make use of data in AI.

Contact Us
Lucia Siu
Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue
Email: Lucia.Siu [at] hk.boell.org

Christina Schönleber
Senior Director, Policy and Research Programs, APRU
Email: policyprograms [at] apru.org
