- What kind of regulatory regime can put effective checks on misuse or socially dangerous developments without harming technological progress in the field?
- How can accountability of AI-supported decision-making be secured if the details of the process cannot be fully and transparently explained?
- How is it possible, in an environment of large-scale data usage, to safeguard privacy and data protection?
We are holding three joint online expert forums fostering Asia-Europe dialogue on AI regulation and governance. The forums address three critical themes of debate that stand at the frontier of current attempts to develop AI regulatory policy and are likely to constitute future-shaping parameters for how AI will be implemented in global industries and societies. Participants include governmental and non-governmental actors and experts from Asia and Europe involved in the wider process of tech regulation.
Deliverables include three webinars, video recordings, and web articles, followed by the publication of a policy insight brief developed from the proceedings.
- Event 1: A risk-based approach to AI regulation (5 May 2022)
AI applications can be categorized by the level of risk they may pose, with appropriate regulatory restrictions and exemptions specified in the regulatory framework. Which types of AI would constitute an “unacceptable risk” that should be strictly prohibited? How can these areas be defined clearly? What kinds of applications should be exempted? The European Union’s proposed AI Act (April 2021) and the ongoing discussion in 2022 have taken a significant step along this approach; initiatives in the Asia-Pacific will also be discussed. (View an event recording and a news article.)
- Event 2: Explainable AI (25 May 2022)
AI systems may rely on techniques such as neural networks and other machine learning mechanisms, which can enhance the robustness and predictive accuracy of applications. However, how AI systems arrive at their decisions may appear opaque and incomprehensible to general users, non-technical managers, or even technical personnel. Algorithmic design may involve assumptions, priorities and principles that have not been openly explained to users and operations managers. The proposals of “explainable AI” and “trustworthy AI” are initiatives to create AI applications that are transparent, interpretable, and explainable to users and operations managers. These initiatives seek to foster public trust, informed consent and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups. (View an event recording and a news article.)
- Event 3: Data rights and privacy
With increasingly powerful AI applications, enterprises and states are now able to collect, store, access, and analyse data in ways that influence important aspects of life. The challenge for regulators is to strike a balance between the data rights of citizens and users, and the rights of enterprises and states to make use of data in AI. The data rights of individuals, including personal data privacy, informed consent, opt-out rights, and fair use of personal data, should be protected; abusive manipulation of consumer behaviours or public opinion, as well as abusive surveillance, should be limited. Conversely, regulators should also leave reasonable room for robust innovation and effective business strategies, and facilitate the effective operation of government bureaus in delivering public services. (View an event recording and a news article.)
The Association of Pacific Rim Universities (APRU), with experts from its member universities and external partners in the region, has been pursuing debates in the field of AI policy and ethics since 2016. Specifically, APRU, in collaboration with UN ESCAP and Google, set up the AI for Social Good network, which supports governments and key stakeholders in developing insights into how best to design governance approaches that address the challenges associated with AI while maximizing the technology’s potential in the Asia-Pacific region. Its latest activity involves working with specific government agencies in Southeast Asia to identify the regulatory, governance or capability needs they may be facing, and to develop high-impact insights towards suitable country-specific governance frameworks and national capabilities.
The Heinrich Böll Stiftung (hbs), headquartered in Germany with a global network of more than 30 offices, is involved in the discussion of regulatory and governance issues of digitalization, especially through its Brussels, Washington and Hong Kong offices and its head office in Berlin. hbs is networked with relevant actors, especially in Europe, including civil society, members of parliament, policy-makers and other experts involved in the EU’s AI Act initiative.
Find out more about the Regulating AI activities on the hbs website here.