Original post in The Business Times.
More has to be done to ensure that AI is used for social good.
A SILVER lining emerging from Covid-19’s social and economic fallout is the unprecedented application of artificial intelligence (AI) and Big Data technology to aid recovery and enable governments and companies to operate effectively. However, as AI and Big Data are rapidly adopted, their evolution is far outpacing regulatory processes for social equity, privacy, and political accountability, fuelling concern about their possible predatory use.
AI’s impressive achievements, whether in essential R&D for coronavirus diagnostic tools or in helping retailers and manufacturers transform their processes and the global supply chain, do not fully allay anxieties about its perceived dark side.
Public concern about the threats of AI and Big Data ranges from privacy breaches to dystopian takes on the future that anticipate a technological singularity. Meanwhile, there is fairly strong sentiment that tech giants like Facebook, Amazon and Apple have too much unaccountable power. Amid rising antitrust actions in the US and legislative pushback in Europe, other firms like Microsoft, Alibaba and Tencent also risk facing similar accusations.
For all their benefits, breakthrough technologies always engender turbulence. The pervasiveness of AI across all aspects of life, and its control by elites, raise the question of how to ensure its use for social good.
For ordinary citizens, justifiable suspicion of corporate motives can also leave them prey to misinformation. Multilateral organisations have played critical roles in countering false claims and building public trust, but there is more to be done.
AI FOR SOCIAL GOOD
Against this backdrop, APRU (the Association of Pacific Rim Universities), the United Nations ESCAP and Google came together in 2018 to launch an AI for Social Good partnership to bridge the gap between the growing AI research ecosystem and the limited study into AI’s potential to positively transform economies and societies.
Led by Keio University in Japan, the project released its first flagship report in September 2020 with assessments of the current situation and the first-ever research-based policy recommendations on how governments, companies and universities can develop AI responsibly.
Together they concluded that countries effective in establishing enabling policy environments for AI that both protect against possible risks and leverage it for social and environmental good will be positioned to make considerable leaps towards the Sustainable Development Goals (SDGs). These include providing universal healthcare, ensuring a liveable planet, and decent work opportunities for all.
However, countries that do not create this enabling environment risk forgoing the potential upsides of AI and may also bear the brunt of its destructive and destabilising effects: from weaponised misinformation, to escalating inequalities arising from unequal opportunities, to the rapid displacement of entire industries and job classes.
Understanding the long-term implications of fast-moving technologies and calibrating their risks effectively are critical to advancing AI development. Preventing bias and unfair outcomes produced by AI systems is a top priority, while government and private-sector stakeholders should balance data privacy, open data and AI growth.
For governments, it will be tricky to navigate this mix. The risk is that sluggish policy responses will make it impossible to catch up with AI’s increasingly rapid development. We recommend governments establish a lead public agency to guard against policy blind spots. These lead agencies will encourage “data loops” that provide feedback to users on how their data are being used and thus facilitate agile regulation. This is necessary given AI’s inherently fast-changing nature and the emergence of aspects that may not have been obvious even weeks or months earlier.
Governments must also acquire the ability to negotiate with interest groups and weigh ethical considerations. Otherwise, promising socially and environmentally beneficial AI applications, ranging from innovative medical procedures to new transportation options, can be blocked by vested interests or by a poor understanding of the trade-offs between privacy and social impact.
Governments should also strengthen their ability to build and retain local technical know-how. This is essential, given that AI superpowers are built on a critical mass of technical talent that has been trained, attracted to the country, and retained.
DIASPORA OF TALENT
Fortunately, many countries in Asia have a diaspora of talent who have trained in AI at leading universities and worked with leading AI firms. China has shown how to target and attract these overseas Chinese to return home by showcasing economic opportunities and building confidence in the prospects of a successful career and livelihood.
To deal with the use and misuse of advanced technologies such as AI, we need renewed commitment to multilateralism and neutral platforms on which to address critical challenges.
At the multilateral level, the United Nations recently launched Verified, an initiative aimed at delivering trusted information, advice and stories focused on the best of humanity and opportunities to ‘build back better’, in line with the SDGs and the Paris Agreement on climate change. It also invites the public to help counter the spread of Covid-19 misinformation by sharing factual advice with their communities.
The education sector is playing its part to facilitate exchange of ideas among thought leaders, researchers, and policymakers to contribute to the international public policy process. I am hopeful that universities will be able to partner with government, the private sector and the community at large in constructing a technological ecosystem serving the social good.
- The writer is secretary general of APRU (the Association of Pacific Rim Universities)