Concerns about the predatory use of technology, privacy intrusions and worsening social inequalities must be jointly addressed by all stakeholders in society – through sensible regulations, sound ethical norms and international collaboration.
In September, it was reported that Zhu Songchun, an expert in artificial intelligence at UCLA, had been recruited by Peking University. The move was seen as part of the Chinese government’s strategy to become a global leader in AI, amid competition with the US for technological dominance.
In the West, a new US administration has been elected amid anxiety about cyber interference. Tech giants Apple, Facebook, Amazon and Google are facing antitrust accusations in the US, while the European Union has unveiled sweeping legislation to enable regulators to head off bad behaviour by big tech before it happens.
Meanwhile, Shoshana Zuboff’s bestselling book The Age of Surveillance Capitalism has alerted social media users to a new economic order that “claims human experience as free raw material for hidden commercial practices”.
In addition, the public is regularly bombarded with dystopian scenarios (as in Black Mirror) about intelligent machines taking control of society, often in the service of ruling elites or criminals.
The dual character of AI – its promise for social good and its threat to human society through absolute control – has been a familiar theme for some time. Meanwhile, AI systems are evolving rapidly, outpacing the regulatory processes meant to safeguard social equity and privacy.
Especially during a pandemic, the urgent question facing governments, the private sector and universities is how to promote public trust in the beneficial side of AI technologies. One way to build public trust is to deliver for the global common good, beyond national or corporate self-interest.
With the world facing crises ranging from the current pandemic to worsening inequalities and the massive effects of climate change, it is obvious that no single country can solve any of them alone.
The technological advances of AI already hold out promise in everything from medical diagnosis and drug development to creating smart cities and transitioning to a renewable-energy economy. MIT researchers have reportedly developed an app that can diagnose Covid-19 from the sound of a cough recorded on a phone, correctly identifying 98.5 per cent of infections.
A recent report on “AI for Social Good”, co-authored by the UN, Google and the Association of Pacific Rim Universities, concluded that AI can help us “build back better” and improve the quality of life. But it also said “the realisation of social good by AI is effective only when the government adequately sets rules for appropriate use of data”.
With respect to limiting intrusions on individual rights, it said that “the challenge is how to balance the reduction of human rights abuses while not suffocating the beneficial uses”.
These observations go to the core of the problem. Are governments accountable in real ways to their citizens or are they more aligned with the interests of hi-tech monopolies? Who owns the new AI technologies? Are they used for concentrating power and wealth or do they benefit those most in need of them?
The report recommends that governments develop capacities for agile regulation; for negotiating with interest groups to establish ethical norms; for leveraging the private sector for social and environmental good; and for building and retaining local know-how.
While these issues will be approached in different ways in each country, international collaboration will be essential. International organisations, globally connected social movements and enhanced political participation by informed citizens will all be critical in shaping the environment for regulation in the public interest.
At the same time, geopolitical rivalry need not prevent us from building trust and cooperation for the common good.
The Covid-19 crisis has shown that it is possible for governments to move decisively towards the public interest and align new technologies to solutions that benefit everyone. We should not forget that, in January, a team of Chinese and Australian researchers published the first genome of the new virus and the genetic map was made accessible to researchers worldwide.
International organisations such as the World Health Organization and international collaborations by biomedical researchers also play critical roles in building public trust and countering false information.
Universities have played an important role in advancing research cooperation with the corporate sector and in bolstering public confidence that global access takes priority over the profit motive of Big Pharma.
For example, the vaccine developed by Oxford University and AstraZeneca will be made available at cost to developing countries and can be distributed without the need for special freezers.
Peking University and UCLA are cooperating with the National University of Singapore and the University of Sydney to exchange best practices on Covid-19 crisis management.
Competition for international dominance in AI applications also fades when we focus AI’s beneficial uses on common challenges. Global frameworks for cooperation such as the UN 2030 Agenda for Sustainable Development or the Paris Climate Agreement set out the tasks.
Google, for example, has established partnerships with universities and government labs for advanced weather and climate prediction, with one project focusing on communities in India and Bangladesh vulnerable to flooding.
To deal with the use and misuse of advanced technologies like AI, we need a renewed commitment to multilateralism and to neutral platforms on which to address critical challenges.
Universities that collectively exercise independent ethical leadership internationally can also, through external partnerships, help to shape national regulatory regimes for AI that are responsive to the public interest.
Find out more about the UN ESCAP-APRU-Google AI for Social Good project.