Tag #AI for Social Good
AI for Social Good
While the AI research ecosystem is growing, there is still limited research into how AI can positively transform economies and societies. To fill this void, United Nations ESCAP, APRU and Google partnered in 2018 to develop a network of regional scholars who formulate policies and strategies that support, advance and maximize AI for social good. The project draws on new insights to develop a set of papers and reports that inform senior policymakers, experts, and governments on how to cultivate an enabling ecosystem. The aim is to foster AI for social good within economies and to identify government approaches that address the challenges associated with AI while maximizing the technology’s potential. Visit the UNESCAP-APRU-Google AI for Social Good Summit, November 2020 here. Find out more about the two AI projects with the Thai Government here. Find out more about the AI project with the Bangladeshi Government here.
Call For Expression of Interest for 'AI For Social Good Project' with the Bangladesh Government
The call for EOIs is closed.
March 30, 2022 - April 24, 2022
Call For Expression of Interest for 'AI For Social Good Project' with the Thai Government
The call for EOIs is closed.
January 28, 2022 - February 22, 2022
‘AI For Social Good: Strengthening Capabilities and Government Frameworks in Asia and the Pacific’ informing AI Policies and Strategies in Bangladesh
January 1, 2022 - June 30, 2022
‘AI For Social Good: Strengthening Capabilities and Government Frameworks in Asia and the Pacific’ informing AI Policies and Strategies in Thailand
January 1, 2022 - June 30, 2022
UN ESCAP-APRU-Google AI for Social Good Summit, November 2020
October 22, 2020 - November 26, 2020
APRU on Bloomberg: The next stage: APRU-Google-UN ESCAP AI for Social Good Project now working directly with government agencies
Original post on Bloomberg. The AI for Social Good Project: Strengthening AI Capabilities and Governing Frameworks in Asia and the Pacific has recently passed the milestone of onboarding two key government agencies. The project is the latest collaboration between the Association of Pacific Rim Universities (APRU), UN ESCAP, and Google.org; it commenced in mid-2021 and will run until the end of 2023. Over the past year, meetings and workshops have been held with government agencies from Thailand and Bangladesh. The confirmed government partners are the Office of National Higher Education, Science, Research and Innovation Policy Council (NXPO) of Thailand, working in close collaboration with the National Electronics and Computer Technology Center (NECTEC) of the National Science and Technology Development Agency and the Institute of Field Robotics (FIBO) at King Mongkut’s University of Technology Thonburi, and the Bangladesh Aspire to Innovate (a2i) Programme. NXPO and a2i are affiliated with Thailand’s Ministry of Higher Education, Science, Research and Innovation and with the ICT Division and Cabinet Division of Bangladesh, respectively. The AI for Social Good multi-stakeholder network was initially set up in 2019; among its first milestones was the creation of a platform that convenes leading experts from the region to explore opportunities and challenges for maximizing AI’s benefits for society. After these activities engaged a wide range of policy experts and practitioners, the three project partners decided it was the right time to move to the next stage: working directly with government agencies to apply the insights generated through the collaboration to date. The aim has been to work with government partners in Asia and the Pacific to grow sound and transparent AI ecosystems that support the Sustainable Development Goals.
“Recognizing that AI offers transformative solutions for achieving the SDGs, we are pleased to participate in the AI for Social Good Project to share experience and research insights to develop enabling AI policy frameworks,” said Dr. Kanchana Wanichkorn, NXPO’s Vice President. NXPO identified ‘Poverty Alleviation’ and ‘Medicine and Healthcare’ as two areas of need that are now tackled by two academic project teams. To alleviate poverty and inequality, the Thai government has developed data-driven decision-making systems to improve public access to state welfare programs. The project, under the academic leadership of the Australian National University (ANU) team, will focus on enhancing the human-centered design and public accessibility of these technologies to support successful implementation. In addition, research on AI for medical applications has grown rapidly in Thailand in the past few years. However, progress in moving AI in these areas from research to market has been relatively slow. To support and accelerate the use of AI in medicine and healthcare, the expert team from the National University of Singapore (NUS) will focus their research and analysis on identifying the crucial bottlenecks and gaps that impede the beneficial use of AI. While the two Bangladesh projects both address the need for ‘Continuing and Personalized Pregnancy Monitoring’ (to improve health outcomes during and after birth), they explore different aspects of this key focus area for the government of Bangladesh. Under the leadership of the NUS & KAIST team, the first project investigates challenges in the perception and acceptance of AI in continuous pregnancy monitoring systems. Under the leadership of the University of Hawai‘i team, the second project zeroes in on technological issues in Bangladesh’s healthcare sector and their impact on AI-based data analysis and decision-making.
The academic integrity of both sets of country projects is overseen by Toni Erskine, Professor of International Politics and Director of the Coral Bell School of Asia Pacific Affairs at ANU. Erskine guides both the conception of the research questions in collaboration with the government partners and the delivery of the project outputs, providing support for the four academic teams as they develop their projects. “It has been incredibly rewarding to lead a project that brings together such an impressive, multidisciplinary group of researchers with government agencies that are so passionate about finding solutions to crucial problems – ranging from poverty alleviation to maternal health care,” Erskine said. She added that “the process of working closely with government agencies from the outset to discuss these problems and co-design research questions makes this project unique and genuinely collaborative. I’m very proud to be part of it.” The next steps for the ‘AI for Social Good Project: Strengthening AI Capabilities and Governing Frameworks in Asia and the Pacific’ will be to review and discuss the first complete drafts of the research papers by the four academic teams at a workshop in January, which the partner government agencies from Bangladesh and Thailand will attend. Workshops with both government teams will also follow the presentation of final papers in the second quarter of 2023. To mark the project’s conclusion, a summit with all participants will be held in mid-2023 at the Australian National University.
November 28, 2022
APRU and Government Partners Organize Workshop to Strengthen AI policy in the Asia-Pacific Region
On 31 August 2022, the Office of National Higher Education, Science, Research and Innovation Policy Council (NXPO) of Thailand, in close collaboration with the National Electronics and Computer Technology Center (NECTEC) of the National Science and Technology Development Agency and the Institute of Field Robotics (FIBO) at King Mongkut’s University of Technology Thonburi, co-hosted a workshop to review research proposals driving the “AI for Social Good: Strengthening Capabilities and Government Frameworks in Asia and the Pacific” project. Co-hosts of the event included the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP), the Association of Pacific Rim Universities (APRU), Google.org, the Australian National University (ANU), and leading universities and research institutes in Thailand and abroad. At the workshop, four AI policy research proposals were presented and reviewed by the experts: 1) AI in Pregnancy Monitoring: Technical Challenges for Bangladesh, 2) Mobilizing AI for Maternal Health in Bangladesh, 3) Responsible Data Sharing, AI Innovation and Sandbox Development: Recommendations for Digital Health Governance in Thailand, and 4) Raising Awareness of the Importance of Data Sharing and Exchange to Advance Poverty Alleviation in Thailand. NXPO Policy Specialist Dr. Soontharee Namliwal presented the background and importance of the project in Thailand and introduced the project members from Thailand: NXPO, NECTEC, and FIBO under King Mongkut’s University of Technology Thonburi. Dr. Kommate Jitvanichphaibool, NXPO Senior Division Director, and Dr. Suttipong Thajchayapong, Leader of the NECTEC Strategic Analytics Networks with Machine Learning and AI Research Team, provided additional information on the research and application of AI in Thailand, namely 1) the poverty alleviation policy, 2) the healthcare system and guidelines for data collection, and 3) the Personal Data Protection Act B.E. 2562 and policy and guidelines for personal data protection. The experts also offered useful suggestions to the two projects submitted by Thailand to improve their coverage and maximize the benefits to countries in the Asia-Pacific region. Initiated in 2021, AI for Social Good: Strengthening Capabilities and Government Frameworks in Asia and the Pacific is a collaboration between UNESCAP, APRU and partners. Under this project, UNESCAP and APRU, with funding from Google.org, established a multi-stakeholder network to support the development of country-specific AI governance frameworks and national capabilities. For more information on this project, please visit here. View the article in Thai here.
September 6, 2022
No Easy Answers on Protection of AI Data Rights, Webinar by HBS and APRU Shows
On June 15, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (HBS) and the Association of Pacific Rim Universities (APRU), a consortium of leading research universities in 19 economies of the Pacific Rim, highlighted the complexity of data rights for citizens and users, with risks deriving from both under-regulation and over-regulation of AI applications. The webinar, held under the theme Protection of Data Rights for Citizens and Users, completed a joint HBS-APRU series of three webinars on regulating AI. The series came against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives. While AI enables private sector enterprises and governments to collect, store, access, and analyse data that influence crucial aspects of life, the challenge for regulators is to strike a balance between the data rights of users and the rights of enterprises and governments to make use of AI to improve their services. The webinar’s three speakers, representing an NGO network, academia and the private sector, explained that the fair use of personal data should be protected while abusive manipulation and surveillance should be limited. Conversely, regulators should leave reasonable room for robust innovation and effective business strategies and facilitate the effective operation of government bureaus in delivering public services. “We not only talk about the use of personal data but also a broader range of fundamental rights, such as rights to social protection, non-discrimination and freedom of expression,” said Sarah Chander, Senior Policy Adviser at European Digital Rights (EDRi), a Brussels-based advocacy group leading work on AI policy and specifically the EU AI Act.
“Besides these rights in an individual sense, we have also been looking into AI systems’ impact on our society, impact on broader forms of marginalization, potential invasiveness, as well as economic and social justice, and the starting point of our talks with the different stakeholders is the question of how we can empower people in this context,” she added. M. Jae Moon, Underwood Distinguished Professor and Director of the Institute for Future Government at Yonsei University, whose research focuses on digital government, explained that governments are increasingly driven to implement AI systems by their desire to improve evidence-based policy decision-making. “The availability of personal data is very important to make good decisions for the public interest, and, of course, privacy protection and data security should always be ensured,” Moon said. “Citizens, for their part, are increasingly demanding customized and targeted public services, and balancing these two sides’ demands requires good social consensus,” he added. Moon went on to emphasize that citizens, after consenting to the use of their private data by the government, should be able to track the data usage while also being able to withdraw their consent. Sankha Som, Chief Innovation Evangelist of Tata Consultancy Services, explained that the terms Big Data and AI are often intertwined despite describing very different things. According to Som, Big Data is the ability to manage the input side of AI and to draw insights from the data, whereas AI is about predictions and decision-making. “If you look at how AI systems are built today, there are several different Big Data approaches used on the input side, but there are also processing steps such as data labelling which are AI-specific; and many issues related to AI actually come from these processing steps,” Som said.
“Biases can creep into these processes and, intentionally or unintentionally, cause long-term harm to individuals and groups, so it will take regulation not only on the use of input data but also on end use, while at the same time complying with enterprise-specific policies,” he added. The webinar was moderated by Dr. Axel Harneit-Sievers, Director, Heinrich Böll Stiftung Hong Kong Office. The series’ previous two webinars were held in May under the themes Risk-based Approach of AI Regulation and Explainable AI.
More information
Listen to the recording here. Find out more about the webinar series here.
Contact Us
Lucia Siu, Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue. Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber, Senior Director, Policy and Research Programs, APRU. Email: policyprograms [at] apru.org
June 27, 2022
Webinar by Heinrich Böll Stiftung and APRU takes deep dive into Explainable AI
On May 25, a webinar held jointly by the Hong Kong office of the Heinrich Böll Stiftung (hbs) and the Association of Pacific Rim Universities (APRU) highlighted that many of the algorithms that run artificial intelligence (AI) are shrouded in opaqueness, with expert speakers identifying approaches to making AI much more explainable than it is today. The webinar, held under the theme Explainable AI, was the second in a joint hbs-APRU series of three webinars on regulating AI. The series comes against the backdrop of ever more AI-based systems leaving the laboratory stage and entering our everyday lives. While AI algorithmic designs can enhance the robustness and predictive accuracy of applications, they may involve assumptions, priorities and principles that have not been openly explained to users and operation managers. The proposals of “explainable AI” and “trustworthy AI” are initiatives that seek to foster public trust, informed consent and fair use of AI applications. They also seek to counter algorithmic bias that may work against the interests of underprivileged social groups. “There are many AI success stories, but algorithms are trained on datasets and proxies, and developers too often and unintentionally use datasets with poor representation of the relevant population,” said Liz Sonenberg, Professor of Information Systems at the University of Melbourne, who featured as one of the webinar’s three speakers. “Explainable AI enables humans to understand why a system decides in a certain way, which is the first step to questioning its fairness,” she added. Sonenberg explained that the use of AI to advise a judicial decision maker of a criminal defendant’s risk of recidivism, for instance, is a development that should be subject to careful scrutiny. Studies of one such existing AI system suggest that it offers racially biased advice, and while this proposition is contested by others, these concerns raise the important issue of how to ensure fairness. Matthias C. Kettemann, head of the Department for Theory and Future of Law at the University of Innsbruck, pointed out that decisions on AI systems’ explanations should not be left to lawyers, technicians or program designers alone. Rather, he said, the explanations should be made with a holistic approach that investigates what sorts of information people really need. “People do not need to know all the parameters that shape an AI system’s decision, but they need to know what aspects of the available data influenced those decisions and what can be done about it,” Kettemann said. “We all have the right of justification if a state or machine influences the way rights and goods are distributed between individuals and societies, and in the next few years, one of the key challenges will be to nurture Explainable AI so that people do not feel powerless against AI-based decisions,” he added. Brian Lim, Assistant Professor in the Department of Computer Science at the National University of Singapore (NUS), explores in his research how to improve the usability of explainable AI by modeling human factors and applying AI to improve decision-making and user engagement towards healthier and safer lifestyles. Speaking at the webinar, Lim explained that one of the earliest uses of Explainable AI is to identify problems in the available data. Then, he said, the user can investigate whether the AI reasons in a way that follows the standards and conventions of the domain concerned. “Decisions in the medical domain, for instance, are important because they are a matter of life and death, and the AI should be like the doctors who understand the underlying biological processes and causal mechanisms,” Lim said. “Explainable AI can help people to interpret their data and situation to find reasonable, justifiable and defensible answers,” he added. The final webinar will be held on June 15 under the theme Protection of Data Rights for Citizens and Users.
The event will address the challenges for regulators in striking a balance between the data rights of citizens and the rights of enterprises and states to make use of data in AI.
More information
Listen to the recording here. Find out more about the webinar series here. Register for the June 15th session here.
Contact Us
Lucia Siu, Programme Manager, Heinrich Böll Stiftung, Hong Kong, Asia | Global Dialogue. Email: Lucia.Siu [at] hk.boell.org
Christina Schönleber, Senior Director, Policy and Research Programs, APRU. Email: policyprograms [at] apru.org
June 1, 2022
Heinrich Böll Stiftung and APRU Discuss Risk-based Governance of AI in First Joint Webinar
May 12, 2022
APRU on The Business Times: Safeguarding Our Future With AI Will Need More Regulations
Original post in The Business Times. More has to be done to ensure that AI is used for social good. A SILVER lining emerging from Covid-19’s social and economic fallout is the unprecedented application of artificial intelligence (AI) and Big Data technology to aid recovery and enable governments and companies to operate effectively. However, as AI and Big Data are rapidly adopted, their evolution is far outpacing regulatory processes for social equity, privacy, and political accountability, fuelling concern about their possible predatory use. Whether contributing to essential R&D for coronavirus diagnostic tools or helping retailers and manufacturers transform their processes and the global supply chain, AI’s impressive achievements do not fully allay anxieties around its perceived dark side. Public concern about the threats of AI and Big Data ranges from privacy breaches to dystopian takes on the future that anticipate a technological singularity. Meanwhile, there is fairly strong sentiment that tech giants like Facebook, Amazon and Apple have too much unaccountable power. Amid rising antitrust actions in the US and legislative pushback in Europe, other firms like Microsoft, Alibaba and Tencent also risk facing similar accusations. Despite their advancements, breakthrough technologies always engender turbulence. The pervasiveness of AI across all aspects of life, and its control by elites, raises the question of how to ensure its use for social good. For the ordinary citizen, justifiable suspicion of corporate motives can also render them prey to misinformation. Multilateral organisations have played critical roles in countering false claims and building public trust, but there is more to be done.
AI FOR SOCIAL GOOD
Against this backdrop, APRU (the Association of Pacific Rim Universities), the United Nations ESCAP and Google came together in 2018 to launch an AI for Social Good partnership to bridge the gap between the growing AI research ecosystem and the limited study of AI’s potential to positively transform economies and societies. Led by Keio University in Japan, the project released its first flagship report in September 2020, with assessments of the current situation and the first-ever research-based policy recommendations on how governments, companies and universities can develop AI responsibly. Together, they concluded that countries effective in establishing enabling policy environments for AI, ones that both protect against possible risks and leverage the technology for social and environmental good, will be positioned to make considerable leaps towards the Sustainable Development Goals (SDGs). These include providing universal healthcare, ensuring a liveable planet, and securing decent work opportunities for all. However, countries that do not create this enabling environment risk forgoing the potential upsides of AI and may also bear the brunt of its destructive and destabilising effects: from weaponised misinformation, to escalating inequalities arising from unequal opportunities, to the rapid displacement of entire industries and job classes.
WAY FORWARD
Understanding the long-term implications of fast-moving technologies and effectively calibrating risks is critical in advancing AI development. Preventing bias and unfair outcomes produced by AI systems is a top priority, while government and private sector stakeholders should address the balance between data privacy, open data and AI growth. For governments, it will be tricky to navigate this mix. The risk is that sluggish policy responses will make it impossible to catch up with AI’s increasingly rapid development. We recommend governments establish a lead public agency to guard against policy blind spots.
These lead agencies will encourage “data loops” that provide feedback to users on how their data are being used and thus facilitate agile regulation. This is necessary because of AI’s inherently fast-changing nature and the emergence of aspects that may not have been obvious even weeks or months earlier. Governments must also acquire the ability to negotiate with interest groups and to weigh ethical considerations. Otherwise, progress on promising socially and environmentally beneficial AI applications, ranging from innovative medical procedures to new transportation options, can be blocked by vested interests or a poor understanding of the trade-offs between privacy and social impact. Governments should also strengthen their ability to build and retain local technical know-how. This is essential, given that AI superpower countries are built on a critical mass of technical talent that has been trained, attracted to the country, and retained.
DIASPORA OF TALENT
Fortunately, many countries in Asia have a diaspora of talent who have trained in AI at leading universities and worked with leading AI firms. China has shown how to attract overseas Chinese talent back home by showcasing economic opportunities and building confidence in the prospects of a successful career and livelihood. Ultimately, for any emerging technology to be successful, gaining and maintaining public trust is crucial. Covid-19 contact-tracing applications are a good case in point, as transparency is key to gaining and maintaining public trust in their deployment. With increased concerns about data privacy, governments can explain to the public the benefits and details of how contact-tracing technology works, as well as the relevant privacy policies and laws that protect data. To deal with the use and misuse of advanced technologies such as AI, we need renewed commitment to multilateralism and neutral platforms on which to address critical challenges.
At the next level, the United Nations recently launched Verified, an initiative aimed at delivering trusted information, advice and stories focused on the best of humanity and opportunities to ‘build back better’, in line with the SDGs and the Paris agreement on climate change. It also invites the public to help counter the spread of Covid-19 misinformation by sharing factual advice with their communities. The education sector is playing its part to facilitate exchange of ideas among thought leaders, researchers, and policymakers to contribute to the international public policy process. I am hopeful that universities will be able to partner with government, the private sector and the community at large in constructing a technological ecosystem serving the social good. The writer is secretary general of APRU (the Association of Pacific Rim Universities)
March 18, 2021
APRU on South China Morning Post: Governments, business and academia must join hands to build trust in AI’s potential for good
By Christopher Tremewan. December 31, 2020. Original post in SCMP. Concerns about the predatory use of technology, privacy intrusions and worsening social inequalities must be jointly addressed by all stakeholders in society – through sensible regulations, sound ethical norms and international collaboration. In September, it was reported that Zhu Songchun, an expert in artificial intelligence at UCLA, had been recruited by Peking University. It was seen as part of the Chinese government’s strategy to become a global leader in AI, amid competition with the US for technological dominance. In the West, a new US administration has been elected amid anxiety about cyber interference. Tech giants Apple, Facebook, Amazon and Google are facing antitrust accusations in the US, while the European Union has unveiled sweeping legislation to enable regulators to head off bad behaviour by big tech before it happens. Meanwhile, Shoshana Zuboff’s bestselling book The Age of Surveillance Capitalism has alerted social media users to a new economic order that “claims human experience as free raw material for hidden commercial practices”. In addition, the public is regularly bombarded with dystopian scenarios (as in Black Mirror) about intelligent machines taking control of society, often at the service of ruling elites or criminals. The dual character of AI – its promise for social good and its threat to human society through absolute control – has been a familiar theme for some time. AI systems are also evolving rapidly, outpacing regulatory processes for social equity and privacy. Especially during a pandemic, the urgent question facing governments, the private sector and universities is how to promote public trust in the beneficial side of AI technologies. One way to build public trust is to deliver for the global common good, beyond national or corporate self-interest.
With the world facing crises ranging from the current pandemic to worsening inequalities and the massive effects of climate change, it is obvious that no single country can solve any of them alone. The technological advances of AI already hold out promise in everything from medical diagnosis and drug development to creating smart cities and transitioning to a renewable-energy economy. MIT has reportedly developed an app that can diagnose 98.5 per cent of Covid-19 infections from the sound of people coughing into their phones. A recent report on “AI for Social Good”, co-authored by the UN, Google and the Association of Pacific Rim Universities, concluded that AI can help us “build back better” and improve the quality of life. But it also said “the realisation of social good by AI is effective only when the government adequately sets rules for appropriate use of data”. With respect to limiting intrusions on individual rights, it said that “the challenge is how to balance the reduction of human rights abuses while not suffocating the beneficial uses”. These observations go to the core of the problem. Are governments accountable in real ways to their citizens, or are they more aligned with the interests of hi-tech monopolies? Who owns the new AI technologies? Are they used for concentrating power and wealth, or do they benefit those most in need of them? The report recommends that governments develop abilities for agile regulation; for negotiation with interest groups to establish ethical norms; for leveraging the private sector for social and environmental good; and for building and retaining local know-how. While these issues will be approached in different ways in each country, international collaboration will be essential. International organisations, globally connected social movements and enhanced political participation by informed citizens will be critical in shaping the environment for regulation in the public interest.
At the same time, geopolitical rivalry need not constrain our building of trust and cooperation for the common good. The Covid-19 crisis has shown that it is possible for governments to move decisively towards the public interest and align new technologies to solutions that benefit everyone. We should not forget that, in January, a team of Chinese and Australian researchers published the first genome of the new virus and the genetic map was made accessible to researchers worldwide. International organisations such as the World Health Organization and international collaborations by biomedical researchers also play critical roles in building public trust and countering false information. Universities have played an important role in advancing research cooperation with the corporate sector and in bolstering public confidence that global access takes priority over the profit motive of Big Pharma. For example, the vaccine developed by Oxford University and AstraZeneca will be made available at cost to developing countries and can be distributed without the need for special freezers. Peking University and UCLA are cooperating with the National University of Singapore and the University of Sydney to exchange best practices on Covid-19 crisis management. Competition for international dominance in AI applications also fades as we focus on applying its beneficial uses to common challenges. Global frameworks for cooperation such as the UN 2030 Agenda for Sustainable Development or the Paris Climate Agreement set out the tasks. Google, for example, has established partnerships with universities and government labs for advanced weather and climate prediction, with one project focusing on communities in India and Bangladesh vulnerable to flooding. To deal with the use and misuse of advanced technologies like AI, we need a renewed commitment to multilateralism and to neutral platforms on which to address critical challenges. 
Universities that collectively exercise independent ethical leadership internationally can also, through external partnerships, help to shape national regulatory regimes for AI that are responsive to the public interest. Find out more about the UN ESCAP-APRU-Google AI for Social Good project here.
December 31, 2020
APRU on Times Higher Education: ‘Oversight needed’ so AI can be used for good in Asia-Pacific
By Joyce Lau. Original post in THE.
Academics urge governments to set up frameworks for ethical use of technology and reaffirm the need for greater multidisciplinarity.
Asia-Pacific universities could use artificial intelligence to harness their strengths in combating epidemics and other global problems, but only if there were regulatory frameworks to ensure ethical use, experts said. Artificial Intelligence for Social Good, a nearly 300-page report by academics in Australia, Hong Kong, India, Singapore, South Korea and Thailand, was launched at an event held by the Association of Pacific Rim Universities (APRU), the United Nations’ Economic and Social Commission for Asia and the Pacific (ESCAP) and Google. The research, co-published by APRU and Keio University in Japan, laid out recommendations for using AI in the region to achieve the UN’s sustainable development goals (SDGs). While the report outlined the great potential for AI in the region, it also said that risks must be managed, privacy concerns must be addressed and testing must be conducted before large-scale technology projects were implemented. Christopher Tremewan, APRU’s secretary general and a former vice-president at the University of Auckland, said that Pacific Rim universities “have incredible research depth in the challenges facing this region, from extreme climate events and the global Covid-19 pandemic to complex cross-border problems. Their collective expertise and AI innovation makes a powerful contribution to our societies and our planet’s health.” However, he also said there were potential problems with “rapid technological changes rolled out amid inequality and heightened international tensions”. “As educators, we know that technology is not neutral and that public accountability at all levels is vital,” he said.
The APRU, which includes 56 research universities in Asia, Australasia and the west coast of the Americas, is based at the Hong Kong University of Science and Technology. In answering questions, Dr Tremewan drew on his own observations in New Zealand and Hong Kong, two places where Covid responses have been lauded. “The feeling in Hong Kong is that there is tremendous experience from Sars,” he said, referring to a 2003 epidemic. “The universities here have capability in medical research, particularly on the structure of this type of disease, and also in public health strategy.” Meanwhile, in New Zealand, “confidence in science” and the prominence of researchers and experts speaking out aided in the public response. “Universities are playing key roles locally and internationally,” he said, adding that expertise was also needed in policy, communications and social behaviour. “The solutions are multidisciplinary, not only technological or medical.” Soraj Hongladarom, director of the Center for Ethics of Science and Technology at Chulalongkorn University in Bangkok, and one of the authors of the report, said their work had “broken new ground” in Asia. “We’re trying to focus on the cultural context of AI, which hasn’t been done very much in an academic context,” he said. Professor Hongladarom, a philosopher, urged greater interdisciplinarity in tackling social problems. “Engineers and computer scientists must work with social scientists, anthropologists and philosophers to look beyond the purely technical side of AI – but also at its social, cultural and political aspects,” he said. He added that policy and regulation were vital in keeping control over technology: “Every government must take action – it’s particularly important in South-east Asia.” Dr Tremewan said that, aside from crossing disciplinary boundaries, AI also had to cross national borders. “Universities have huge social power in their local contexts. 
So how do we bring that influence internationally?” he asked. Find out more about the UN ESCAP-APRU-Google AI for Social Good project here.
November 12, 2020
APRU releases AI for Social Good report in partnership with UN ESCAP and Google: Report calls for AI innovation to aid post-COVID recovery
Hong Kong, November 10, 2020 – APRU partners with UN ESCAP and Google to launch the AI for Social Good report. This is the third project exploring AI’s impact on Asia-Pacific societies to offer research-based recommendations to policymakers, focusing on how AI can empower work towards the 2030 UN Sustainable Development Goals. With COVID-19’s ongoing social and economic fallout, the role of AI in aiding recovery is even more pronounced. Researchers’ insights underpin the report’s recommendations for developing an environment and governance framework conducive to AI for Social Good at a time of increasingly rapid technological change amidst inequality, the urgent transition to renewable energy and unexpected international tensions.

Chris Tremewan, Secretary General of APRU, commented: “APRU members have incredible research depth in the challenges facing this region, from extreme climate events and the global COVID-19 pandemic to complex cross-border problems. Bringing their expertise and AI innovation together in a collective effort will make a powerful contribution to our societies and the health of the planet.”

Jonathan Wong, Chief of Technology and Innovation, United Nations ESCAP, said: “We designed the 2030 UN Sustainable Development Goals with a strong commitment to harness AI in support of inclusive and sustainable development while mitigating its risks. Public policies play a critical role in promoting AI for social good while motivating governments to regulate AI development and applications so that they contribute to aspirations of a sustainable future.”

Dan Altman, AI Public Policy, Google, shared: “Google and APRU share the belief that AI innovation can meaningfully improve people’s lives. Google introduced the AI for Social Good program to focus our AI expertise on solving humanitarian and environmental challenges.
Google is excited to be working with experts across all sectors to create solutions that make the biggest impact.” The report’s multidisciplinary studies provide the knowledge and perspectives of researchers from Singapore, Hong Kong, Korea, Thailand, India, and Australia. Combining local understanding with an international outlook is essential for policymakers to respond with regulation that enables international tech firms to contribute to the common good. The key recommendations are:
- Multi-stakeholder governance must push innovation to realize AI’s full potential. In addition to overseeing the major players controlling data, governance must take manageable risks and conduct controlled testing before large-scale tech implementation.
- Establish standardized data formats and interoperability. Information asymmetries create inequities, so standardized data formats and interoperability between systems are critical.
- Address data privacy concerns and protect individual dignity. Data needs anonymization, encryption, and distributed approaches, and governments must enforce the protection of privacy and individual dignity. Incorporating the Asian value of altruism in data governance can also help encourage data sharing for the social good.
November is “AI for Social Good Month”, featuring investigative discussions, conversations, and policy briefings with leading AI thinkers and doers from Asia and beyond. Visit the Summit here. View the original release here. Media contact: [email protected] / [email protected]
November 10, 2020
AI for Social Good network releases new report
AI For Social Good, a partnership between APRU, UN ESCAP and Google, released a new report exploring the impact of AI on societies in the Asia-Pacific region and offering research-based recommendations to policymakers. Providing the perspectives of multidisciplinary researchers from Singapore, Hong Kong, Korea, Thailand, India, and Australia, each chapter of the report presents a unique research-based policy paper offering a set of key conclusions and policy suggestions aimed at supporting and informing policymakers and policy influencers. The report seeks to inform the development of governance frameworks that can help to address the risks and challenges associated with AI, while maximizing the potential of the technology to be developed and used for good. It also furthers understanding of the enabling environment in which policymakers can promote the growth of an AI for Social Good ecosystem in their respective countries, both in terms of AI inputs (e.g., data, computing power, and AI expertise) and of ensuring that the benefits of AI are shared widely across society. The AI for Social Good network was launched in December 2018 under the academic lead of Keio University Vice-President Jiro Kokuryo. It aims to provide a multi-year platform enabling scholars and experts to collaborate with policymakers to generate evidence and cross-border connections. “We worked very hard to come up with a set of recommendations that will make AI truly contribute to the well-being of humankind. I hope this voice from Asia will be heard not only within the region, but by people around the world,” said Kokuryo. “Governments are encouraged to invest in promoting AI solutions and skills that bring greater social good and help us ‘build back better’ as we recover from the impacts of the COVID-19 pandemic,” said Mia Mikic, Director of the United Nations ESCAP’s Trade, Investment and Innovation Division.
To share the report’s findings with policymakers, industry leaders, and academics from around the region, the Virtual AI for Social Good Summit will be held in November. The series will feature working and policy insight panels, with details to be shared on apru.org soon. Find the full report here. See a press release from Keio University here.
September 9, 2020
AI Policy for the Future: Can we trust AI?
AI Policy for the Future: Can we trust AI?
Date & Time: August 23, 9 am to 5 pm
Venue: Korea Press Center, 20th floor, International Conference Hall
Seoul National University Initiative will host a one-day conference focusing on trust in AI for the future. The conference will invite AI experts and scholars from academia, industry, and government to address current concerns about accountability and to enhance socially beneficial outcomes related to AI governance through technology, policy, and law. Critical issues such as fairness and equity will be analyzed at both the macro and micro level to develop key recommendations on the responsible use of AI. Find the program here. Visit the website at https://bit.ly/31A1iG9
August 16, 2019
APRU stepping up research infrastructure and network to build technology systems free from human rights risks
Hong Kong, August 9, 2019 — APRU has been doubling down on its efforts to build strategies that ensure privacy and secure technology systems—free from human rights risks. At the Responsible Business and Human Rights Forum (RBHRForum), held in mid-June in Bangkok, Thailand, APRU Director for Policy & Programs Christina Schönleber spoke on the challenges and risks associated with AI, such as the black-box nature of the technologies, privacy and security concerns, and the disruption of work and access to a transformed workforce. She proposed new ways for governments to adapt to the fast-changing environment to ensure new technologies have a positive impact, and advised on the ways universities can help governments navigate this environment. APRU is currently working with ESCAP and Google on developing research on AI for Social Good, with a focus on policy frameworks and governance. The Asia Pacific AI for Social Good initiative was officially launched by APRU, the United Nations ESCAP and Google in December last year. “There is the existential fear that AI acts like a black box and ultimately society will be manipulated, so APRU’s main objective is to build on the research within member institutions to propose implementation of a framework certification system for trustworthy AI,” Schönleber said at a #RBHRForum panel. “There is a lot to gain if we actually drive policy development for disruptive technologies for social good, including the education and re-education of the new and existing workforce; faster implementation of the SDGs; and more trade bargaining power to vulnerable economies against traditionally strong trading partners. We have identified these issues in our ‘Transformation of Work in the Asia Pacific’ project,” she added. Universities are key players alongside government and industry, given the importance of a high degree of digital literacy for the emerging and young generations, explained Schönleber.
STEM education is crucial for this, as well as for understanding how digital tools and technologies work. Universities should take an active approach to working with schools to support the early introduction of essential STEM skills in primary school curricula. “The curriculum should facilitate the development of soft skills, as the future workforce will need more creative and innovative abilities to navigate the AI era,” Schönleber said. “Meanwhile, governments should play a key role in mitigating the psychological effects of humans’ interaction with robots by creating a supportive infrastructure to facilitate smooth transitions to digital futures,” she added. The #RBHRForum is an annual event co-organized by the Royal Thai Government, the Organisation for Economic Co-operation and Development (OECD), the United Nations Development Programme (UNDP), the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP) and the International Labour Organization (ILO), with the participation of the Working Group on Business and Human Rights. The aim is to raise awareness and ensure effective implementation of the Business and Human Rights and Responsible Business Conduct agendas in Asia-Pacific and beyond.
August 15, 2019
Kick-off for AI for Social Good―A United Nations ESCAP-APRU-Google Collaborative Network and Project
On Wednesday, June 5, a kick-off meeting for the “AI for Social Good ― a United Nations ESCAP-APRU-Google Collaborative Network and Project” was held at Keio University’s Mita Campus. The project brought together eight scholars from across the Asia-Pacific under the academic lead of Keio’s Vice-President Professor Jiro Kokuryo, with the support of UN ESCAP and Google and with organization by APRU, the Association of Pacific Rim Universities. The scholars, whose academic backgrounds range from technical aspects of AI such as computer science to ethical perspectives including philosophy, held lively discussions on their research plans and provided mutual feedback, alongside representatives from the project organizations ― UN ESCAP, Google, and APRU. Their work at meetings set to take place over the coming year will be published as a policy recommendation paper for government policymakers and other stakeholders, including those in industry, NGOs, and academic institutions. Originally published by Keio University as “Vice-President Professor Jiro Kokuryo Chairs Meeting of AI for Social Good ― A United Nations ESCAP-APRU-Google Collaborative Network and Project”.
June 15, 2019
APRU Partners with United Nations ESCAP and Google on AI for Social Good
Artificial Intelligence (AI) has the potential to benefit many sectors, while it may also greatly reshape societal structures. For example, it is widely expected that the future of work will be considerably transformed by the ubiquity of AI in this digital era. However, current research remains limited on how AI can positively transform economies and societies, on governance and policy needs, and on key areas of concern relating to the technology. To fill this gap, APRU, United Nations ESCAP and Google have come together to set up a new research network called ‘AI for Social Good’, which was officially launched at the start of the Asia-Pacific AI for Social Good Summit in Bangkok on December 13, 2018.
Launch of the Asia-Pacific AI for Social Good Summit in Bangkok, Thailand
The AI for Social Good network will provide a multi-year platform to enable scholars and experts to collaborate with policymakers to generate evidence and cross-border connections on “AI for Social Good”, while promoting an enabling policy environment at both domestic and international levels. “ESCAP has a mandate to strengthen the regional technology and innovation agenda through our role as a think tank, policy adviser and convener,” says Armida Alisjahbana, United Nations Under-Secretary-General and ESCAP Executive Secretary. “We hope that multi-stakeholder partnerships, such as the ones we are launching here today, will support member States in their efforts to harness technology and innovation in pursuit of the Sustainable Development Goals.”
Armida Alisjahbana, UN Under-Secretary-General and ESCAP Executive Secretary
Although AI’s revolutionary prowess is well known, it has yet to be extensively applied to scale and sustain impact for all in important sectors such as education and social inclusion.
The AI for Social Good collaboration supports policy frameworks that will ultimately benefit populations across the Asia-Pacific by sharing best practices and solutions that promote AI’s benefits. The project is a continuation of APRU’s previous AI research project with Google, AI for Everyone: Benefitting from and Building Trust in the Technology. The initiative will see scholars across the region develop and publish a collection of research-based policy recommendation papers to inform policy processes supporting AI for Social Good. Keio University Vice-President Jiro Kokuryo is the academic lead and will be supported by a Steering Committee bringing together policymakers and experts from across Asia. Policymakers, industry, universities and other stakeholders will convene to use the research results to develop partnerships that grow and sustain the use of AI for social good. “This network will bring together leading academics from around the region to produce research on how to promote the use of AI for social good and how best to manage risks and concerns,” says Kent Walker, Google Senior Vice-President of Global Affairs.
“It will also be a forum for these academics to discuss their research with government, civil society and the private sector.”
(L-R): Jake Lucchi, Google Head of AI Policy, APAC; Jiro Kokuryo, Keio University Vice-President; Christina Schönleber, APRU Director Policy and Programs; Atsuko Okuda, ESCAP Chief, ICT and Development Section; Marta Pérez Cusó, UN ESCAP Economic Affairs Officer, at the Asia-Pacific AI for Social Good Research Network event
The AI for Social Good project’s first meeting is planned for Tokyo alongside the G20 Summit, with a second meeting and stakeholder event to be held in Bangkok, Hong Kong or Tokyo in the winter of 2019-2020. The submitted papers will be collated into a final report to be published in June 2020 and disseminated widely by UN ESCAP and Google. Find more photos of the event here.
February 28, 2019