Artificial Intelligence and Public Policy

Edward Hunter Christie

 

AI Policy Blog

11 June 2021

 

This note addresses the scope of enquiry of the AI Policy Blog series and some of the broad challenges that governments and policy makers need to contend with as they seek both to adopt and to govern Artificial Intelligence and related technologies.

 

Artificial Intelligence (AI) is the ability of machines to perform tasks that typically require human intelligence – for example, recognising patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems [1]. 

 

The dominant current wave of AI is centered on Machine Learning, namely software that runs mathematical algorithms, aimed at pattern recognition and prediction, whose performance improves as additional training data are ingested. For example, a classification algorithm can be trained on a large set of correctly labelled examples to determine to which previously encountered category a newly observed object belongs. Deep Learning is a subset of Machine Learning that uses Artificial Neural Networks with multiple computational layers. Machine Learning approaches can be trained on, and make predictions about, any kind of data that can be represented digitally, such as text, audio, images, or video footage. Machine Learning outperforms humans, in terms of both predictive performance and of course speed, on an increasing range of narrow pattern recognition and prediction tasks. This is the central reason for its increasing adoption across vast areas of human activity.
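To make the classification example above concrete, the following is a minimal, illustrative sketch in Python of the supervised-learning pattern described: a model is "trained" on correctly labelled examples and then assigns a newly observed object to the closest previously learned category. The nearest-centroid method and all data values here are assumptions chosen for illustration, not drawn from the article.

```python
# Minimal sketch of supervised classification: learn one centroid (mean
# point) per label from labelled examples, then assign a new observation
# to the label whose centroid is nearest.
from collections import defaultdict
import math

def train(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in examples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def predict(centroids, point):
    """Return the label with the centroid closest to the new observation."""
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

# Correctly labelled training examples (toy two-dimensional features).
training_data = [((1.0, 1.2), "small"), ((0.8, 1.0), "small"),
                 ((5.0, 5.5), "large"), ((5.4, 4.9), "large")]

model = train(training_data)
print(predict(model, (1.1, 0.9)))  # a new observation near the "small" group
```

Adding further correctly labelled examples refines the centroids, which is a simple instance of the general point that predictive performance improves with additional training data.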

 

As noted by the European Commission in the explanatory memorandum of the European Union’s proposed Artificial Intelligence Act, by improving prediction, optimising operations and resource allocation, and personalising digital solutions, AI can provide key competitive advantages to companies and to the economy at large, and also support socially and environmentally beneficial outcomes.

 

AI can also be used for military purposes, where the range of potential applications is equally vast, including for example analysing and classifying visual data for intelligence assessments, optimising logistics, operating support vehicles, or tracking hostile targets. 

 

The formidable potential of AI naturally implies a range of new risks and threats, within economies and societies, as well as in terms of relations between states. Recognising the importance of AI, governments and international organisations have adopted general AI Strategies and are increasingly developing and adopting more detailed policies, with the aim of fostering technological development and adoption while also ensuring good governance and sound ethical principles.

 

From broad-based AI Strategies and commitments to ethical principles, the trend is accelerating towards more detailed sector-specific and issue-specific strategies and implementation plans, and towards the setting of norms, including legal norms, to regulate AI applications – for example, the European Union's proposed Artificial Intelligence Act of 21 April 2021.

 

States will have a natural interest in adapting policies and programmes aimed at further advances in AI and related technologies [2], and at their adoption by both private sector and public sector entities, across industries and across areas of public policy. Policy questions regarding R&D programmes and funding, innovation ecosystems, state venture capital, and other policies to foster competitiveness in new technologies will be relevant topics in this blog series. Separately, contributions exploring best practices in digital transformation and technology adoption in public sector institutions will also be welcome.

 

The theme of ethical and trustworthy AI will remain an important guardrail for policy makers and practitioners alike. The OECD's values-based principles for the responsible stewardship of trustworthy AI were adopted by the governments of the OECD's member nations [3] in May 2019 and by the governments of the G20 nations [4] in June 2019.

 

The practical implementation, monitoring, and verification of such norms and commitments will require well-calibrated, inter-disciplinary efforts, connecting economic, legal, ethical, and political understanding with technical expertise. States will establish new regulatory bodies and mechanisms or enhance existing ones; standards and legal norms will be proposed, discussed, amended, and adopted. Legal and organisational avenues for challenges, reviews, and redress will develop. This is an exciting time for professionals involved in matters of good governance, regulatory compliance, and related fields. Cross-cutting contributions addressing these questions are particularly welcome, as are contributions that focus on particular sectors of domestic policy, such as health, transport, or home affairs – as well as contributions that explore the linkages between emerging AI policies and other digital policies, such as data protection and the regulation of digital services and markets.

 

In terms of foreign, security, and defence policy, the international security landscape has shifted rapidly in the last decade, marking a return to Great Power competition. Understanding the technology competition between the Great Powers, and its impacts on other states in the international system, is of great interest for foreign and security policy experts and practitioners, and for economists with a focus on international security questions. Questions pertaining to the protection of intellectual property rights, as well as to economic statecraft (including export controls, foreign investment screening, and investment bans), deserve particular attention.

 

With respect to ethical and legal standards in the realm of defence, distinct policy developments have emerged, with national commitments such as those of the United States [5], as well as at the United Nations with the Group of Governmental Experts on Lethal Autonomous Weapon Systems, leading to the formulation of 11 guiding principles [6]. Contributions on the evolving norms and commitments regarding the military applications of AI, and how such commitments may be implemented in practice, would be most welcome.

 

In summary, it is with a view to these two complementary pillars of public policy – how to support the development and adoption of AI and related technologies, on the one hand, and how to ensure their good governance and regulation, on the other – that AI Policy Blog will seek contributions, analyses, and insights.

 

How to cite this article (APA):

Christie, E. H. (2021, June 11). Artificial Intelligence and Public Policy. AI Policy Blog.

 

Notes

 

[1] Reding, D. F. & J. Eaton (2020). Science & Technology Trends 2020-2040 – Exploring the S&T Edge. NATO Science & Technology Organization.

[2] Such as Big Data Analytics, Data Science, Autonomy and Robotics, and Quantum Computing.

[3] OECD (2019). Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449.

[4] G20 (2019). G20 Ministerial Statement on Trade and Digital Economy.

[5] US Department of Defense. (2020, February 24). DOD Adopts Ethical Principles for Artificial Intelligence [press release].

[6] See Annex III of United Nations (2019). CCW/MSP/2019/9.