Governance – Artificial Intelligence
Forty-two countries adopt new OECD Principles on Artificial Intelligence
OECD and partner countries formally adopted the first set of intergovernmental policy guidelines on Artificial Intelligence (AI) today, agreeing to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.
The OECD’s 36 member countries, along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania, signed up to the OECD Principles on Artificial Intelligence at the Organisation’s annual Ministerial Council Meeting, taking place today and tomorrow in Paris and focused this year on “Harnessing the Digital Transition for Sustainable Development”. Developed with guidance from an expert group of more than 50 members drawn from governments, academia, business, civil society, international bodies, the tech community and trade unions, the Principles comprise five values-based principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation. They aim to guide governments, organisations and individuals in designing and running AI systems in a way that puts people’s best interests first, and to ensure that those who design and operate these systems are held accountable for their proper functioning.
“Artificial Intelligence is revolutionising the way we live and work, and offering extraordinary benefits for our societies and economies. Yet, it raises new challenges and is also fuelling anxieties and ethical concerns. This puts the onus on governments to ensure that AI systems are designed in a way that respects our values and laws, so people can trust that their safety and privacy will be paramount,” said OECD Secretary-General Angel Gurría. “These Principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all.”
The AI Principles have the backing of the European Commission, whose high-level expert group has produced Ethics Guidelines for Trustworthy AI, and they will be part of the discussion at the forthcoming G20 Leaders’ Summit in Japan. The OECD’s digital policy experts will build on the Principles in the months ahead to produce practical guidance for implementing them.
While not legally binding, existing OECD Principles in other policy areas have proved highly influential in setting international standards and helping governments to design national legislation. For example, the OECD Privacy Guidelines, which set limits to the collection and use of personal data, underlie many privacy laws and frameworks in the United States, Europe and Asia. The G20-endorsed OECD Principles of Corporate Governance have become an international benchmark for policy makers, investors, companies and other stakeholders working on institutional and regulatory frameworks for corporate governance.
In summary, the AI Principles state that:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
- AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
The OECD recommends that governments:
- Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
- Foster accessible AI ecosystems with digital infrastructure and technologies, and mechanisms to share data and knowledge.
- Create a policy environment that will open the way to deployment of trustworthy AI systems.
- Equip people with the skills for AI and support workers to ensure a fair transition.
- Co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI.