Company


EXPLORING INSTITUTIONS FOR GLOBAL AI GOVERNANCE

July 11, 2023


NEW WHITE PAPER INVESTIGATES MODELS AND FUNCTIONS OF INTERNATIONAL INSTITUTIONS
THAT COULD HELP MANAGE OPPORTUNITIES AND MITIGATE RISKS OF ADVANCED AI

Growing awareness of the global impact of advanced artificial intelligence (AI)
has inspired public discussions about the need for international governance
structures to help manage opportunities and mitigate the risks involved.

Many discussions have drawn on analogies with the ICAO (International Civil
Aviation Organisation) in civil aviation; CERN (European Organisation for
Nuclear Research) in particle physics; IAEA (International Atomic Energy Agency)
in nuclear technology; and intergovernmental and multi-stakeholder organisations
in many other domains. And yet, while analogies can be a useful start, the
technologies emerging from AI will be unlike aviation, particle physics, or
nuclear technology.

To succeed with AI governance, we need to better understand:

 1. What specific benefits and risks we need to manage internationally.
 2. What governance functions those benefits and risks require.
 3. What organisations can best provide those functions.
    

Our latest paper, with collaborators from the University of Oxford, Université
de Montréal, University of Toronto, Columbia University, Harvard University,
Stanford University, and OpenAI, addresses these questions and investigates how
international institutions could help manage the global impact of frontier AI
development, and make sure AI’s benefits reach all communities.

THE CRITICAL ROLE OF INTERNATIONAL AND MULTILATERAL INSTITUTIONS

Access to certain AI technology could greatly enhance prosperity and stability,
but the benefits of these technologies may not be evenly distributed or focused
on the greatest needs of underrepresented communities or the developing world.
Inadequate access to internet services, computing power, or machine learning
training and expertise may also prevent certain groups from fully benefiting
from advances in AI.

International collaborations could help address these issues by encouraging
organisations to develop systems and applications that address the needs of
underserved communities, and by ameliorating the educational, infrastructural,
and economic obstacles that keep such communities from making full use of AI
technology.

Additionally, international efforts may be necessary for managing the risks
posed by powerful AI capabilities. Without adequate safeguards, some of these
capabilities – such as automated software development, chemistry and synthetic
biology research, and text and video generation – could be misused to cause
harm. Advanced AI systems may also fail in ways that are difficult to
anticipate, creating accident risks with potentially international consequences
if the technology isn’t deployed responsibly.

International and multi-stakeholder institutions could help advance AI
development and deployment protocols that minimise such risks. For instance,
they might facilitate global consensus on the threats that different AI
capabilities pose to society, and set international standards around the
identification and treatment of models with dangerous capabilities.
International collaborations on safety research would also further our ability
to make systems reliable and resilient to misuse.

Lastly, in situations where states have incentives (e.g. deriving from economic
competition) to undercut each other's regulatory commitments, international
institutions may help support and incentivise best practices and even monitor
compliance with standards.

FOUR POTENTIAL INSTITUTIONAL MODELS

We explore four complementary institutional models to support global
coordination and governance functions:

 * An intergovernmental Commission on Frontier AI could build international
   consensus on opportunities and risks from advanced AI and how they may be
   managed. This would increase public awareness and understanding of AI
   prospects and issues, contribute to a scientifically informed account of AI
   use and risk mitigation, and be a source of expertise for policymakers.
 * An intergovernmental or multi-stakeholder Advanced AI Governance Organisation
   could help internationalise and align efforts to address global risks from
   advanced AI systems by setting governance norms and standards and assisting
   in their implementation. It may also perform compliance monitoring functions
   for any international governance regime.
 * A Frontier AI Collaborative could promote access to advanced AI as an
   international public-private partnership. In doing so, it would help
   underserved societies benefit from cutting-edge AI technology and promote
   international access to AI technology for safety and governance objectives.
 * An AI Safety Project could bring together leading researchers and engineers,
   and provide them with access to computation resources and advanced AI models
   for research into technical mitigations of AI risks. This would promote AI
   safety research and development by increasing its scale, resourcing, and
   coordination.

OPERATIONAL CHALLENGES

Many important open questions remain about the viability of these institutional
models. For example, a Commission on Frontier AI will face significant
scientific challenges given the extreme uncertainty about AI trajectories and
capabilities and the limited scientific research on advanced AI issues to date.

The rapid rate of AI progress and limited capacity in the public sector on
frontier AI issues could also make it difficult for an Advanced AI Governance
Organisation to set standards that keep up with the risk landscape. The many
difficulties of international coordination raise questions about how countries
will be incentivised to adopt its standards or accept its monitoring.

Likewise, the many obstacles to societies fully harnessing the benefits from
advanced AI systems (and other technologies) may keep a Frontier AI
Collaborative from optimising its impact. There may also be a difficult tension
to manage between sharing the benefits of AI and preventing the proliferation of
dangerous systems.

And for the AI Safety Project, it will be important to carefully consider which
elements of safety research are best conducted through collaborations versus the
individual efforts of companies. Moreover, a Project could struggle to secure
adequate access from all relevant developers to the most capable models for
safety research.

Given the immense global opportunities and challenges presented by AI systems on
the horizon, greater discussion is needed among governments and other
stakeholders about the role of international institutions and how their
functions can further AI governance and coordination.

We hope this research contributes to growing conversations within the
international community about ways of ensuring advanced AI is developed for the
benefit of humanity.

Read our paper

Authors
Lewis Ho
* External authors

