
THE BLETCHLEY PARK PROCESS COULD BE A BUILDING BLOCK FOR GLOBAL COOPERATION ON
AI SAFETY


Joshua P. Meltzer, Senior Fellow - Global Economy and Development

Paul Triolo, Partner, Dentons Global - Albright Stonebridge

October 4, 2024

Updated October 7, 2024


Britain's Prime Minister Rishi Sunak speaks during the closing press conference
on the second day of the UK Artificial Intelligence (AI) Safety Summit at
Bletchley Park, Milton Keynes, November 2, 2023. Justin Tallis/Pool via REUTERS

Considerable progress has been made on international governance of artificial
intelligence (AI). This includes work under the G7 Hiroshima Process, at the
Organization for Economic Co-operation and Development (OECD), the Global
Partnership on AI (GPAI), international standards bodies, and in various U.N.
bodies. Meanwhile, bilateral engagement on AI, including the U.S.-EU Trade and
Technology Council, is in a holding pattern pending the outcome of the U.S.
presidential election. A more recent entrant into this global AI governance
space is the so-called Bletchley Park process, which comprises the development
and global networking of AI safety institutes (AISIs) in a number of countries.
The process was kicked off by a meeting at Bletchley Park in November 2023,
spearheaded by the United Kingdom and the United States and attended by China
and a small number of other countries. It aims to develop a framework through
which governments and companies developing AI can assess AI safety. A follow-up
meeting in Seoul in May 2024 provided momentum to this process, and the key
players are now gearing up for a February 2025 AI Action Summit in Paris that
will be critical to demonstrating ongoing high-level commitment to the process
and to determining whether it can deliver on AI safety. The U.S. recently
announced that it will host a global AI safety summit in November with the goal
of kick-starting technical collaboration among the various AISIs ahead of the
Paris AI Action Summit.

While progress to date has been significant, major challenges to reaching
agreement on a networked approach to AI safety remain. The following takes a
closer look at where the Bletchley Park process now stands in the run-up to
Paris 2025.


THE BLETCHLEY PARK PROCESS: A NETWORKED APPROACH TO ADDRESSING AI SAFETY

The Bletchley Park AI Safety Summit last November was launched amid growing
attention by governments and industry to addressing risks from so-called
frontier AI models: generative AI models such as OpenAI's GPT-4, Anthropic's
Claude, and Meta's Llama, to name a few.

Indeed, since the release in November 2022 of OpenAI's ChatGPT, governments in
the U.S., European Union, U.K., and China in particular had stepped up efforts
to address safety issues arising from generative AI models. For instance, the
EU AI Act was updated and passed in 2024 to include specific and overlapping
obligations for generative AI and foundational AI models. These include
obligations to train and develop generative AI with state-of-the-art safeguards
against content breaching EU laws, to document training data for copyright
purposes, and to comply with stronger transparency obligations. In the U.S.,
the White House secured voluntary commitments by seven AI companies (since
expanded to 15 major AI companies) to address and report on risk from
foundational AI models. Chinese regulators also pushed forward significant
regulations governing generative AI during 2023, focused on content, training
data, and managing risks such as misinformation, national security, data
privacy, intellectual property rights, social stability, and ethical concerns.
In September 2024, Chinese regulators released a document that sets out a
framework for AI governance sitting somewhere between the EU AI Act and the
U.S. approach, providing a comprehensive blueprint for AI governance principles
and safety guidelines.

The Bletchley Park process has the potential to build on these domestic AI
developments and become a key building block for global cooperation on AI
safety for foundational AI models. Bletchley Park put testing and evaluation of
AI safety front and center, identified AI developers as having a particular
responsibility for ensuring the safety of foundational AI systems, and
underscored the importance of international cooperation in understanding and
mitigating AI risk. The goal is to develop a largely voluntary agreement on how
to assess and mitigate AI risk, and on how to test and verify these often
private-sector efforts.


THE SEOUL MEETING ON AI SAFETY

The May meeting in Seoul was important in establishing the continuity of the
process and in advancing key documents that will form part of the overall
framework. The range of countries participating at the Seoul Summit was similar
to that at Bletchley Park, though in most cases ministers rather than leaders
attended. Significantly, while China signed onto the Bletchley Park outcome and
attended the Seoul meeting, it was not a signatory to the Seoul Declaration.
The reason for this is not entirely clear but could signal reluctance by China
to sign on to AI governance mechanisms it views as promoting a Western-centric
view of global AI governance.

There were a number of important outcomes from the Seoul Summit. First, the
Seoul Declaration and Ministerial Statement reemphasized the signatories'
commitment to AI safety, innovation, and inclusivity, stressing a commitment to
guarding against the full spectrum of AI risks while recognizing the
game-changing potential of AI across sectors. They articulate AI safety
principles that include “transparency, interpretability and explainability,
privacy and accountability, meaningful human oversight and effective data
management and protection.” These principles should help guide the development
of AI safety practices within companies and governments, as well as standards
and practices for AI safety.

The Seoul Declaration also includes an agreement to create or expand AI safety
institutes and to cooperate on AI safety research. In pursuit of this goal, the
declaration welcomes the Seoul Statement of Intent toward International
Cooperation on AI Safety Science, which underscores the importance of building
a “reliable, interdisciplinary, and reproducible body of evidence to inform
policy efforts related to AI safety.” The Seoul Summit also included an
intention to promote common scientific understanding of AI and referenced the
interim International Scientific Report on the Safety of Advanced AI, released
by the U.K. Department for Science, Innovation and Technology (DSIT) and
produced by an expert panel chaired by leading Canadian AI researcher Yoshua
Bengio.

When it comes to the innovation agenda, the Seoul Summit touched on a range of
important needs, such as research and development, workforce development,
privacy, protecting intellectual property, and energy and resource consumption.
On the goal of inclusivity, the documents mention developing AI systems that
protect human rights, strengthening social safety nets, and ensuring safety
from risks, including disasters and accidents. Compared to the commitments on
AI safety, however, the declaration and ministerial statement have little to
say about next steps by either companies or governments on innovation or
inclusivity. This is not surprising given the complexity of these issues, but
it underscores the challenge of keeping the Bletchley Park process focused on
AI safety. Indeed, whether the process can remain focused on delivering on AI
safety will be key to its success.

At Seoul, a smaller group of 10 nations and the EU also released the Seoul
Statement of Intent toward International Cooperation on AI Safety Science,
which committed signatories to creating an international network of
national-level AI safety institutes. The development of AISIs has been the most
concrete outcome from the Bletchley Park process so far. The U.K., the U.S.,
South Korea, Canada, Japan, Singapore, and France have set up AISIs, and
technical cooperation between the EU AI Office and the U.S. AISI has already
commenced. Currently, the AISIs are not regulators, and progress is needed on
how they will operate, share best practices, and establish a testing framework
for foundational AI models. In addition, for any global network of AISIs to
strengthen outcomes on AI safety, agreement will be needed on thresholds for
risk as well as on the standards and processes for testing and verifying steps
to understand and mitigate AI risk. The U.S. and the U.K. have already inked a
memorandum of understanding committing their AI safety institutes to
collaborate, and other MOUs between AISIs are expected. To facilitate
collaboration on AI safety research, testing, and evaluation, the U.S. AISI has
also concluded MOUs with Anthropic and OpenAI.

Another key outcome from Seoul was the Frontier AI Safety Commitments, signed
by 16 technology companies. The signatories agreed to publish the safety
frameworks they use to measure risk and the thresholds at which they will deem
risk “intolerable,” and they promised not to deploy models that exceed those
thresholds. These frameworks and thresholds are likely to be published prior to
or during the Paris AI Summit.
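
To make concrete the structure such a commitment implies, here is a minimal,
purely illustrative Python sketch of a deployment gate keyed to published risk
thresholds. The risk categories, scores, and threshold values are hypothetical
and are not drawn from any company's actual framework.

from dataclasses import dataclass

# Hypothetical risk categories and thresholds; real frameworks define their
# own risk domains (e.g., cyber, bio, autonomy) and measurement methods.
INTOLERABLE_THRESHOLDS = {
    "cyber_offense": 0.7,
    "bio_uplift": 0.5,
    "autonomous_replication": 0.3,
}

@dataclass
class EvalResult:
    category: str
    score: float  # normalized 0-1 risk score from pre-deployment testing

def deployment_gate(results: list[EvalResult]) -> bool:
    """Allow deployment only if no measured risk crosses its threshold."""
    for result in results:
        threshold = INTOLERABLE_THRESHOLDS.get(result.category)
        if threshold is not None and result.score >= threshold:
            print(f"Blocked: {result.category} score {result.score} "
                  f"meets or exceeds threshold {threshold}")
            return False
    return True

# Example: a model whose (hypothetical) bio-uplift score breaches the
# published threshold is withheld from deployment.
assert deployment_gate([EvalResult("cyber_offense", 0.2),
                        EvalResult("bio_uplift", 0.6)]) is False

The interesting design question, which the commitments leave to each company,
is how the risk scores themselves are produced and verified.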

While marking a significant step forward for international cooperation on AI
safety, the commitments so far come with a lower degree of specificity than the
White House voluntary AI commitments or the OECD Code of Conduct for
Organizations Developing AI Systems. For example, the White House voluntary
commitments and the OECD Code of Conduct include specific commitments to red
teaming for assessing AI risk, to watermarking AI-generated content so it can
be identified, and relatively detailed commitments to releasing transparency
reports that help users understand the capabilities and limitations of frontier
AI systems. That said, the commitment in the Frontier AI Safety Commitments to
“set out thresholds at which severe risks posed by a model or system, unless
adequately mitigated, would be deemed intolerable” is not reflected in the
voluntary commitments or the OECD Code of Conduct and holds out the promise of
a globally harmonized approach.

Another important aspect of the Frontier AI Safety Commitments is the range of
companies that signed. They include the Chinese AI company Zhipu.ai, the UAE
technology conglomerate and AI developer G42, and the UAE's Technology
Innovation Institute. Beijing's approval of Zhipu.ai's participation is likely
a trial balloon, allowing China's AI regulators to gauge the implications of
letting the country's leading AI firms sign on even where the government is
reluctant to do so. The role of the Chinese government, Chinese AI safety
organizations, and Chinese AI technology platforms and startups will be one of
the critical issues under discussion among the Bletchley Park process
organizers in the run-up to the Paris Summit in early 2025.


SINCE THE SEOUL SUMMIT: NEW FOUNDATIONAL AI MODELS AND GOVERNMENT ACTION

Within the broader AI sector, there have been major developments since the U.K.
Bletchley Summit last November that will shape preparations for the Paris
meeting in February 2025. The U.S. AISI appears to be ramping up capacity for
testing models and will convene a meeting of all AISIs in San Francisco in
November 2024. The U.K.'s AI safety institute has developed capacity fairly
rapidly, having released its open-source “Inspect” platform, which provides
benchmarks to evaluate model capabilities. Singapore has also released an AI
testing framework and toolkit.
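
To give a sense of what this tooling looks like in practice, below is a minimal
sketch of an evaluation defined with Inspect. The one-question dataset and
simple scorer are illustrative placeholders rather than an official benchmark,
and the model name in the run command is an assumption.

# pip install inspect-ai  (the U.K. AISI's open-source evaluation framework)
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def toy_capability_eval():
    # Hypothetical one-question dataset; real safety evaluations load curated
    # benchmarks covering capabilities such as cyber offense or persuasion.
    return Task(
        dataset=[Sample(input="What is the capital of France?",
                        target="Paris")],
        solver=generate(),  # ask the model under test for a completion
        scorer=match(),     # grade the completion against the target answer
    )

# Run from the command line against a model of your choice (model name is
# illustrative):
#   inspect eval toy_capability_eval.py --model openai/gpt-4o

Because evaluations are declared as reusable tasks, the same benchmark can be
run against different models, which is what makes a shared platform useful for
a network of safety institutes comparing results.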

In addition, developments at the state level in the U.S. are shaping AI
governance outcomes that will be relevant for U.S. leadership on AI safety. In
particular, California's controversial SB-1047 AI bill passed the legislature
in late August but was vetoed in September by Governor Gavin Newsom, who argued
that the bill did not adequately address AI risk because it applied only to
foundational AI models. At the federal level, Senate Majority Leader Chuck
Schumer suggested in late August that AI legislation could be coming soon. How
these developments play out in the U.S. will likely affect whether the
voluntary AI commitments that currently underpin the Bletchley Park process
turn into something more binding.


Within the broader industry, the development and release of ever more powerful
foundational AI models continues, underscoring just how important and difficult
it will be for international AI governance to keep pace. This includes OpenAI's
GPT-4 and Anthropic's Claude, along with more advanced open-source models from
Meta and French AI company Mistral. At a minimum, rapid progress in both the
power of foundational AI models and the availability of open-source models
underscores the importance of making progress on AI safety, developing a
globally networked approach to assessing AI risk, and agreeing on thresholds
beyond which AI risk is unacceptable. Rapid developments in the capacity of
foundational AI models also underscore that a nimble, iterative, and networked
approach to international AI governance, such as the Bletchley Park process,
will be needed if frameworks for international cooperation on AI are to keep
pace with AI innovation. The Bletchley Park process, with its focus on
networking AISIs, regular convenings to assess progress, and inclusion of
nongovernment stakeholders, could be an important element of the evolving AI
governance architecture.


LOOKING AHEAD: FRAGILE MULTILATERAL PROCESSES, GEOPOLITICS LIKELY TO BE MORE
IMPORTANT

The next several months will be critical in determining the direction and
relative success of the Bletchley Park process, which appears somewhat fragile
despite the clear progress over the past year. So far, the process appears to
have survived the change of government in the U.K., with the new Labour
government being supportive of work on AI regulation.

In the United States, the November election will be a major inflection point for
U.S. participation in international efforts to cooperate on AI safety. The
Republican platform, for example, calls for the revocation of the Biden
administration’s AI executive order, and it remains unclear how a second Trump
administration would view the Bletchley Park process and the participation of
China. An administration led by Kamala Harris would almost certainly continue to
resource existing efforts on AI.

In Europe, the EU is forming a new commission, and outcomes here will also
matter for the evolving EU approach to international cooperation. Now that the
AI Act has passed, the focus has shifted to the development of AI standards,
including the extent to which the EU AI Office will more formally engage with
the other AISIs.

In terms of the Paris meeting, a host of issues needs further articulation.
This includes developing standards for AI risk and for assessing the
effectiveness of measures to mitigate it. The rapid development of the capacity
of foundational AI models also creates new challenges for building scientific
consensus on AI risk. As noted, the interim AI report is a first step in this
direction. The approach is modeled on the work of the Intergovernmental Panel
on Climate Change (IPCC), which convenes experts to produce periodic
assessments of the risks from climate change. However, climate change and its
impacts are relatively slow moving compared to developments in AI models. Going
forward, it will be important to find a more iterative and perhaps less formal
approach to assessing AI risk that can keep pace and still inform AI safety in
a meaningful way.

Over the next year, governments and companies engaged in the Bletchley Park
process, as well as other international efforts on AI in the G7, the OECD, and
the U.N., will all be grappling with how to balance the need for AI regulation
with the importance of supporting innovation. A successful outcome from the
Paris Summit could showcase how a globally networked approach can deliver on AI
safety, support innovation, and remain nimble enough to respond to rapid
developments in the power of AI models.


ACKNOWLEDGEMENTS AND DISCLOSURES

Meta is a general, unrestricted donor to the Brookings Institution. The
findings, interpretations, and conclusions posted in this piece are solely
those of the authors and are not influenced by any donation.
