THE WHITE HOUSE ALREADY KNOWS HOW TO MAKE AI SAFER

The US already has a road map for the deployment of AI systems. Biden’s promised
executive order just needs to put these guidelines into practice.

Suresh Venkatasubramanian
Ideas
Jul 25, 2023 12:12 PM

Photo-illustration: WIRED Staff; Getty Images

Ever since the White House released the Blueprint for an AI Bill of Rights last
fall (a document that I helped develop during my time at the Office of Science
and Technology Policy), there’s been a steady drip of announcements from the
executive branch, including requests for information, strategic plan drafts, and
regulatory guidance. The latest entry in this policy pageant, announced last
week, is that the White House got the CEOs of the most prominent AI-focused
companies to voluntarily commit to being a little more careful about checking
the systems they roll out.

There are a few sound practices within these commitments: We should carefully
test AI systems for potential harms before deploying them; the results should be
evaluated independently; and companies should focus on designing AI systems that
are safe to begin with, rather than bolting safety features on after the fact.
The problem is that these commitments are vague and voluntary. “Don’t be evil,”
anyone?

WIRED OPINION
ABOUT

Suresh Venkatasubramanian is the director of the Center for Tech Responsibility
at Brown University and a professor of computer science and data science. He
formerly served as the assistant director for science and justice within the
Office of Science and Technology Policy in the Biden Administration, where he
helped co-author the Blueprint for an AI Bill of Rights.

Legislation is needed to ensure that private companies live up to their
commitments. But we should not forget the federal market’s outsize influence on
AI practices. As a large employer and user of AI technology, a major customer
for AI systems, a regulator, and a source of funding for so many state-level
actions, the federal government can make a real difference by changing how it
acts, even in the absence of legislation.



If the government actually wants to make AI safer, it must issue the executive
order promised at last week’s meeting, alongside specific guidance that the
Office of Management and Budget—the most powerful office you’ve never heard
of—will give to agencies. We don’t need innumerable hearings, forums, requests
for information, or task forces to figure out what this executive order should
say. Between the Blueprint and the AI risk management framework developed by the
National Institute of Standards and Technology (NIST), we already have a road
map for how the government should oversee the deployment of AI systems in order
to maximize their ability to help people and minimize the likelihood that they
cause harm.

The Blueprint and NIST frameworks are detailed and extensive and together add up
to more than 130 pages. They lay out important practices for every stage of the
process of developing these systems: how to involve all stakeholders (including
the public and its representatives) in the design process; how to evaluate
whether the system as designed will serve the needs of all—and whether it should
be deployed at all; and how to test and independently evaluate for system
safety, effectiveness, and bias mitigation prior to deployment. These frameworks
also outline how to continually monitor systems after deployment to ensure that
their behavior has not deteriorated. They stipulate that entities using AI
systems must offer full disclosure of where they are being used and clear and
intelligible explanations of why a system produces a particular prediction,
outcome, or recommendation for an individual. The guidelines also describe
mechanisms for individuals to appeal and request recourse in a timely manner
when systems fail or produce unfavorable outcomes, and what an overarching
governance structure for these systems should look like. All of these
recommendations are backed by concrete implementation guidelines and reflect
over a decade of research and development in responsible AI.
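
As one illustration of what the frameworks' continuous-monitoring guidance could
look like in practice, here is a minimal sketch in Python. Everything in it,
from the class name to the tolerance and window-size values, is a hypothetical
reading of that guidance, not code drawn from the Blueprint or from NIST.

    # monitor.py -- a hypothetical sketch; thresholds and window size are assumptions.
    from collections import deque

    class DeploymentMonitor:
        """Track a rolling error rate after deployment and flag drift past a
        tolerance above the error rate measured in pre-deployment testing."""

        def __init__(self, baseline_error: float, tolerance: float = 0.02,
                     window: int = 1000):
            self.baseline_error = baseline_error
            self.tolerance = tolerance
            self.outcomes = deque(maxlen=window)  # 1 = error, 0 = correct

        def record(self, was_error: bool) -> None:
            self.outcomes.append(1 if was_error else 0)

        def degraded(self) -> bool:
            # Wait for a full window so early noise doesn't trigger alerts.
            if len(self.outcomes) < self.outcomes.maxlen:
                return False
            current = sum(self.outcomes) / len(self.outcomes)
            return current > self.baseline_error + self.tolerance

An agency running a monitor like this would pair its alerts with the disclosure,
appeal, and recourse mechanisms the frameworks also call for.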



An executive order can enshrine these best practices in at least four ways.
First, it could require all government agencies developing, using, or deploying
AI systems that affect people’s lives and livelihoods to ensure that these
systems comply with best practices. For example, the federal government might
make use of AI to determine eligibility for public benefits and identify
irregularities that might trigger an investigation. A recent study showed that
IRS auditing algorithms might be implicated in disproportionately high audit
rates for Black taxpayers. If the IRS were required to comply with these
guidelines, it would have to address this issue promptly.
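
To make concrete the kind of check such compliance would entail, here is a
minimal sketch in Python. The group labels, audit counts, and the 1.25x flagging
threshold are all invented for illustration; they are not figures from the study
or from any federal guideline.

    # disparity_check.py -- illustrative only; all numbers below are made up.
    # Computes per-group audit rates and flags any group whose rate exceeds
    # the overall rate by more than a chosen threshold.
    from dataclasses import dataclass

    @dataclass
    class GroupStats:
        name: str
        audited: int   # taxpayers in the group who were audited
        total: int     # taxpayers in the group overall

    def audit_rate(g: GroupStats) -> float:
        return g.audited / g.total

    def flag_disparities(groups: list[GroupStats],
                         threshold: float = 1.25) -> list[str]:
        """Return groups audited at more than `threshold` times the overall rate."""
        overall = sum(g.audited for g in groups) / sum(g.total for g in groups)
        return [g.name for g in groups if audit_rate(g) > threshold * overall]

    # Hypothetical inputs: two equal-sized groups with unequal audit counts.
    groups = [
        GroupStats("group_a", audited=780, total=100_000),  # 0.78% audit rate
        GroupStats("group_b", audited=260, total=100_000),  # 0.26% audit rate
    ]
    print(flag_disparities(groups))  # ['group_a'] -- overall rate is 0.52%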


Second, it could instruct any federal agency procuring an AI system that has the
potential to “meaningfully impact [our] rights, opportunities, or access to
critical resources or services” to require that the system comply with these
practices and that vendors provide evidence of this compliance. This recognizes
the federal government’s power as a customer to shape business practices. After
all, it is the biggest employer in the country and could use its buying power to
dictate best practices for the algorithms that are used to, for instance, screen
and select candidates for jobs.



Third, the executive order could demand that anyone taking federal dollars
(including state and local entities) ensure that the AI systems they use comply
with these practices. This recognizes the important role of federal investment
in states and localities. For example, AI has been implicated in many components
of the criminal justice system, including predictive policing, surveillance,
pre-trial incarceration, sentencing, and parole. Although most law enforcement
practices are local, the Department of Justice offers federal grants to state
and local law enforcement and could attach conditions to these funds stipulating
how to use the technology.



Finally, this executive order could direct agencies with regulatory authority to
update and expand their rulemaking to processes within their jurisdiction that
include AI. Some initial efforts to regulate the use of AI in medical devices,
hiring algorithms, and credit scoring are already underway, and these
initiatives could be further expanded. Worker surveillance and property
valuation systems are just two examples of areas that would benefit from this
kind of regulatory action.

Of course, the testing and monitoring regime for AI systems that I’ve outlined
here is likely to provoke a range of concerns. Some may argue, for example, that
other countries will overtake us if we slow down to implement such guardrails.
But other countries are busy passing their own laws that place extensive
restrictions on AI systems, and any American businesses seeking to operate in
these countries will have to comply with their rules. The EU is about to pass an
expansive AI Act that includes many of the provisions I described above, and
even China is placing limits on commercially deployed AI systems that go far
beyond what we are currently willing to consider.



Others may express concern that this expansive set of requirements might be hard
for a small business to comply with. This could be addressed by linking the
requirements to the degree of impact: A piece of software that can affect the
livelihoods of millions should be thoroughly vetted, regardless of how big or
how small the developer is. An AI system that individuals use for recreational
purposes shouldn’t be subject to the same strictures and restrictions.
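
One hedged sketch of how such impact-linked tiers might be expressed, with
entirely invented tier names and cutoffs:

    # risk_tiers.py -- the tiers and cutoffs are invented for illustration.
    def review_tier(people_affected: int, affects_livelihood: bool) -> str:
        """Map an AI system's degree of impact to an oversight level."""
        if affects_livelihood and people_affected > 1_000_000:
            return "independent pre-deployment audit plus continuous monitoring"
        if affects_livelihood:
            return "independent pre-deployment evaluation"
        return "developer self-certification"  # e.g., recreational tools

    print(review_tier(5_000_000, True))  # a benefits-eligibility system
    print(review_tier(10_000, False))    # a game recommendation engine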

There are also likely to be concerns about whether these requirements are
practical. Here again, it’s important not to underestimate the federal
government’s power as a market maker. An executive order that calls for testing
and validation frameworks will provide incentives for businesses that want to
translate best practices into viable commercial testing regimes. The responsible
AI sector is already filling with firms that provide algorithmic auditing and
evaluation services, industry consortia that issue detailed guidelines vendors
are expected to comply with, and large consulting firms that offer guidance to
their clients. And nonprofit, independent entities like Data and Society
(disclaimer: I sit on their board) have set up entire labs to develop tools that
assess how AI systems will affect different populations.

We’ve done the research, we’ve built the systems, and we’ve identified the
harms. There are established ways to make sure that the technology we build and
deploy can benefit all of us while reducing harms for those who are already
buffeted by a deeply unequal society. The time for studying is over—now the
White House needs to issue an executive order and take action.

--------------------------------------------------------------------------------

WIRED Opinion publishes articles by outside contributors representing a wide
range of viewpoints. Read more opinions here. Submit an op-ed at
ideas@wired.com.








Topics: government, Regulation, Tech Policy and Law, artificial intelligence