futureoflife.org
2606:4700:20::681a:904  Public Scan

URL: https://futureoflife.org/
Submission: On February 14 via api from LU — Scanned from DE

Form analysis: 3 forms found in the DOM

GET https://futureoflife.org/

<form role="search" method="get" class="oxy-header-search_form" action="https://futureoflife.org/">
  <div class="oxy-header-container">
    <label>
      <span class="screen-reader-text">Search for:</span>
      <input type="search" class="oxy-header-search_search-field" placeholder="Search..." value="" name="s" title="Search for:">
    </label><button aria-label="Close search" type="button" class="oxy-header-search_toggle"><svg class="oxy-header-search_close-icon" id="close-header-search-141-30729-icon">
        <use xlink:href="#Lineariconsicon-cross"></use>
      </svg></button><input type="submit" class="search-submit" value="Search">
  </div>
</form>
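The search form above submits a single `s` parameter via GET to the site root, so an equivalent search URL can be constructed directly. A minimal sketch (the query term is illustrative):

```python
import urllib.parse

# Build the query string the search form would submit (input name="s").
query = urllib.parse.urlencode({"s": "AI safety"})
search_url = f"https://futureoflife.org/?{query}"
# search_url == "https://futureoflife.org/?s=AI+safety"
```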

POST https://futureoflife.org/wp-json/ws-form/v1/submit

<form action="https://futureoflife.org/wp-json/ws-form/v1/submit" class="wsf-form wsf-form-canvas" id="ws-form-1" data-id="17" data-instance-id="1" method="POST" novalidate="" data-wsf-rendered="">
  <div class="wsf-grid wsf-sections" id="wsf-1-sections-31" data-id="31">
    <fieldset class="wsf-extra-small-12 wsf-tile wsf-section" id="wsf-1-section-44" data-id="44">
      <div class="wsf-grid wsf-fields" id="wsf-1-fields-44" data-id="44">
        <div class="wsf-extra-small-8 wsf-tile wsf-field-wrapper" id="wsf-1-field-wrapper-259" data-id="259" data-type="email"><input type="email" id="wsf-1-field-259" name="field_259" value="" placeholder="Email address" class="wsf-field"
            aria-label="Email" data-init-validate-real-time="">
          <div id="wsf-1-invalid-feedback-259" class="wsf-invalid-feedback" aria-hidden="true">Please provide a valid email.</div>
        </div>
        <div class="wsf-extra-small-4 wsf-tile wsf-field-wrapper wsf-bottom" id="wsf-1-field-wrapper-260" data-id="260" data-type="submit"><button type="submit" id="wsf-1-field-260" name="field_260"
            class="wsf-button wsf-button-full wsf-button-primary" aria-label="Submit" data-init-validate-real-time="">Submit</button></div>
      </div>
    </fieldset>
  </div>
</form>

POST https://futureoflife.org/wp-json/ws-form/v1/submit

<form action="https://futureoflife.org/wp-json/ws-form/v1/submit" class="wsf-form wsf-form-canvas" id="ws-form-2" data-id="17" data-instance-id="2" method="POST" novalidate="" data-wsf-rendered="">
  <div class="wsf-grid wsf-sections" id="wsf-2-sections-31" data-id="31">
    <fieldset class="wsf-extra-small-12 wsf-tile wsf-section" id="wsf-2-section-44" data-id="44">
      <div class="wsf-grid wsf-fields" id="wsf-2-fields-44" data-id="44">
        <div class="wsf-extra-small-8 wsf-tile wsf-field-wrapper" id="wsf-2-field-wrapper-259" data-id="259" data-type="email"><input type="email" id="wsf-2-field-259" name="field_259" value="" placeholder="Email address" class="wsf-field"
            aria-label="Email" data-init-validate-real-time="">
          <div id="wsf-2-invalid-feedback-259" class="wsf-invalid-feedback" aria-hidden="true">Please provide a valid email.</div>
        </div>
        <div class="wsf-extra-small-4 wsf-tile wsf-field-wrapper wsf-bottom" id="wsf-2-field-wrapper-260" data-id="260" data-type="submit"><button type="submit" id="wsf-2-field-260" name="field_260"
            class="wsf-button wsf-button-full wsf-button-primary" aria-label="Submit" data-init-validate-real-time="">Submit</button></div>
      </div>
    </fieldset>
  </div>
</form>
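Both newsletter forms POST to the same WS Form REST endpoint with identical field names (`field_259` for the email input, `field_260` for the submit button), so a submission payload can be assembled from the markup alone. A sketch under the assumption that the endpoint accepts form-encoded fields; how WS Form identifies the form (`data-id="17"`) in the POST body is an assumption and should be verified against a captured real submission:

```python
import urllib.parse

ENDPOINT = "https://futureoflife.org/wp-json/ws-form/v1/submit"

def build_submission(email: str) -> dict:
    """Assemble the fields visible in the rendered form."""
    return {
        "form_id": "17",        # hypothetical parameter name for data-id="17"
        "field_259": email,     # email input, as named in the markup
        "field_260": "Submit",  # submit button value
    }

payload = build_submission("user@example.com")
body = urllib.parse.urlencode(payload)
# `body` would then be POSTed to ENDPOINT as
# application/x-www-form-urlencoded.
```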

Text Content

 * Our mission
 * Cause areas
   Cause area overview
   
   
   Artificial Intelligence
   
   Biotechnology
   
   Nuclear Weapons
 * Our work
   Our work overview
   
   
   Policy
   
   Futures
   
   Outreach
   
   Grantmaking
   
   FEATURED PROJECTS
   
   UK AI Safety Summit
   Policy
   Strengthening the European AI Act
   Policy
   Imagine A World Podcast
   Futures
   Artificial Escalation
   Futures
   Our content
   Articles, Podcasts, Newsletters, Resources, and more.
 * About us
   About us overview
   
   
   Our people
   
   Careers
   
   Donate
   
   Finances
   
   FAQs
   
   Contact us
 * Take action



Our mission


STEERING TRANSFORMATIVE TECHNOLOGY TOWARDS BENEFITTING LIFE AND AWAY FROM
EXTREME LARGE-SCALE RISKS.


We believe that the way powerful technology is developed and used will be the
most important factor in determining the prospects for the future of life. This
is why we have made it our mission to ensure that technology continues to
improve those prospects.
Our mission
Learn more



OUR MISSION


ENSURING THAT OUR TECHNOLOGY REMAINS BENEFICIAL FOR LIFE

Our mission is to steer transformative technologies away from extreme,
large-scale risks and towards benefiting life.
Read more

How certain technologies are developed and used has far-reaching consequences
for all life on earth.

If properly managed, these technologies could change the world in a way that
makes life substantially better, both for those alive today and those who will
one day be born. They could be used to treat and eradicate diseases, strengthen
democratic processes, mitigate - or even halt - climate change and restore
biodiversity.

If irresponsibly managed, they could inflict serious harms on humanity and other
animal species. In the most extreme cases, they could bring about the
fragmentation or collapse of societies, and even push us to the brink of
extinction.

The Future of Life Institute works to reduce the likelihood of these worst-case
outcomes, and to help ensure that transformative technologies are used to the
benefit of life.
Our mission



CAUSE AREAS


THE RISKS WE FOCUS ON

We are currently concerned by three major risks. They all hinge on the
development, use and governance of transformative technologies. We focus our
efforts on guiding the impacts of these technologies.


ARTIFICIAL INTELLIGENCE

From recommender algorithms to chatbots to self-driving cars, AI is changing our
lives. As the impact of this technology grows, so will the risks.
Artificial Intelligence



BIOTECHNOLOGY

From the accidental release of engineered pathogens to the backfiring of a
gene-editing experiment, the dangers from biotechnology are too great for us to
proceed blindly.
Biotechnology



NUCLEAR WEAPONS

Almost eighty years after nuclear weapons were introduced, the risks they pose
are as high as ever - and new research reveals that their impacts are even
worse than previously reckoned.
Nuclear Weapons

UAV Kargu autonomous drones at the campus of OSTIM Technopark in Ankara, Turkey
- June 2020.


OUR WORK


HOW WE ARE ADDRESSING THESE ISSUES

There are many potential levers of change for steering the development and use
of transformative technologies. We target a range of these levers to increase
our chances of success.


POLICY

We perform policy advocacy in the United States, the European Union, and the
United Nations.
Our Policy work


OUTREACH

We produce educational materials aimed at informing public discourse, as well as
encouraging people to get involved.
Our Outreach work


GRANTMAKING

We provide grants to individuals and organisations working on projects that
further our mission.
Our Grant Programs


EVENTS

We convene leaders of the relevant fields to discuss ways of ensuring the safe
development and use of powerful technologies.
Our Events


FEATURED PROJECTS


WHAT WE'RE WORKING ON

Read about some of our current featured projects:


UK AI SAFETY SUMMIT

On 1-2 November 2023, the United Kingdom convened the first-ever global
government meeting focussed on AI safety. In the run-up to the summit, FLI
produced and published a document outlining key recommendations.
Policy


STRENGTHENING THE EUROPEAN AI ACT

Our key recommendations include broadening the Act's scope to regulate
general-purpose systems, and extending the definition of prohibited
manipulation to cover any type of manipulative technique as well as
manipulation that causes societal harm.
Policy


EDUCATING ABOUT LETHAL AUTONOMOUS WEAPONS

Military AI applications are rapidly expanding. We develop educational materials
about how certain narrow classes of AI-powered weapons can harm national
security and destabilize civilization, notably weapons where kill decisions are
fully delegated to algorithms.
Outreach, Policy


ARTIFICIAL ESCALATION

Our fictional film depicts a world where artificial intelligence ('AI') is
integrated into nuclear command, control and communications systems ('NC3') with
terrifying results.
Futures


GLOBAL AI GOVERNANCE AT THE UN

Our involvement with the UN's work spans several years and initiatives,
including the Roadmap for Digital Cooperation and the Global Digital Compact
(GDC).
Policy


WORLDBUILDING COMPETITION

The Future of Life Institute accepted entries from teams across the globe to
compete for a prize purse of up to $100,000 by designing visions of a
plausible, aspirational future that includes strong artificial intelligence.
Futures, Outreach


FUTURE OF LIFE AWARD

Every year, the Future of Life Award is given to one or more unsung heroes who
have made a significant contribution to preserving the future of life.
Futures, Outreach


MITIGATING THE RISKS OF AI INTEGRATION IN NUCLEAR LAUNCH

Avoiding nuclear war is in the national security interest of all nations. We
pursue a range of initiatives to reduce this risk. Our current focus is on
mitigating the emerging risk of AI integration into nuclear command, control and
communication.
Policy
View all projects



NEWSLETTER


REGULAR UPDATES ABOUT THE TECHNOLOGIES SHAPING OUR WORLD

Every month, we bring 41,000+ subscribers the latest news on how emerging
technologies are transforming our world. It includes a summary of major
developments in our cause areas, and key updates on the work we do. Subscribe to
our newsletter to receive these highlights at the end of each month.


FUTURE OF LIFE INSTITUTE NEWSLETTER: THE YEAR OF FAKE

Deepfakes are dominating headlines, with much more disruption expected; plus
the Doomsday Clock's 2024 setting, AI governance updates, and more.
Maggie Munro
February 2, 2024

FUTURE OF LIFE INSTITUTE NEWSLETTER: WRAPPING UP OUR BIGGEST YEAR YET

A provisional agreement is reached on the EU AI Act, highlights from the past
year, and more.
Maggie Munro
December 22, 2023

FUTURE OF LIFE INSTITUTE NEWSLETTER: SAVE THE EU AI ACT 🇪🇺

Defending the EU AI Act against Big Tech lobbying, the 2023 Future of Life Award
winners, our new partnership on hardware-backed AI governance, and more.
Maggie Munro
December 4, 2023
Read previous editions

Our content


LATEST POSTS

The most recent posts we have published:

GRADUAL AI DISEMPOWERMENT

Could an AI takeover happen gradually?
February 1, 2024

EXPLORATION OF SECURE HARDWARE SOLUTIONS FOR SAFE AI DEPLOYMENT

This collaboration between the Future of Life Institute and Mithril Security
explores hardware-backed AI governance tools for transparency, traceability, and
confidentiality.
November 30, 2023

PROTECT THE EU AI ACT

A last-ditch assault on the EU AI Act threatens to jeopardise one of the
legislation's most important functions: preventing our most powerful AI models
from causing widespread harm to society.
November 22, 2023

MILES APART: COMPARING KEY AI ACT PROPOSALS

Our analysis shows that the recent non-paper drafted by Italy, France, and
Germany largely omits provisions on foundation models or general-purpose AI
systems, and offers much less oversight and enforcement than the existing
alternatives.
November 21, 2023
View all posts




POLICY PAPERS

The most recent policy papers we have published:

FLI AI LIABILITY DIRECTIVE: EXECUTIVE SUMMARY

November 2023
View file

FLI AI LIABILITY DIRECTIVE: FULL VERSION

November 2023
View file

ARTIFICIAL INTELLIGENCE AND NUCLEAR WEAPONS: PROBLEM ANALYSIS AND US POLICY
RECOMMENDATIONS

November 2023
View file

FLI GOVERNANCE SCORECARD AND SAFETY STANDARDS POLICY (SSP)

October 2023
View file
View all policy papers




FUTURE OF LIFE INSTITUTE PODCAST

The most recent podcasts we have broadcast:
January 6, 2024

FRANK SAUER ON AUTONOMOUS WEAPON SYSTEMS

January 6, 2024

DARREN MCKEE ON UNCONTROLLABLE SUPERINTELLIGENCE

January 6, 2024

MARK BRAKEL ON THE UK AI SUMMIT AND THE FUTURE OF AI POLICY

January 6, 2024

DAN HENDRYCKS ON CATASTROPHIC AI RISKS

View all episodes



MEDIA MENTIONS

Six months later, our call to slow AI development is more crucial than ever
September 22, 2023
AI-focused tech firms locked in ‘race to the bottom’, warns MIT professor
September 21, 2023
AI Expert Max Tegmark Warns That Humanity Is Failing the New Technology’s
Challenge
August 18, 2023
AI explosion merits regulation to rein in threats, experts say
July 13, 2023
The dangers of artificial intelligence
March 30, 2023
The physicist Max Tegmark works to ensure that life has a future
October 20, 2022
When it Comes to Human Extinction, We Could Be Our Worst Enemy and Best Hope
May 27, 2022
The Rise of A.I. Fighter Pilots
January 17, 2022
The Third Revolution in Warfare
September 11, 2021
The Fight to Define When AI Is ‘High Risk’
September 1, 2021


SIGN UP FOR THE FUTURE OF LIFE INSTITUTE NEWSLETTER

Join 40,000+ others receiving periodic updates on our work and cause areas.
View previous editions


CAUSE AREAS

 * Artificial Intelligence
 * Biotechnology
 * Nuclear Weapons

OUR WORK

 * Policy
 * Outreach
 * Grantmaking
 * Futures

OUR CONTENT

 * Articles
 * Podcasts
 * Newsletters
 * Open letters

ABOUT US

 * Our people
 * Careers
 * Donate
 * Finances
 * FAQs
 * Contact us


 * Privacy Policy
 * Accessibility
 * Report a broken link
 * Internal



© 2024 Future of Life Institute. All rights reserved.