
About us
Our work

Overview
CAIS conducts research, field-building, and advocacy projects to reduce AI risk.
Research
We pursue impactful, technical AI safety research.
Field-Building Projects
CAIS builds infrastructure and new pathways into AI safety.
Compute Cluster
CAIS provides compute resources for AI/ML safety projects.
AI Risk
Resources

CAIS Blog
Deeper-dive examinations of relevant AI safety topics.
The AI Safety Newsletter
Regular briefings on the latest developments in AI safety, policy, and industry.
2023 Impact Report
See highlights and outcomes from CAIS projects in 2023.
Contact
Careers
Donate




Get insights on the latest developments in AI delivered to your inbox
The AI Safety Newsletter



REDUCING SOCIETAL-SCALE RISKS FROM AI





THE CENTER FOR AI SAFETY IS A RESEARCH AND FIELD-BUILDING NONPROFIT.


View our work

Our mission



The Center for AI Safety (CAIS — pronounced 'case') is a San Francisco-based
research and field-building nonprofit. We believe that artificial intelligence
(AI) has the potential to profoundly benefit the world, provided that we can
develop and use it safely. However, in contrast to the dramatic progress in AI,
many basic problems in AI safety have yet to be solved. Our mission is to reduce
societal-scale risks associated with AI by conducting safety research, building
the field of AI safety researchers, and advocating for safety standards.




FEATURED CAIS WORK


COMPUTE CLUSTER



Enabling ML safety research at scale

To support progress and innovation in AI safety, we offer researchers free
access to our compute cluster, which can run and train large-scale AI systems.

Learn More



PHILOSOPHY FELLOWSHIP



Tackling conceptual issues in AI safety

The CAIS Philosophy Fellowship is a seven-month research program that
investigates the societal implications and potential risks associated with
advanced AI.

Learn More



ML SAFETY COURSE



Reducing barriers to entry in ML safety

The ML Safety course offers a comprehensive introduction to ML safety, covering
topics such as anomaly detection, alignment, and risk engineering (a brief
illustration of the anomaly-detection idea follows below).

Learn More
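
One of those course topics, anomaly detection, asks how a deployed model can
flag inputs unlike anything in its training data. As a minimal sketch, not CAIS
course material, here is the maximum-softmax-probability baseline (Hendrycks &
Gimpel, 2017), which treats low classifier confidence as a sign that an input
may be out-of-distribution; the helper names and the 0.5 threshold are
illustrative assumptions.

import torch
import torch.nn.functional as F

def max_softmax_score(logits: torch.Tensor) -> torch.Tensor:
    # Higher score = the classifier is more confident the input
    # resembles its training distribution.
    return F.softmax(logits, dim=-1).max(dim=-1).values

def flag_anomalies(model: torch.nn.Module, x: torch.Tensor,
                   threshold: float = 0.5) -> torch.Tensor:
    # Returns a boolean mask; True marks inputs whose confidence falls
    # below the (illustrative) threshold, i.e. likely anomalies. In
    # practice the threshold is calibrated on held-out in-distribution data.
    with torch.no_grad():
        logits = model(x)
    return max_softmax_score(logits) < threshold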


See all work


Dan Hendrycks

Director, Center for AI Safety
PhD Computer Science, UC Berkeley


"PREVENTING EXTREME RISKS FROM AI REQUIRES MORE THAN JUST TECHNICAL WORK, SO
CAIS TAKES A MULTIDISCIPLINARY APPROACH WORKING ACROSS ACADEMIC DISCIPLINES,
PUBLIC AND PRIVATE ENTITIES, AND WITH THE GENERAL PUBLIC."




RISKS FROM AI


ARTIFICIAL INTELLIGENCE (AI) POSSESSES THE POTENTIAL TO BENEFIT AND ADVANCE
SOCIETY. LIKE ANY OTHER POWERFUL TECHNOLOGY, AI ALSO CARRIES INHERENT RISKS,
INCLUDING SOME THAT ARE POTENTIALLY CATASTROPHIC.

CURRENT AI SYSTEMS

Current systems can already pass the bar exam, write code, fold proteins, and
even explain humor.

AI SAFETY

As AI systems become more advanced and embedded in society, it becomes
increasingly important to address and mitigate these risks. By prioritizing the
development of safe and responsible AI practices, we can unlock the full
potential of this technology for the benefit of humanity.

AI risks overview




OUR RESEARCH




WE CONDUCT IMPACTFUL RESEARCH AIMED AT IMPROVING THE SAFETY OF AI SYSTEMS.

TECHNICAL RESEARCH

At the Center for AI Safety, our research exclusively focuses on mitigating
societal-scale risks posed by AI. As a technical research laboratory:

 * We create foundational benchmarks and methods which lay the groundwork for
   the scientific community to address these technical challenges.
 * We ensure our work is public and accessible. We publish in top ML conferences
   and always release our datasets and code.

CONCEPTUAL RESEARCH

In addition to our technical research, we also explore the less formalized
aspects of AI safety.

 * We pursue conceptual research that examines AI safety from a
   multidisciplinary perspective, incorporating insights from safety
   engineering, complex systems, international relations, philosophy, and other
   fields.
 * Through our conceptual research, we create frameworks that aid in
   understanding the current technical challenges and publish papers which
   provide insight into the societal risks posed by future AI systems.

CAIS research



LEARN MORE ABOUT CAIS


FREQUENTLY ASKED QUESTIONS



We have compiled a list of frequently asked questions to help you find the
answers you need quickly and easily.



What does CAIS do?

CAIS’ mission is to reduce societal-scale risks from AI. We do this through
research and field-building.

Where is CAIS located?

CAIS’ main offices are located in San Francisco, California.

What does CAIS mean by field-building?

By field-building, we mean expanding the research field of AI safety by
providing funding, research infrastructure, and educational resources. Our goal
is to create a thriving research ecosystem that will drive progress towards safe
AI. You can see examples of our projects on our field-building page. 

How can I support CAIS and get involved?

CAIS is always looking for value-driven, talented individuals to join our team.
You can also make a tax-deductible donation to CAIS to help us maintain our
independent focus on AI safety (see the Donate page).

How does CAIS choose which projects it works on?

Our work is driven by three main pillars: advancing safety research, building
the safety research community, and promoting safety standards. We understand
that technical work alone will not solve AI safety, so we prioritize having a
real-world positive impact. You can see more on our mission page.


Where can I learn more about the research CAIS is doing?

As a technical research laboratory, CAIS develops foundational benchmarks and
methods that concretize the problem and drive progress toward technical
solutions. You can see examples of our work on our research page.




SUBSCRIBE TO THE AI SAFETY NEWSLETTER

No technical background required. See past newsletters.

Want to help reduce risks from AI?
Donate to support our mission

CAIS IS AN AI SAFETY NON-PROFIT. OUR MISSION IS TO REDUCE SOCIETAL-SCALE RISKS
FROM ARTIFICIAL INTELLIGENCE.


Our Work
View All Work · Statement on AI Risk · Field Building · CAIS Research · Compute Cluster · Philosophy Fellowship · CAIS Blog

Our Mission
About Us · 2023 Impact Report · Frequently Asked Questions · Learn About AI Risk · CAIS Media Kit · Terms of Service · Privacy Policy

Get involved
Donate · Contact Us · Careers

General:
contact@safe.ai

Media:
media@safe.ai






© 2024 Center for AI Safety
Credits
Website by Osborn Design Works