www.safe.ai
34.249.200.254
Public Scan
Submitted URL: https://safeml.org/
Effective URL: https://www.safe.ai/
Submission: On July 04 via automatic, source certstream-suspicious — Scanned from DE
Form analysis
6 forms found in the DOM

<form>
<fieldset>
<legend class="visuallyhidden">Consent Selection</legend>
<div id="CybotCookiebotDialogBodyFieldsetInnerContainer">
<div class="CybotCookiebotDialogBodyLevelButtonWrapper"><label class="CybotCookiebotDialogBodyLevelButtonLabel" for="CybotCookiebotDialogBodyLevelButtonNecessary"><strong class="CybotCookiebotDialogBodyLevelButtonDescription">Necessary
</strong></label>
<div class="CybotCookiebotDialogBodyLevelButtonSliderWrapper CybotCookiebotDialogBodyLevelButtonSliderWrapperDisabled"><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonNecessary"
class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelButtonDisabled" disabled="disabled" checked="checked"> <span class="CybotCookiebotDialogBodyLevelButtonSlider"></span></div>
</div>
<div class="CybotCookiebotDialogBodyLevelButtonWrapper"><label class="CybotCookiebotDialogBodyLevelButtonLabel" for="CybotCookiebotDialogBodyLevelButtonPreferences"><strong class="CybotCookiebotDialogBodyLevelButtonDescription">Preferences
</strong></label>
<div class="CybotCookiebotDialogBodyLevelButtonSliderWrapper"><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonPreferences" class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelConsentCheckbox"
data-target="CybotCookiebotDialogBodyLevelButtonPreferencesInline" checked="checked" tabindex="0"> <span class="CybotCookiebotDialogBodyLevelButtonSlider"></span></div>
</div>
<div class="CybotCookiebotDialogBodyLevelButtonWrapper"><label class="CybotCookiebotDialogBodyLevelButtonLabel" for="CybotCookiebotDialogBodyLevelButtonStatistics"><strong class="CybotCookiebotDialogBodyLevelButtonDescription">Statistics
</strong></label>
<div class="CybotCookiebotDialogBodyLevelButtonSliderWrapper"><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonStatistics" class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelConsentCheckbox"
data-target="CybotCookiebotDialogBodyLevelButtonStatisticsInline" checked="checked" tabindex="0"> <span class="CybotCookiebotDialogBodyLevelButtonSlider"></span></div>
</div>
<div class="CybotCookiebotDialogBodyLevelButtonWrapper"><label class="CybotCookiebotDialogBodyLevelButtonLabel" for="CybotCookiebotDialogBodyLevelButtonMarketing"><strong class="CybotCookiebotDialogBodyLevelButtonDescription">Marketing
</strong></label>
<div class="CybotCookiebotDialogBodyLevelButtonSliderWrapper"><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonMarketing" class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelConsentCheckbox"
data-target="CybotCookiebotDialogBodyLevelButtonMarketingInline" checked="checked" tabindex="0"> <span class="CybotCookiebotDialogBodyLevelButtonSlider"></span></div>
</div>
</div>
</fieldset>
</form>
<form><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonNecessaryInline" class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelButtonDisabled" disabled="disabled" checked="checked"> <span
class="CybotCookiebotDialogBodyLevelButtonSlider"></span></form>
<form><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonPreferencesInline" class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelConsentCheckbox" data-target="CybotCookiebotDialogBodyLevelButtonPreferences"
checked="checked" tabindex="0"> <span class="CybotCookiebotDialogBodyLevelButtonSlider"></span></form>
<form><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonStatisticsInline" class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelConsentCheckbox" data-target="CybotCookiebotDialogBodyLevelButtonStatistics"
checked="checked" tabindex="0"> <span class="CybotCookiebotDialogBodyLevelButtonSlider"></span></form>
<form><input type="checkbox" id="CybotCookiebotDialogBodyLevelButtonMarketingInline" class="CybotCookiebotDialogBodyLevelButton CybotCookiebotDialogBodyLevelConsentCheckbox" data-target="CybotCookiebotDialogBodyLevelButtonMarketing" checked="checked"
tabindex="0"> <span class="CybotCookiebotDialogBodyLevelButtonSlider"></span></form>
<form class="CybotCookiebotDialogBodyLevelButtonSliderWrapper"><input type="checkbox" id="CybotCookiebotDialogBodyContentCheckboxPersonalInformation" class="CybotCookiebotDialogBodyLevelButton"> <span
class="CybotCookiebotDialogBodyLevelButtonSlider"></span></form>
Text Content
Powered by Cookiebot * Consent * Details * About

THIS WEBSITE USES COOKIES

We use cookies to personalise content and ads, to provide social media features and to analyse our traffic. We also share information about your use of our site with our social media, advertising and analytics partners who may combine it with other information that you’ve provided to them or that they’ve collected from your use of their services.

Consent Selection: Necessary, Preferences, Statistics, Marketing. Show details

* Necessary (15): Necessary cookies help make a website usable by enabling basic functions like page navigation and access to secure areas of the website. The website cannot function properly without these cookies.
  * Cookiebot (1)
    * 1.gif: Used to count the number of sessions to the website, necessary for optimizing CMP product delivery. Expiry: Session. Type: Pixel Tracker
  * LinkedIn (2)
    * bcookie: Used in order to detect spam and improve the website's security. Expiry: 1 year. Type: HTTP Cookie
    * li_gc: Stores the user's cookie consent state for the current domain. Expiry: 180 days. Type: HTTP Cookie
  * newsletter.safe.ai (1)
    * test_cookie: Pending. Expiry: 1 day. Type: HTTP Cookie
  * newsletter.safe.ai, consent.cookiebot.com (2)
    * CookieConsent [x2]: Stores the user's cookie consent state for the current domain. Expiry: 1 year. Type: HTTP Cookie
  * newsletter.safe.ai, substack.com (8)
    * __cf_bm [x2]: This cookie is used to distinguish between humans and bots. This is beneficial for the website, in order to make valid reports on the use of their website. Expiry: 1 day. Type: HTTP Cookie
    * AWSALBTG [x2]: Registers which server-cluster is serving the visitor. This is used in context with load balancing, in order to optimize user experience. Expiry: 7 days. Type: HTTP Cookie
    * AWSALBTGCORS [x2]: Registers which server-cluster is serving the visitor. This is used in context with load balancing, in order to optimize user experience. Expiry: 7 days. Type: HTTP Cookie
    * visit_id [x2]: Preserves users' states across page requests. Expiry: 1 day. Type: HTTP Cookie
  * www.safe.ai (1)
    * wf_auth_page: This cookie is necessary for the login function on the website. Expiry: Session. Type: HTTP Cookie
* Preferences (1): Preference cookies enable a website to remember information that changes the way the website behaves or looks, like your preferred language or the region that you are in.
  * LinkedIn (1)
    * lidc: Registers which server-cluster is serving the visitor. This is used in context with load balancing, in order to optimize user experience. Expiry: 1 day. Type: HTTP Cookie
* Statistics (8): Statistic cookies help website owners to understand how visitors interact with websites by collecting and reporting information anonymously.
  * newsletter.safe.ai, substack.com (2)
    * ab_testing_id [x2]: This cookie is used by the website’s operator in context with multi-variate testing. This is a tool used to combine or change content on the website. This allows the website to find the best variation/edition of the site. Expiry: 1 year. Type: HTTP Cookie
  * substackcdn.com (3)
    * ajs_anonymous_id [x2]: This cookie is used to identify a specific visitor - this information is used to identify the number of specific visitors on a website. Expiry: 1 year. Type: HTTP Cookie
    * substack_ref_url: Collects data such as visitors' IP address, geographical location and website navigation - this information is used for internal optimization and statistics for the website's operator. Expiry: Persistent. Type: HTML Local Storage
  * www.datadoghq-browser-agent.com (3)
    * _dd_s: Registers the website's speed and performance. This function can be used in context with statistics and load-balancing. Expiry: 1 day. Type: HTTP Cookie
    * dd_cookie_test_# [x2]: Registers data on visitors' website behaviour. This is used for internal analysis and website optimization. Expiry: 1 day. Type: HTTP Cookie
* Marketing (8): Marketing cookies are used to track visitors across websites. The intention is to display ads that are relevant and engaging for the individual user and thereby more valuable for publishers and third party advertisers.
  * Google (7)
    * IDE: Pending. Expiry: 400 days. Type: HTTP Cookie
    * _ga [x2]: Used to send data to Google Analytics about the visitor's device and behavior. Tracks the visitor across devices and marketing channels. Expiry: 2 years. Type: HTTP Cookie
    * _ga_# [x2]: Used to send data to Google Analytics about the visitor's device and behavior. Tracks the visitor across devices and marketing channels. Expiry: 2 years. Type: HTTP Cookie
    * _gcl_au [x2]: Used by Google AdSense for experimenting with advertisement efficiency across websites using their services. Expiry: 3 months. Type: HTTP Cookie
  * newsletter.safe.ai (1)
    * pagead/1p-user-list/#: Pending. Expiry: Session. Type: Pixel Tracker
* Unclassified (6): Unclassified cookies are cookies that we are in the process of classifying, together with the providers of individual cookies.
  * newsletter.safe.ai, substack.com (2)
    * ab_experiment_sampled [x2]: Pending. Expiry: 1 year. Type: HTTP Cookie
  * substack.com (2)
    * cookie_storage_key: Pending. Expiry: 3 months. Type: HTTP Cookie
    * experiment_get_app_bottom_sheet_with_backoff: Pending. Expiry: 4 months. Type: HTTP Cookie
  * substackcdn.com (2)
    * preferred_language: Pending. Expiry: Session. Type: HTTP Cookie
    * substack_ref: Pending. Expiry: Persistent. Type: HTML Local Storage

Cookie declaration last updated on 30.06.24 by Cookiebot

Cookies are small text files that can be used by websites to make a user's experience more efficient. The law states that we can store cookies on your device if they are strictly necessary for the operation of this site. For all other types of cookies we need your permission. This site uses different types of cookies. Some cookies are placed by third party services that appear on our pages. You can at any time change or withdraw your consent from the Cookie Declaration on our website. Learn more about who we are, how you can contact us and how we process personal data in our Privacy Policy. Please state your consent ID and date when you contact us regarding your consent.

Do not sell or share my personal information | Deny | Allow selection | Customize | Allow all

Powered by Cookiebot by Usercentrics

About us

Our work
* Overview: CAIS conducts research, field-building, and advocacy projects to reduce AI risk.
* Research: We pursue impactful, technical AI safety research.
* Field-Building Projects: CAIS builds infrastructure and new pathways into AI safety.
* Compute Cluster: CAIS provides compute resources for AI/ML safety projects.
AI Risk

Resources
* CAIS Blog: Deeper-dive examinations of relevant AI safety topics.
* The AI Safety Newsletter: Regular briefings on the latest developments in AI safety, policy, and industry.
* 2023 Impact Report: See highlights and outcomes from CAIS projects in 2023.

Contact | Careers | Donate

About | Our Work | Work Overview | CAIS Research | Field-Building Projects | Compute Cluster | Resources | AI Safety Newsletter | CAIS Blog | 2023 Impact Report | Frequently Asked Questions | AI Risk | Contact | Careers | Donate

Get insights on the latest developments in AI delivered to your inbox: The AI Safety Newsletter

REDUCING SOCIETAL-SCALE RISKS FROM AI

THE CENTER FOR AI SAFETY IS A RESEARCH AND FIELD-BUILDING NONPROFIT.
View our work

Our mission

The Center for AI Safety (CAIS — pronounced 'case') is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence (AI) has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.

FEATURED CAIS WORK: AI SAFETY FIELD-BUILDING

COMPUTE CLUSTER - Enabling ML safety research at scale
To support progress and innovation in AI safety, we offer researchers free access to our compute cluster, which can run and train large-scale AI systems. Learn More

PHILOSOPHY FELLOWSHIP - Tackling conceptual issues in AI safety
The CAIS Philosophy Fellowship is a seven-month research program that investigates the societal implications and potential risks associated with advanced AI. Learn More

ML SAFETY COURSE - Reducing barriers to entry in ML safety
The ML Safety course offers a comprehensive introduction to ML safety, covering topics such as anomaly detection, alignment, and risk engineering. Learn More

See all work

Dan Hendrycks - Director, Center for AI Safety; PhD Computer Science, UC Berkeley
"PREVENTING EXTREME RISKS FROM AI REQUIRES MORE THAN JUST TECHNICAL WORK, SO CAIS TAKES A MULTIDISCIPLINARY APPROACH WORKING ACROSS ACADEMIC DISCIPLINES, PUBLIC AND PRIVATE ENTITIES, AND WITH THE GENERAL PUBLIC."

RISKS FROM AI

ARTIFICIAL INTELLIGENCE (AI) POSSESSES THE POTENTIAL TO BENEFIT AND ADVANCE SOCIETY. LIKE ANY OTHER POWERFUL TECHNOLOGY, AI ALSO CARRIES INHERENT RISKS, INCLUDING SOME WHICH ARE POTENTIALLY CATASTROPHIC.

CURRENT AI SYSTEMS
Current systems can already pass the bar exam, write code, fold proteins, and even explain humor. Like any other powerful technology, AI also carries inherent risks, including some which are potentially catastrophic.

AI SAFETY
As AI systems become more advanced and embedded in society, it becomes increasingly important to address and mitigate these risks. By prioritizing the development of safe and responsible AI practices, we can unlock the full potential of this technology for the benefit of humanity.

AI risks overview

OUR RESEARCH

WE CONDUCT IMPACTFUL RESEARCH AIMED AT IMPROVING THE SAFETY OF AI SYSTEMS.

TECHNICAL RESEARCH
At the Center for AI Safety, our research focuses exclusively on mitigating societal-scale risks posed by AI. As a technical research laboratory:
* We create foundational benchmarks and methods which lay the groundwork for the scientific community to address these technical challenges.
* We ensure our work is public and accessible. We publish in top ML conferences and always release our datasets and code.

CONCEPTUAL RESEARCH
In addition to our technical research, we also explore the less formalized aspects of AI safety.
* We pursue conceptual research that examines AI safety from a multidisciplinary perspective, incorporating insights from safety engineering, complex systems, international relations, philosophy, and other fields.
* Through our conceptual research, we create frameworks that aid in understanding the current technical challenges and publish papers which provide insight into the societal risks posed by future AI systems.

CAIS research

LEARN MORE ABOUT CAIS

FREQUENTLY ASKED QUESTIONS
We have compiled a list of frequently asked questions to help you find the answers you need quickly and easily.

What does CAIS do?
CAIS’ mission is to reduce societal-scale risks from AI. We do this through research and field-building.

Where is CAIS located?
CAIS’ main offices are located in San Francisco, California.

What does CAIS mean by field-building?
By field-building, we mean expanding the research field of AI safety by providing funding, research infrastructure, and educational resources. Our goal is to create a thriving research ecosystem that will drive progress towards safe AI. You can see examples of our projects on our field-building page.

How can I support CAIS and get involved?
CAIS is always looking for value-driven, talented individuals to join our team. You can also make a tax-deductible donation to CAIS to help us maintain our independent focus on AI safety.

How does CAIS choose which projects it works on?
Our work is driven by three main pillars: advancing safety research, building the safety research community, and promoting safety standards. We understand that technical work alone will not solve AI safety, and we prioritize having a real-world positive impact. You can see more on our mission page.

Where can I learn more about the research CAIS is doing?
As a technical research laboratory, CAIS develops foundational benchmarks and methods which concretize the problem and mark progress towards technical solutions. You can see examples of our work on our research page.

SUBSCRIBE TO THE AI SAFETY NEWSLETTER
No technical background required. See past newsletters.

Want to help reduce risks from AI? Donate to support our mission.

CAIS IS AN AI SAFETY NON-PROFIT. OUR MISSION IS TO REDUCE SOCIETAL-SCALE RISKS FROM ARTIFICIAL INTELLIGENCE.

Our Work: View All Work | Statement on AI Risk | Field Building | CAIS Research | Compute Cluster | Philosophy Fellowship | CAIS Blog
Our Mission: About Us | 2023 Impact Report | Frequently Asked Questions | Learn About AI Risk | CAIS Media Kit | Terms of Service | Privacy Policy
Get involved: Donate | Contact Us | Careers

General: contact@safe.ai
Media: media@safe.ai

Cookies Notice: This website uses cookies to identify pages that are being used most frequently. This helps us analyze data about web page traffic and improve our website. We only use this information for the purpose of statistical analysis, and then the data is removed from the system. We do not and will never sell user data. Read more about our cookie policy in our privacy policy. Please contact us if you have any questions.

© 2024 Center for AI Safety
Credits: Website by Osborn Design Works