






ML SAFETY



The ML research community focused on
reducing risks from AI systems.




WHAT IS ML SAFETY?



ML systems are rapidly growing in size, acquiring new capabilities, and being
deployed in increasingly high-stakes settings. As with other powerful
technologies, the safety of ML systems should be a leading research priority.
This involves ensuring that systems can withstand hazards (Robustness),
identifying hazards (Monitoring), reducing inherent ML system hazards
(Alignment), and reducing systemic hazards (Systemic Safety). Example problems
and subtopics in these categories are listed below:




ROBUSTNESS
Adversarial Robustness, Long-Tail Robustness

MONITORING
Anomaly Detection, Interpretable Uncertainty, Transparency, Trojans, Detecting Emergent Behavior

ALIGNMENT
Honesty, Power Aversion, Value Learning, Machine Ethics

SYSTEMIC SAFETY
ML for Improved Epistemics, ML for Improved Cyberdefense, Cooperative AI

Learn more




ML SAFETY PROJECTS



We organize AI/ML safety resources and education for researchers and
non-technical audiences.




Seminar Series (Coming Soon)
The Newsletter
NeurIPS 2023 Social
Competitions and Prizes
ML Safety Course





GET CONNECTED

Stay in the loop and exchange thoughts and news related to ML safety. Join our
Slack or follow one of the accounts below.




ML Safety @ml_safety: general announcements

ML Safety Daily @topofmlsafety: ML safety papers as they are released


A PROJECT BY THE CENTER FOR AI SAFETY


ML Safety: Newsletter, Funding, Resources, Course
SafeBench: Submit Benchmark, SafeBench Overview, Example Ideas, Guidelines, Frequently Asked Questions, Contact, Terms and Conditions
Events: Events Overview, NeurIPS 2024, NeurIPS 2023, MLSS Yale, ICML Social, Intro to MLS

© 2024 Center for AI Safety
Built by Osborn Design Works