


CYBERSECURITY BOOTCAMP FOR AI SECURITY | JAN 6–17

We’re organizing a two-week cybersecurity bootcamp aimed at training attendees in
the cybersecurity skills relevant to frontier AI security.

Please fill out this 10-minute expression of interest form ASAP if you’re
interested in attending or TAing the bootcamp. Invitations are sent on a rolling
basis.

This bootcamp is intended as an entry point for those looking to transition into
AI security projects or to strengthen their current work with more security
knowledge. It will run January 6–17, in person in Berkeley, CA, and is entirely
free to attend. You don’t need a security background to attend.

The content is designed to help attendees understand and address the security
gaps in current systems, particularly against highly capable threat actors.
During the bootcamp, attendees will learn how cyber-capable adversaries
compromise systems, which security controls could prevent future incidents, and
how to implement those controls in labs. By the end of the bootcamp, attendees
should feel equipped to understand and implement defenses against a range of
threats, and to better understand the broader security landscape.

If you’re interested in attending or TAing the bootcamp, please fill out this
10-minute expression of interest form ASAP. We’ll likely close the form around
December 13, but we recommend submitting it as soon as possible since
invitations are sent on a rolling basis.

❓ How should I decide if I want to attend vs TA?

Our curriculum and teaching lead is Google Security Engineer Emma Liddell. The
other organizers of this bootcamp are Wrena Sproat and Caleb Parikh. Buck
Shlegeris is advising.


AUDIENCE

The bootcamp is aimed at those who want to learn about cybersecurity in order to
advance AI safety. Within that, we expect the skills attendees learn will be
useful for a range of purposes, including:

 * Understanding how to evaluate security architecture proposals, particularly
   around hardware roots of trust and zero trust.
 * Assessing the feasibility of proposed security controls against sophisticated
   threats, contributing to technical discussions about securing AI
   infrastructure, and participating in threat modeling exercises for critical
   systems.
 * Understanding where current security paradigms fall short and what
   next-generation approaches might look like.

We don’t expect applicants to have a security background, though most attendees
will likely already be comfortable with coding, networking fundamentals, or
navigating a terminal.

❓ What types of people might the bootcamp be especially useful for?


CURRICULUM

Each day of the bootcamp will focus on a specific security control and a
vulnerability it addresses.

The day begins with a discussion or lecture about a well-known historical
security incident in which the absence of that control let the threat actor
carry out the breach. We’ll explore how having the control in place might have
prevented the incident.

For the majority of the day, participants will implement the control in a lab
setting. By the end of the bootcamp, they’ll know what it means to apply the
control in practice.


WEEK 1: FOUNDATIONS AND TRUST

Day 1: Threat Modeling & Attack Trees

 * Vulnerability: Complex AI systems have non-obvious attack paths.
 * Control: Systematic mapping of attack paths and trust relationships.
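
To give a flavor of Day 1, here is a minimal, hypothetical sketch of how an
attack tree might be represented in Python. The node names and the "exfiltrate
model weights" goal are invented for illustration and are not taken from the
bootcamp materials.

    from dataclasses import dataclass, field

    @dataclass
    class AttackNode:
        """A goal in an attack tree; children are sub-goals.
        gate='OR' means any child suffices, 'AND' means all are required."""
        name: str
        gate: str = "OR"
        children: list["AttackNode"] = field(default_factory=list)
        feasible: bool = False  # for leaves: can the attacker do this step?

        def achievable(self) -> bool:
            if not self.children:
                return self.feasible
            results = (c.achievable() for c in self.children)
            return all(results) if self.gate == "AND" else any(results)

    # Hypothetical goal: exfiltrate model weights.
    root = AttackNode("Exfiltrate model weights", gate="OR", children=[
        AttackNode("Steal weights from GPU memory", gate="AND", children=[
            AttackNode("Gain code execution on an inference host", feasible=True),
            AttackNode("Read GPU memory with no confidential computing", feasible=True),
        ]),
        AttackNode("Phish an engineer with weight access", feasible=False),
    ])

    print(root.achievable())  # True: the GPU-memory branch is fully feasible

Walking a tree like this makes it easy to ask which single control would cut
off the cheapest path to the root goal.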

Day 2: TPMs and Remote Attestation

 * Vulnerability: Compromised systems can lie about their security state.
 * Control: Hardware-based attestation with cryptographic proof of system
   integrity.
 * Historical Context: Stuxnet (2010)
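
As a rough illustration of the measurement chain behind remote attestation (a
simplified simulation, not the actual lab code): a TPM Platform Configuration
Register is never written directly; each new measurement is hashed together
with the previous register value, so the final value commits to the entire
boot sequence in order. The component names below are invented.

    import hashlib

    def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
        """Simulate a PCR extend: new = SHA-256(old || SHA-256(measurement))."""
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    pcr = bytes(32)  # PCRs start zeroed at boot
    for component in [b"firmware-v1.2", b"bootloader-v3", b"kernel-6.1"]:
        pcr = extend_pcr(pcr, component)

    print(pcr.hex())

A verifier that knows the expected good values can compare this digest against
a quote signed by the TPM's attestation key; changing or reordering any
component changes the result.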

Day 3: GPU Confidential Computing

 * Vulnerability: AI model weights in GPU memory are vulnerable to theft.
 * Control: Hardware-enforced encrypted enclaves for GPU data protection.
 * Historical Context: NVIDIA Breach (2022)

Day 4: Side Channel Attacks

 * Vulnerability: Shared hardware resources leak data through timing and power
   consumption.
 * Control: Hardware-level isolation to prevent covert channel data extraction.
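
To make the side-channel idea concrete at the software level, here is a small,
self-contained sketch (not from the bootcamp labs): a naive byte-by-byte
comparison returns earlier the sooner a guess diverges from the secret, so
response time leaks how long a matching prefix the attacker has found, while a
constant-time comparison removes that signal. The secret below is hypothetical.

    import hmac

    SECRET_TOKEN = b"correct horse battery staple"  # hypothetical secret

    def naive_compare(guess: bytes, secret: bytes) -> bool:
        # Leaky: returns as soon as a byte differs, so timing reveals
        # how much of the guess matches.
        if len(guess) != len(secret):
            return False
        for g, s in zip(guess, secret):
            if g != s:
                return False
        return True

    def safe_compare(guess: bytes, secret: bytes) -> bool:
        # Constant-time comparison from the standard library.
        return hmac.compare_digest(guess, secret)

The hardware channels covered on Day 4 (timing, power consumption) follow the
same principle: any observable that depends on secret data is a channel.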

Day 5: Reproducible Builds and Supply Chain Security

 * Vulnerability: Build systems can be compromised to inject malicious code.
 * Control: Deterministic builds to ensure deployed code matches the source
   exactly.
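
As a toy illustration of the verification step (not the build tooling itself):
with a deterministic build, independent parties can rebuild from the same
source and check that their artifacts hash to exactly the same value before
trusting a released binary. The file paths below are hypothetical.

    import hashlib
    from pathlib import Path

    def artifact_digest(path: Path) -> str:
        """SHA-256 of a build artifact, read in chunks."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    local = artifact_digest(Path("build/output/server"))          # rebuilt locally
    published = artifact_digest(Path("downloads/server-v1.0.0"))  # official release

    if local == published:
        print("Reproducible: the release matches a build from this source")
    else:
        print("Mismatch: the published binary was not produced from this source")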


WEEK 2: ZERO TRUST AND ADVANCED SECURITY

Day 1: Zero Trust Architecture

 * Vulnerability: Network breach enables unrestricted lateral movement.
 * Control: Verify every access attempt, regardless of the source.
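
One way to picture "verify every access attempt" (a deliberately simplified
sketch, not a production design): each request carries an authenticated
identity and is checked against policy, and there is no branch that waves a
request through just because it arrived from an "internal" network. The key,
users, and actions below are made up.

    import hashlib
    import hmac

    SIGNING_KEY = b"hypothetical-issuer-key"  # held by an identity provider

    def verify_token(user: str, token: str) -> bool:
        expected = hmac.new(SIGNING_KEY, user.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token)

    # Policy is evaluated on every request; the source network is never consulted.
    POLICY = {("alice", "read:model-metrics"), ("bob", "read:model-metrics")}

    def authorize(user: str, token: str, action: str) -> bool:
        return verify_token(user, token) and (user, action) in POLICY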

Day 2: Microsegmentation

 * Vulnerability: Flat networks allow rapid malware spread.
 * Control: Fine-grained perimeters around workloads to contain breaches.
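
A toy way to think about microsegmentation policy (an illustrative sketch, not
a real enforcement mechanism): instead of one flat network where anything can
reach anything, every allowed flow between workloads is listed explicitly and
everything else is denied by default. The workload names are hypothetical.

    # Hypothetical workload-to-workload allow-list; anything not listed is denied.
    ALLOWED_FLOWS = {
        ("web-frontend", "inference-api", 443),
        ("inference-api", "weights-store", 8443),
    }

    def flow_allowed(src: str, dst: str, port: int) -> bool:
        """Default-deny: only explicitly listed flows are permitted."""
        return (src, dst, port) in ALLOWED_FLOWS

    assert flow_allowed("web-frontend", "inference-api", 443)
    assert not flow_allowed("web-frontend", "weights-store", 8443)  # no direct path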

Day 3: Device-Bound Credentials

 * Vulnerability: Stolen credentials can be used from any device.
 * Control: Hardware-binding ensures credentials work only on authorized
   devices.
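
To sketch the idea (this example uses the third-party Python package
cryptography purely for illustration; it is not necessarily the lab tooling):
the private key lives in the device's hardware and never leaves it, so a stolen
password or session token is useless on another machine, because the server
only accepts a fresh challenge signed by the enrolled device key.

    # pip install cryptography  (illustrative dependency)
    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # In practice this key would be generated inside a TPM or secure enclave
    # and marked non-exportable; here it is just an in-memory stand-in.
    device_key = ec.generate_private_key(ec.SECP256R1())
    enrolled_public_key = device_key.public_key()  # registered with the server

    challenge = os.urandom(32)  # fresh random challenge per login attempt

    # The device proves possession of the hardware-bound key by signing it.
    signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    try:
        enrolled_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("login accepted: credential is bound to this device")
    except InvalidSignature:
        print("login rejected")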

Day 4: Detection and Response

 * Vulnerability: Sophisticated attackers evade traditional detection.
 * Control: Hardware-level monitoring for advanced persistence detection.

Day 5: Future Security Engineering and Incident Response

 * Vulnerability: Nation-state attacks use zero-days with hardware persistence.
 * Control: Multi-layer detection and response across hardware and software.



A typical schedule might look something like this:

9am - Lecture and Q&A on daily topic
10am - Pair program on labs
12:30pm - Lunch
1:30pm - Pair program on labs
6pm - Communal dinner
7pm - End of schedule; optional social event or tabletop exercise

If you get stuck on some content or labs, the bootcamp will have TAs around to
answer questions and help get you unstuck.


❓ After completing the curriculum, what concrete security tasks will graduates
be able to do?


ENTIRELY FREE TO ATTEND

The bootcamp is free to attend, with meals and office space provided for all
attendees. We can also provide travel and housing support for some attendees who
need financial assistance.

We do not want cost to be a barrier to attending the bootcamp.


FAQ

Why is cybersecurity important for AI safety?
Why isn't existing cybersecurity at labs enough? They already don't want their
model weights to be leaked.
Isn't the bootcamp’s pace super fast?