

FRANZISKA BOENISCH

Tenure-Track Faculty @ CISPA

Research focus: Privacy-Preserving and Trustworthy ML.

 * Saarbrücken, Germany
 * Email
 * PGP-Key
 * Twitter
 * LinkedIn
 * Github
 * Google Scholar


HOME

I am Franziska Boenisch, tenure-track faculty at the CISPA Helmholtz Center for
Information Security. At CISPA, I am co-leading the SprintML lab for Secure,
Private, Robust, INterpretable, and Trustworthy Machine Learning. Before that, I
was a Postdoctoral Fellow at the Vector Institute for Artificial Intelligence,
supervised by Prof. Dr. Nicolas Papernot. Prior to joining Vector, I was a PhD
candidate at Freie Universität Berlin and a research associate at the Fraunhofer
Institute for Applied and Integrated Security (AISEC).


I AM HIRING!

Currently, I am looking for PhD students, Postdocs, and Research Interns. If you
are excited about working on trustworthy ML, please send me an email with your
CV, your current transcript, and a short motivation explaining why you want to
join my group.


RESEARCH

My research lies at the intersection of Trustworthy Machine Learning (ML) and
Privacy, approached from the perspective of individual users and data owners.

Research has shown that trained ML models do not necessarily protect the privacy
of their underlying training datasets: some attacks make it possible to
reconstruct (aspects of) the training data from the model parameters (e.g., model
inversion attacks), while others reveal whether an individual data point was
included in the training dataset (membership inference attacks). Both are harmful
to the privacy of the individuals whose data is represented in the training
dataset. Protecting privacy in ML models is therefore a crucial task. My current
research centers on Differential Privacy, a mathematical framework that provides
formal privacy guarantees. I am also looking into the practical evaluation of
privacy loss and into identifying potential sources of privacy leakage in
privacy-preserving technologies. Identifying such pain points allows us to
develop a better understanding of why practical privacy falls short of the strong
theoretical guarantees. This, in turn, helps in adapting and extending
theoretical frameworks, their implementations, and their integration into
real-world systems for enhanced privacy in practice.
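
For readers unfamiliar with the framework, the standard (ε, δ)-guarantee can be
stated as follows (a textbook formulation, not tied to any particular paper of
ours): a randomized training mechanism M satisfies Differential Privacy if, for
every pair of datasets D and D' that differ in a single individual's data and for
every set of possible outputs S,

    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S] + δ.

A small ε (and δ) bounds how much any single person's data can influence the
trained model, which is precisely what limits attacks such as membership
inference.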

Furthermore, I am investigating the impact that ML privacy has on other aspects
of trustworthy ML, such as robustness, fairness, and bias. So far, research
suggests that training with privacy guarantees negatively affects these other
desirable properties of ML models. I therefore consider it highly important to
study the different aspects together in order to build an understanding of the
reasons behind these negative interactions. By then developing methods that
jointly optimize for several aspects at once, I believe we will be able to
deploy ML systems that are both more trustworthy and more private.


NEWS

 * October 2024: We kicked off our ILLUMINATION research grant project, which I
   am coordinating.
 * September 2024: Three of our papers were accepted to NeurIPS’24. Excited to
   present our work on localizing memorization in SSL vision encoders’
   parameters, neuron interventions in diffusion models to prevent privacy
   leakage, and on private large language model adaptations in Vancouver!
 * September 2024: I was named a GI Junior-Fellow for my work on trustworthy ML
   and my commitment to diversity in computer science.
 * July 2024: All lectures from my new lecture series on Trustworthy Machine
   Learning are now online.
 * June 2024: We reached another milestone, our SprintML Lab research group has
   reached 20 members (from 10 different countries!). Super exciting.
 * May 2024: We are presenting our novel work on Memorization in Self-Supervised
   Learning at the ICLR conference in Vienna.
 * April 2024: My seminar on Differential Privacy in Machine Learning was
   awarded the Saarland University Busy Beaver Teaching Award.
 * January 2024: My PhD dissertation was awarded the 2nd Prize in the Fraunhofer
   IuK Dissertation Award.
 * September 2023: I am excited that three of our papers were accepted at the
   NeurIPS conference.
 * September 2023: I started as a tenure track faculty at CISPA.
 * July 2023: Meet me at the PETS and IEEE EuroS&P conferences, where I will be
   presenting our work on Individually Private Machine Learning, Privacy
   Assessment of Synthetic Data, Privacy Risks in Federated Learning, and Data
   Extraction Attacks in Federated Learning.
 * December 2022: Our Workshop on Trustworthy ML under Limited Data and Compute
   was accepted for ICLR’23. CfP here.
 * November 2022: Paper accepted for publication at PETS’23: A Unified Framework
   for Quantifying Privacy Risk in Synthetic Data.
 * September 2022: Paper accepted at the 36th Conference on Neural Information
   Processing Systems (NeurIPS’22): Dataset Inference for Self-Supervised Models.
 * September 2022: Paper accepted for publication at PETS’23: Individualized
   PATE: Differentially Private Machine Learning with Individual Privacy
   Guarantees.
 * September 2022: We’re presenting our paper on Introducing Model Inversion
   Attacks on Automatic Speaker Recognition at the 2nd Symposium on Security and
   Privacy in Speech Communication (SPSC).
 * August 2022: I’m happy to announce that I have completed my PhD and will be
   joining the Canadian Vector Institute as a Postdoctoral Fellow under the
   supervision of Prof. Dr. Nicolas Papernot by the end of the month.
 * April 2022: Super proud that my interview on Differential Privacy with the
   Google-Aufbruch magazine made it to the title page.


© 2024 Franziska Boenisch. Impressum. Powered by Jekyll & AcademicPages, a fork
of Minimal Mistakes.