
STRATIS TSIRTSIS

Final-year PhD candidate @ Max Planck Institute for Software Systems

Paul-Ehrlich-Straße 26

Kaiserslautern, Germany

👋🏼 Hey there! I am Stratis, and I am currently pursuing a PhD in computer
science, fortunate to be advised by Manuel Gomez-Rodriguez. I spent fall 2023
as a research intern at Meta AI (FAIR) and spring 2023 as a visiting
researcher at Stanford University, working with Tobias Gerstenberg. Before
starting my PhD, I studied electrical & computer engineering at the National
Technical University of Athens, where I completed my diploma thesis under the
supervision of Dimitris Fotakis.

🚨 I am on the 2024-2025 academic job market 🚨

At a high level, I am interested in building AI systems that understand,
inform, and complement human decisions and judgments in uncertain,
high-stakes environments. During my PhD, I have focused primarily on
developing machine learning methods for (i) informing decision making in the
presence of strategic human behavior and (ii) enhancing the counterfactual
analysis of sequential decision-making tasks. In a nutshell, my research
interests lie at the intersection of machine learning and:

 * causal inference
 * game theory
 * combinatorial & convex optimization
 * algorithmic fairness
 * computational cognitive science


SELECTED PUBLICATIONS

 1. Journal
    Optimal Decision Making Under Strategic Behavior
    Stratis Tsirtsis, Behzad Tabibian, Moein Khajehnejad, Adish Singla, Bernhard
    Schölkopf, and Manuel Gomez-Rodriguez
    Management Science, 2024
    
    We are witnessing an increasing use of data-driven predictive models to
    inform decisions. As decisions have implications for individuals and
    society, there is increasing pressure on decision makers to be transparent
    about their decision policies. At the same time, individuals may use
    knowledge, gained by transparency, to invest effort strategically in order
    to maximize their chances of receiving a beneficial decision. Our goal is to
    find decision policies that are optimal in terms of utility in such a
    strategic setting. To this end, we first characterize how strategic
    investment of effort by individuals leads to a change in the feature
    distribution. Using this characterization, we show that, in general, we
    cannot expect to find optimal decision policies in polynomial time, and
    that there are cases in which deterministic policies are suboptimal. Then,
    we
    demonstrate that, if the cost individuals pay to change their features
    satisfies a natural monotonicity assumption, we can narrow down the search
    for the optimal policy to a particular family of decision policies with a
    set of desirable properties, which allow for a highly effective polynomial
    time heuristic search algorithm using dynamic programming. Finally, under no
    assumptions on the cost individuals pay to change their features, we develop
    an iterative search algorithm that is guaranteed to find locally optimal
    decision policies also in polynomial time. Experiments on synthetic and real
    credit card data illustrate our theoretical findings and show that the
    decision policies found by our algorithms achieve higher utility than those
    that do not account for strategic behavior.
    
    A preliminary version appeared at the NeurIPS Workshop on Human-Centric
    Machine Learning, 2019.
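
    To make the setting concrete, here is a minimal toy sketch in Python of a
    decision maker picking a threshold policy while individuals best-respond
    strategically. It illustrates the problem setup only, not the paper's
    dynamic-programming algorithm; the cost model, utilities, and data below
    are invented for the example.

```python
import numpy as np

# Toy illustration of decision making under strategic behavior (not the
# paper's algorithm). Individuals with a one-dimensional feature x face a
# threshold policy pi(x) = 1{x >= t} and move to the threshold whenever the
# benefit of a positive decision outweighs their effort cost.

def best_response(x, t, benefit=1.0, cost_per_unit=2.0):
    """Individuals below the threshold move to it if the gain exceeds the cost."""
    cost = cost_per_unit * max(t - x, 0.0)
    return t if (x < t and benefit >= cost) else x

def utility(xs, ys, t):
    """Decision maker's utility (true positives minus false positives),
    evaluated on the strategically shifted feature distribution."""
    u = 0.0
    for x, y in zip(xs, ys):
        if best_response(x, t) >= t:  # individual gets a positive decision
            u += 1.0 if y == 1 else -1.0
    return u

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=500)                          # features
ys = (xs + rng.normal(0.0, 0.2, size=500) > 0.5).astype(int)  # outcomes

# Brute-force search over thresholds, accounting for strategic responses.
best_t = max(np.linspace(0.0, 1.0, 101), key=lambda t: utility(xs, ys, t))
print(f"best threshold under strategic behavior: {best_t:.2f}")
```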

 2. Conference
    Finding Counterfactually Optimal Action Sequences in Continuous State Spaces
    Stratis Tsirtsis and Manuel Gomez-Rodriguez
    37th Conference on Neural Information Processing Systems (NeurIPS), 2023
    
    Whenever a clinician reflects on the efficacy of a sequence of treatment
    decisions for a patient, they may try to identify critical time steps where,
    had they made different decisions, the patient’s health would have improved.
    While recent methods at the intersection of causal inference and
    reinforcement learning promise to help human experts, such as the clinician
    above, retrospectively analyze sequential decision-making processes, they have
    focused on environments with finitely many discrete states. However, in many
    practical applications, the state of the environment is inherently
    continuous in nature. In this paper, we aim to fill this gap. We start by
    formally characterizing a sequence of discrete actions and continuous states
    using finite horizon Markov decision processes and a broad class of
    bijective structural causal models. Building upon this characterization, we
    formalize the problem of finding counterfactually optimal action sequences
    and show that, in general, we cannot expect to solve it in polynomial time.
    Then, we develop a search method based on the A* algorithm that, under a
    natural form of Lipschitz continuity of the environment’s dynamics, is
    guaranteed to return the optimal solution to the problem. Experiments on
    real clinical data show that our method is very efficient in practice, and
    it has the potential to offer interesting insights for sequential decision
    making tasks.
    
    A preliminary version appeared at the ICML Workshop on Counterfactuals in
    Minds and Machines, 2023.
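
    As a concrete illustration of the search component, here is a minimal A*
    sketch in Python over discrete action sequences in a toy deterministic
    environment with a continuous one-dimensional state. The dynamics, costs,
    and heuristic below are invented for the example; the paper's method
    additionally reasons counterfactually through bijective structural causal
    models and derives its admissible heuristic from Lipschitz continuity of
    the dynamics.

```python
import heapq

# Minimal A* over fixed-horizon action sequences in a toy 1-D environment
# (illustrative only). Objective: minimize total action cost plus the final
# distance to a target state.

HORIZON = 5
ACTIONS = (-1.0, 0.0, 1.0)  # the state moves by at most 1 per step
TARGET = 3.0

def step(state, action):
    return state + action  # toy deterministic dynamics

def action_cost(action):
    return 0.1 * abs(action)  # small effort cost per action

def heuristic(state, t):
    # Admissible: each remaining step changes the state by at most 1 and
    # action costs are nonnegative, so the true remaining cost is at least
    # the unavoidable part of the terminal distance to the target.
    return max(0.0, abs(TARGET - state) - (HORIZON - t))

def a_star(start):
    # Frontier entries: (f = g + h, g, t, state, action sequence so far).
    frontier = [(heuristic(start, 0), 0.0, 0, start, ())]
    while frontier:
        f, g, t, s, seq = heapq.heappop(frontier)
        if t == HORIZON:
            return seq, f  # at the horizon, f is the true total cost
        for a in ACTIONS:
            s2, g2 = step(s, a), g + action_cost(a)
            heapq.heappush(
                frontier, (g2 + heuristic(s2, t + 1), g2, t + 1, s2, seq + (a,))
            )

seq, total = a_star(0.0)
print("optimal action sequence:", seq, "| total cost:", round(total, 2))
```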

 3. Conference
    Towards a computational model of responsibility judgments in sequential
    human-AI collaboration
    Stratis Tsirtsis, Manuel Gomez-Rodriguez, and Tobias Gerstenberg
    46th Annual Conference of the Cognitive Science Society (CogSci), 2024
    
    When a human and an AI agent collaborate to complete a task and something
    goes wrong, who is responsible? Prior work has developed theories to
    describe how people assign responsibility to individuals in teams. However,
    there has been little work studying the cognitive processes that underlie
    responsibility judgments in human-AI collaborations, especially for tasks
    comprising a sequence of interdependent actions. In this work, we take a
    step towards filling this gap. Using semi-autonomous driving as a paradigm,
    we develop an environment that simulates stylized cases of human-AI
    collaboration using a generative model of agent behavior. We propose a model
    of responsibility that considers how unexpected an agent’s action was, and
    what would have happened had they acted differently. We test the model’s
    predictions empirically and find that in addition to action expectations and
    counterfactual considerations, participants’ responsibility judgments are
    also affected by how much each agent actually contributed to the outcome.
    
    A preliminary version appeared at the CHI Workshop on Theory of Mind in
    Human-AI Interaction, 2024.
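
    To illustrate the two ingredients of the model, here is a toy
    responsibility score in Python that multiplies the unexpectedness of an
    agent's action by the counterfactual improvement had the agent acted
    differently. The functional form and numbers below are invented for the
    example; the paper's actual model is richer and is evaluated against
    human judgments.

```python
import math

# Toy counterfactual responsibility score (illustrative only). An agent is
# judged more responsible when (i) its action was unexpected under a
# generative model of behavior and (ii) a different action would have led
# to a better outcome.

def surprise(action_prob):
    """Unexpectedness of the chosen action: -log p(action)."""
    return -math.log(action_prob)

def counterfactual_gain(outcome_actual, outcome_counterfactual):
    """Improvement in the outcome had the agent acted differently (0 if none)."""
    return max(0.0, outcome_counterfactual - outcome_actual)

def responsibility(action_prob, outcome_actual, outcome_counterfactual):
    return surprise(action_prob) * counterfactual_gain(
        outcome_actual, outcome_counterfactual
    )

# Example: an AI driver took a low-probability action (p = 0.1); had it acted
# differently, the outcome score would have been 1.0 instead of 0.2.
print(round(responsibility(0.1, 0.2, 1.0), 3))  # higher -> more responsible
```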


NEWS

 * Sep 25, 2024: We released a preprint on Counterfactual Token Generation in
   Large Language Models!
 * Jul 09, 2024: I presented a poster summarizing a large part of my research
   at EC’24.
 * Apr 05, 2024: Our paper Towards a computational model of responsibility
   judgments in sequential human-AI collaboration has been accepted at CogSci
   2024! 🎉
 * Nov 23, 2023: I visited and gave a research talk at Athena Research Center.
 * Sep 22, 2023: Our paper Finding Counterfactually Optimal Action Sequences
   in Continuous State Spaces has been accepted at NeurIPS 2023! 🎉
 * Sep 05, 2023: Our paper Optimal Decision Making Under Strategic Behavior
   has been accepted at Management Science! 🎉
 * Jul 30, 2023: We organized a workshop on counterfactuals in minds and
   machines at ICML 2023. Recordings are available here.

Trivia

I grew up on a beautiful Greek island called Lesvos. In my free time, I enjoy
(trail) running and playing the guitar.

If you want to get in touch, feel free to send me an email or ping me on
Twitter (now X).
© Copyright 2024 Stratis Tsirtsis. Powered by Jekyll and based on the al-folio
theme. Hosted by GitHub Pages.