YURONG CHEN  陈昱蓉

Inria, École Normale Supérieure, PSL Research University, France.

I am currently a postdoc in the SIERRA team at Inria Paris, working with
Michael I. Jordan. I obtained my PhD in Computer Science at Peking University,
where I was advised by Xiaotie Deng, and my bachelor's degree in Applied
Mathematics from the Hua Luogeng Honors Class at Beihang University.

My current research interests lie in the learning and game-theoretic questions
that arise when strategic and learning agents interact, and in how each field
can inform the other toward better practical impact. For example, I am
interested in how to learn players' private information from equilibria, and in
how strategic agents can exploit their informational advantage to profit from
the interaction.

During my PhD, I visited Zhiyi Huang at the University of Hong Kong from Feb.
to Aug. 2023 and from Aug. to Sept. 2024. From May to Sept. 2024, I worked as
an intern at the Alimama group on online ad auctions.

My email: yurong.chen [at] inria.fr;
               yurong.chen1909 [at] gmail.com

You can also send me a message by clicking on the envelope button below.


NEWS

Nov 22, 2024  Our paper Optimal Private Payoff Manipulation against Commitment
in Extensive-form Games [link] has been accepted by Games and Economic Behavior
(joint work with Xiaotie Deng and Yuhao Li).

Oct 10, 2024  Our paper Mechanism Design for LLM Fine-tuning with Multiple
Reward Models has been accepted to the Pluralistic Alignment Workshop @ NeurIPS
2024 (joint work with Haoran Sun, Siwei Wang, Wei Chen, and Xiaotie Deng).

Oct 01, 2024  Today, I officially joined Inria Paris as a postdoc under the
supervision of Michael I. Jordan. Thrilled to embark on this exciting new
journey!

May 18, 2024  Our paper Are Bounded Contracts Learnable and Approximately
Optimal? has been accepted to EC '24 (joint work with Zhaohua Chen, Xiaotie
Deng, and Zhiyi Huang).

Sep 22, 2023  Our paper A Scalable Neural Network for DSIC Affine Maximizer
Auction Design has been accepted to NeurIPS '23 (joint work with Zhijian Duan,
Haoran Sun, Zhaohua Chen, and Xiaotie Deng).


SELECTED PUBLICATIONS

(αβ) indicates alphabetical author order. * indicates equal contribution.
 1. Games Econ. Behav.
    Optimal Private Payoff Manipulation against Commitment in Extensive-form
    Games
    (αβ) Yurong Chen, Xiaotie Deng, and Yuhao Li
    Games and Economic Behavior, 2024
    A preliminary version of this work was presented at WINE 2022, where it
    received the Best Student Paper Award 🏆.
    
    Stackelberg equilibrium describes the optimal strategy of a player (the
    leader) who first credibly commits to a strategy, with her opponent (the
    follower) best responding to that commitment. To compute the optimal
    commitment, the leader must learn sufficient information about the
    follower's payoffs. The follower can therefore provide fake information to
    induce a different final game outcome that benefits him more than truthful
    behavior would. We study such follower manipulation in extensive-form
    games. For all four settings considered, we characterize all the inducible
    game outcomes and show that the optimal payoff function to misreport can
    be found in polynomial time. We compare the follower's optimal attainable
    utilities across the different settings, with the true game fixed. In
    particular, one comparison shows that the follower gets no less when the
    leader's strategy space expands from pure strategies to behavioral
    strategies. Our work completely resolves the follower's optimal
    manipulation problem on extensive-form game trees. (A minimal normal-form
    sketch of this kind of manipulation appears after the publication list.)
    
    @article{chen2024optimal,
      title = {Optimal Private Payoff Manipulation against Commitment in Extensive-form Games},
      journal = {Games and Economic Behavior},
      year = {2024},
      issn = {0899-8256},
      doi = {10.1016/j.geb.2024.11.008},
      url = {https://www.sciencedirect.com/science/article/pii/S0899825624001647},
      author = {Chen, Yurong and Deng, Xiaotie and Li, Yuhao},
      keywords = {Stackelberg equilibrium, Strategic behavior, Private information manipulation, Extensive-form games},
      note = {A preliminary version of this work was presented at <b>WINE 2022</b>, where it received the <b>Best Student Paper</b> Award 🏆. },
    }

 2. ICML
    Coordinated Dynamic Bidding in Repeated Second-Price Auctions with Budgets
    Yurong Chen*, Qian Wang*, Zhijian Duan, Haoran Sun, Zhaohua Chen, Xiang
    Yan, and Xiaotie Deng
    In Proceedings of the 40th International Conference on Machine Learning,
    2023
    
    
    In online ad markets, a rising number of advertisers employ bidding
    agencies to participate in ad auctions. These agencies specialize in
    designing online algorithms and bid on behalf of their clients. An agency
    typically has information on multiple advertisers, so she can potentially
    coordinate bids to help her clients achieve higher utilities than they
    would under independent bidding. In this paper, we study coordinated
    online bidding algorithms in repeated second-price auctions with budgets.
    We propose algorithms that guarantee every client a higher utility than
    the best she can obtain under independent bidding. In symmetric cases, we
    show that these algorithms achieve maximal social welfare, and we discuss
    bidders' incentives to misreport their budgets. Our proofs combine
    techniques from online learning and equilibrium analysis, overcoming the
    difficulty of competing with a multi-dimensional benchmark. The
    performance of our algorithms is further evaluated by experiments on both
    synthetic and real data. To the best of our knowledge, we are the first to
    consider bidder coordination in online repeated auctions with constraints.
    (A toy simulation of this repeated-auction environment appears after the
    publication list.)
    
    @inproceedings{chen2023coordinated,
      title = {Coordinated Dynamic Bidding in Repeated Second-Price Auctions with Budgets},
      author = {Chen*, Yurong and Wang*, Qian and Duan, Zhijian and Sun, Haoran and Chen, Zhaohua and Yan, Xiang and Deng, Xiaotie},
      booktitle = {Proceedings of the 40th International Conference on Machine Learning},
      pages = {5052--5086},
      year = {2023},
      volume = {202},
      series = {Proceedings of Machine Learning Research},
      month = {23--29 Jul},
      publisher = {PMLR},
    }

 3. arXiv
    Learning to Manipulate a Commitment Optimizer
    (αβ) Yurong Chen, Xiaotie Deng, Jiarui Gan, and Yuhao Li
    arXiv preprint, 2023
    
    
    We consider a Stackelberg scenario where the leader commits optimally
    based on the follower's type (i.e., the follower's payoff function).
    Despite its rationality, such commitment-optimizing behavior inadvertently
    reveals information about the leader's incentives, especially when one
    gains access to the leader's optimal commitments against different
    follower types. In this paper, we study to what extent one can learn about
    the leader's payoff information by actively querying the leader's optimal
    commitments. We show that, using polynomially many queries and operations,
    a learner can learn a payoff function that is strategically equivalent to
    the leader's original payoff function, in the sense that it preserves:
    1) the leader's preference over fairly broad sets of strategy profiles,
    and 2) the set of all possible (strong) Stackelberg equilibria the leader
    may engage in, considering all possible follower types. As an application,
    we show that a follower can use the learned information to induce an
    optimal Stackelberg equilibrium (w.r.t. the follower's payoff) by
    imitating a different type, without knowing the leader's payoff function
    beforehand. To the best of our knowledge, we are the first to extend this
    equilibrium-inducing problem to the incomplete-information setting. (The
    first sketch after the publication list illustrates the underlying
    commitment mechanics in a toy normal-form game.)
    
    @misc{chen2023learning,
      title = {Learning to Manipulate a Commitment Optimizer},
      author = {Chen, Yurong and Deng, Xiaotie and Gan, Jiarui and Li, Yuhao},
      year = {2023},
      eprint = {2302.11829},
      archiveprefix = {arXiv},
      primaryclass = {cs.GT},
    }

 4. arXiv
    Are Bounded Contracts Learnable and Approximately Optimal?
    (αβ) Yurong Chen, Zhaohua Chen, Xiaotie Deng, and Zhiyi Huang
    arXiv preprint, 2024
    
    
    This paper considers the hidden-action model of the principal-agent
    problem, in which a principal incentivizes an agent to work on a project
    using a contract. We investigate whether contracts with bounded payments
    are learnable and approximately optimal. Our main results are two learning
    algorithms that can find a nearly optimal bounded contract using a
    polynomial number of queries, under two standard assumptions in the
    literature: a costlier action for the agent leads to a better outcome
    distribution for the principal, and the agent's cost/effort has
    diminishing returns. Our polynomial query-complexity upper bound shows
    that the standard assumptions suffice for an exponential improvement upon
    the known lower bound for general instances. Unlike existing algorithms,
    which rely on discretizing the contract space, our algorithms directly
    learn the underlying outcome distributions. As for the approximate
    optimality of bounded contracts, we find that they can be far from optimal
    in terms of multiplicative or additive approximation, but satisfy a notion
    of mixed approximation. (A small worked instance of the hidden-action
    model appears after the publication list.)
    
    @misc{chen2024bounded,
      title = {Are Bounded Contracts Learnable and Approximately Optimal?},
      author = {Chen, Yurong and Chen, Zhaohua and Deng, Xiaotie and Huang, Zhiyi},
      year = {2024},
      eprint = {2402.14486},
      archiveprefix = {arXiv},
      primaryclass = {cs.GT},
    }
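
As a toy illustration of the commitment-manipulation idea shared by the two
Stackelberg papers above, here is a minimal Python sketch of a 2x2 normal-form
game with pure-strategy commitments. This is far simpler than the papers'
extensive-form and incomplete-information settings; the matrices and function
names are hypothetical, chosen only so that a misreport strictly helps the
follower.

    # Rows are leader actions, columns are follower actions (hypothetical numbers).
    A = [[3, 1],        # leader's payoffs
         [0, 2]]
    B_true = [[1, 2],   # follower's true payoffs
              [4, 3]]
    B_fake = [[1, 2],   # a misreport that makes column 1 a best response
              [1, 2]]   # in every row

    def pure_commitment(A, B):
        """Leader's optimal pure commitment when the follower best-responds
        according to payoff matrix B; ties are broken in the leader's favor,
        the usual strong-Stackelberg convention."""
        def br(i):
            m = max(B[i])
            return max((j for j in range(len(B[i])) if B[i][j] == m),
                       key=lambda j: A[i][j])
        i = max(range(len(A)), key=lambda i: A[i][br(i)])
        return i, br(i)

    i, j = pure_commitment(A, B_true)
    print("truthful report: outcome", (i, j), "- follower truly gets", B_true[i][j])
    i, j = pure_commitment(A, B_fake)
    print("after misreport: outcome", (i, j), "- follower truly gets", B_true[i][j])

Against the truthful report the leader commits to row 0 and the follower's true
utility is 2; reporting B_fake steers the leader to row 1, where the follower
truly earns 3. The papers characterize which outcomes are inducible this way on
extensive-form game trees, and what happens when the leader's payoffs must
first be learned through queries.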
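
The repeated-auction environment of the coordinated-bidding paper is also easy
to simulate. The sketch below is an independent-bidding baseline only: repeated
second-price auctions in which each bidder shades her value with a
multiplicative pacing multiplier to respect her budget. The pacing update rule
and all parameter values are hypothetical stand-ins for illustration, not the
coordinated algorithms from the paper.

    import random

    def second_price(bids):
        """Highest bid wins and pays the second-highest bid."""
        order = sorted(range(len(bids)), key=lambda i: -bids[i])
        return order[0], bids[order[1]]

    def simulate(T=10000, budgets=(60.0, 60.0, 60.0), seed=0):
        rng = random.Random(seed)
        n = len(budgets)
        spent, utility = [0.0] * n, [0.0] * n
        mult = [1.0] * n                       # per-bidder pacing multipliers
        for t in range(T):
            values = [rng.random() for _ in range(n)]
            # bid the paced value, capped by the remaining budget
            bids = [min(values[i] * mult[i], budgets[i] - spent[i])
                    for i in range(n)]
            w, price = second_price(bids)
            if bids[w] > 0:
                spent[w] += price              # second-price payment
                utility[w] += values[w] - price
            # crude pacing: shade bids when ahead of the budget schedule
            for i in range(n):
                target = budgets[i] * (t + 1) / T
                step = 0.01 if spent[i] < target else -0.01
                mult[i] = min(1.0, max(0.05, mult[i] + step))
        return spent, utility

    print(simulate())

The paper's coordinated algorithms instead let one agency bid jointly on
behalf of all its clients, guaranteeing each client at least the best utility
she could secure by bidding independently.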
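
Finally, the hidden-action model from the bounded-contracts paper can be made
concrete with a tiny worked instance. All numbers below are hypothetical, and
the brute-force grid search stands in for the contract-space discretization
that the abstract contrasts with; the paper's algorithms instead learn the
outcome distributions directly.

    # Hypothetical instance: two agent actions, two outcomes.
    rewards = [0.0, 10.0]      # principal's reward per outcome
    D = [[0.9, 0.1],           # outcome distribution of the costless action
         [0.4, 0.6]]           # outcome distribution of the costly action
    costs = [0.0, 1.0]

    def principal_utility(t):
        """Expected principal utility under contract t (one payment per
        outcome), with the agent best-responding; ties go to the principal."""
        exp_pay = [sum(p * tj for p, tj in zip(D[i], t)) for i in range(len(D))]
        agent_u = [exp_pay[i] - costs[i] for i in range(len(D))]
        princ = lambda i: sum(p * (r - tj) for p, r, tj in zip(D[i], rewards, t))
        best = max(agent_u)
        return max(princ(i) for i in range(len(D)) if agent_u[i] == best)

    # Brute-force search over bounded contracts t = (0, x), x in [0, 5].
    grid = [k * 0.05 for k in range(101)]
    x = max(grid, key=lambda x: principal_utility([0.0, x]))
    print("best grid contract pays %.2f on the good outcome -> utility %.3f"
          % (x, principal_utility([0.0, x])))

In exact arithmetic the optimum pays x = 2 on the good outcome, the threshold
at which the costly action becomes incentive-compatible, for principal utility
4.8; with floating point the exact tie fails to register, so the search settles
one grid step above. The paper asks how many such evaluations are needed when
the distributions D are unknown, and how much bounding the payments costs the
principal.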


© Copyright 2024 Yurong Chen. Powered by Jekyll with al-folio theme. Hosted by
GitHub Pages. Photos from Unsplash.