www.chongliqin.com
198.185.159.145
Submitted URL: https://chongliqin.com/
Effective URL: https://www.chongliqin.com/
Submission: On August 20 via api from US — Scanned from US
Form analysis: 0 forms found in the DOM

Text Content
CHONGLI QIN

Hi there! My name is Chongli Qin. My aim is to do what I can to use AI technologies for good and to reduce harm. I was previously a Senior Research Scientist at Google DeepMind. My research is largely split between AI safety and AI for science: pushing the frontiers of methodologies for red teaming and adversarial attacks, as well as leveraging ML techniques for the sciences, for example in AlphaFold. My papers have been published in major journals such as Nature and PNAS, and presented as spotlight papers at major ML conferences such as NeurIPS.

RESEARCH

Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations [2020], Sven Gowal, Chongli Qin, Po-Sen Huang, Taylan Cemgil, Krishnamurthy Dvijotham, Timothy Mann, Pushmeet Kohli. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Efficient Neural Network Verification with Exactness Characterization [2020], Krishnamurthy Dj Dvijotham, Robert Stanforth, Sven Gowal, Chongli Qin, Soham De, Pushmeet Kohli. Uncertainty in Artificial Intelligence.

Improved Protein Structure Prediction Using Potentials from Deep Learning [2020], Andrew W Senior, Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin Žídek, Alexander WR Nelson, Alex Bridgland, Hugo Penedones, Stig Petersen, Karen Simonyan, Steve Crossan, Pushmeet Kohli, David T Jones, David Silver, Koray Kavukcuoglu, Demis Hassabis. Nature.

Uncovering the Limits of Adversarial Training Against Norm-Bounded Adversarial Examples [2020], Sven Gowal, Chongli Qin, Jonathan Uesato, Timothy Mann, Pushmeet Kohli. arXiv preprint.

Training Generative Adversarial Networks by Solving Ordinary Differential Equations [2020], Chongli Qin, Yan Wu, Jost Tobias Springenberg, Andy Brock, Jeff Donahue, Timothy Lillicrap, Pushmeet Kohli. Advances in Neural Information Processing Systems.

On a Continuous Time Model of Gradient Descent Dynamics and Instability in Deep Learning [2022], Mihaela Rosca, Yan Wu, Chongli Qin, Benoit Dherin. Transactions on Machine Learning Research.

Scalable Verified Training for Provably Robust Image Classification [2019], Sven Gowal, Krishnamurthy Dj Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, Pushmeet Kohli. Proceedings of the IEEE/CVF International Conference on Computer Vision.

Verification of Non-linear Specifications [2019], Chongli Qin, Brendan O'Donoghue, Rudy Bunel, Robert Stanforth, Sven Gowal, Jonathan Uesato, Grzegorz Swirszcz, Pushmeet Kohli. International Conference on Learning Representations.

Adversarial Robustness through Local Linearization [2019], Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli. Advances in Neural Information Processing Systems.

Power Law Tails in Phylogenetic Systems [2018], Chongli Qin, Lucy Colwell. Proceedings of the National Academy of Sciences.

PUBLIC TALKS

Effective Altruism 2020: invited talk, "Ensuring Safety and Consistency in the Age of Machine Learning"

DeepMind / UCL Deep Learning Lecture Series 2020: guest lecture, "Responsible Innovation"

Conference on Neural Information Processing Systems 2020: spotlight talk, "Training Generative Adversarial Networks by Solving Ordinary Differential Equations"

WORKSHOPS

Continuous Time Perspective on Machine Learning. Mihaela Rosca · Chongli Qin · Julien Mairal · Marc Deisenroth.

CONTACT

Email · LinkedIn