
 * Jeong-gi Kwak

AI/ML Researcher. kjk8557@korea.ac.kr

 * Seoul, Korea
 * LinkedIn
 * GitHub
 * Google Scholar

About
I am a research scientist at Innerverz AI, focusing on video diffusion models. My
research interests include generative models (e.g., diffusion models and GANs)
and 3D neural rendering. I received my Ph.D. from Korea University under the
supervision of Prof. Hanseok Ko, after earning my B.S. and M.S. degrees there in
2018 and 2020, respectively. I also spent a wonderful time in beautiful Vancouver
as a visiting research student with Prof. Kwang Moo Yi at the University of
British Columbia (UBC), working on diffusion models and 3D neural rendering
(Jun. 2023 - Dec. 2023).

[Google Scholar] [GitHub] [CV]



NEWS


[Jul. 2024] I will be giving a talk at Twelve Labs.
[Apr. 2024] Our paper has been selected as one of the Highlight Papers at CVPR
2024 (top 10%).
[Feb. 2024] One paper has been accepted to CVPR 2024.
[Jan. 2024] One paper has been accepted to ICASSP 2024.
[Jan. 2024] I joined Innerverz AI as an AI/ML researcher, focusing on video
diffusion models.
[Dec. 2023] I successfully defended my thesis, “Towards Controllable and
Interpretable Generative Neural Rendering”.



SELECTED PUBLICATIONS

| ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models
Jeong-gi Kwak*, Erqun Dong*, Yuhe Jin, Hanseok Ko, Shweta Mahajan, Kwang Moo Yi
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
Highlight
[paper] [code] [project page]



| Towards Multi-domain Face Landmark Detection with Synthetic data from
Diffusion model
Yuanming Li, Gwantae Kim, Jeong-gi Kwak, Bonhwa Ku, Hanseok Ko
IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), 2024
[paper]


| Injecting 3D Perception of Controllable NeRF-GAN into StyleGAN for Editable
Portrait Image Synthesis
Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, Donghyeon Kim, David Han, Hanseok Ko
European Conference on Computer Vision (ECCV), 2022
[paper] [code] [project page]
[Dec. 2022] 2022 ETNews ICT Paper Awards, sponsored by MSIT Korea



| DIFAI: Diverse Facial Inpainting using StyleGAN Inversion
Dongsik Yoon, Jeong-gi Kwak, Yuanming Li, David Han, Hanseok Ko
IEEE International Conference on Image Processing (ICIP), 2022
[paper]




| Generate and Edit Your Own Character in a Canonical View
Jeong-gi Kwak, Yuanming Li, Dongsik Yoon, David Han, Hanseok Ko
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshop (CVPRW),
2022
[paper] [poster]



| Adverse Weather Image Translation with Asymmetric and Uncertainty-aware GAN
Jeong-gi Kwak, Youngsaeng Jin, Yuanming Li, Dongsik Yoon, Donghyeon Kim, Hanseok
Ko
British Machine Vision Conference (BMVC), 2021
[paper] [code]



| Reference Guided Image Inpainting using Facial Attributes
Dongsik Yoon, Jeong-gi Kwak, Yuanming Li, David Han, Youngsaeng Jin, Hanseok Ko
British Machine Vision Conference (BMVC), 2021
[paper] [code]



| CAFE-GAN: Arbitrary Face Attribute Editing with Complementary Attention
Feature
Jeong-gi Kwak, David K. Han, Hanseok Ko
European Conference on Computer Vision (ECCV), 2020
[paper]




© 2024 Jeong-gi Kwak. Powered by Jekyll & AcademicPages, a fork of Minimal
Mistakes.