About Me

Hi, I'm Junhoo Lee. I am a Ph.D. candidate at Seoul National University (MIPAL), advised by Prof. Nojun Kwak.

My research aims to bridge the gap between optimization theory and modern generative AI. Instead of merely scaling up models, I investigate the training dynamics of overparameterized networks and design inductive biases (such as geometric constraints or explicit filtering) to ensure robust in-distribution learning.

Currently, I am exploring the fundamental principles of Diffusion Models and LLMs to make them more efficient, explainable, and controllable.

I am always open to discussing new ideas and potential collaborations. Feel free to reach out to me via email at mrjunoo@snu.ac.kr.

Education

Ph.D. Candidate in Intelligence and Information, Seoul National University
Sep 2021 – Aug 2026 (Expected)
B.Sc. in Electrical and Computer Engineering, Seoul National University
Mar 2017 – Sep 2021

Publications

Main Conference

Unlocking the Potential of Diffusion Language Models through Template Infilling

ACL 2026 · Large Language Models · Diffusion Language Models

Annual Meeting of the Association for Computational Linguistics

Junhoo Lee, Seungyeon Kim, Nojun Kwak

Unlike autoregressive LMs, diffusion LMs respond better to template-then-fill prompting than to sequential prompting.

CSF: Black-box Fingerprinting via Compositional Semantics for Text-to-Image Models

CVPR 2026 · Generative Models · Query-Only Attribution

The IEEE/CVF Conference on Computer Vision and Pattern Recognition

Junhoo Lee, Mijin Koo, Nojun Kwak

Black-box lineage attribution of fine-tuned text-to-image APIs using compositional semantic fingerprints.

Deep Edge Filter †

NeurIPS 2025 · Learning Theory · Representation Analysis

The Conference on Neural Information Processing Systems

Dongkwan Lee†, Junhoo Lee†, Nojun Kwak

Just as humans perceive high-frequency edges as core components of an image, deep features in neural networks exhibit the same tendency.

What's Making That Sound Right Now? Video-centric Audio-Visual Localization

ICCV 2025 · Generative Models · Audio-Visual Localization

The IEEE/CVF International Conference on Computer Vision

Hahyeon Choi, Junhoo Lee, Nojun Kwak

A video-centric audio-visual localization benchmark (AVATAR) that accounts for temporal dynamics.

Deep Support Vectors

NeurIPS 2024 · Learning Theory · Isometry

The Conference on Neural Information Processing Systems

Junhoo Lee, Hyunho Lee, Kyomin Hwang, Nojun Kwak

Deep networks have support vectors, just like SVMs.

Any-Way Meta Learning

AAAI 2024 · Meta-Learning · Few-Shot Learning

The AAAI Conference on Artificial Intelligence

Junhoo Lee, Yearim Kim, Hyunho Lee, Nojun Kwak

Breaking fixed N-way constraint in meta-learning by exploiting label equivalence from episodic task sampling.

SHOT: Suppressing the Hessian along the Optimization Trajectory

NeurIPS 2023 · Meta-Learning · Optimization

The Conference on Neural Information Processing Systems

Junhoo Lee, Jayeon Yoo, Nojun Kwak

The key to meta-learning adaptation is flattening the learning trajectory.

Workshop

Do Not Think About Pink Elephant! †

CVPR Workshops (Responsible Generative AI) 2024 · Generative Models · Safety

The IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

Kyomin Hwang†, Suyoung Kim†, Junhoo Lee†, Nojun Kwak

First discovery that negation doesn't work in large models — telling them not to generate something makes them generate it.

Coreset Selection for Object Detection

CVPR Workshops (Dataset Distillation) 2024 · Data Efficiency · Dataset Pruning

The IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

Hojun Lee, Suyoung Kim, Junhoo Lee, Jaeyoung Yoo, Nojun Kwak

Efficient coreset selection method specifically designed for object detection tasks.

Practical Dataset Distillation Based on Deep Support Vectors

ECCV Workshops (Dataset Distillation Challenge) 2024 · Data Efficiency · Dataset Distillation

The European Conference on Computer Vision Workshops

Hyunho Lee, Junhoo Lee, Nojun Kwak

Applying DeepKKT loss for dataset distillation when only partial data is accessible.

Journal

The Role of Teacher Calibration in Knowledge Distillation

IEEE Access 2025 · Knowledge Distillation · Calibration

Suyoung Kim, Seonguk Park, Junhoo Lee, Nojun Kwak

Teacher's calibration error strongly correlates with student accuracy — well-calibrated teachers transfer knowledge better.

Awards & Honors

2023
BK21 Future Innovation Talent Bronze Prize (USD 1,000)
2023
BK21 Outstanding Research Talent Fellowship (USD 3,500)
2022
Yulchon AI Star Scholarship (USD 8,000)
2021
3rd Place, SNU FastMRI Challenge (out of 107 teams, USD 3,000)
2021
Kwanak Scholarship (fully funded)
2017-2018
National Science and Engineering Scholarship (fully funded)