About Me
Hi, I'm Junhoo Lee. I am a PhD student at Seoul National University (MIPAL), advised by Prof. Nojun Kwak.
My primary research interests lie at the intersection of Diffusion Models, Large Language Models (LLMs), Machine Learning Theory, and Lifelong Learning. I am passionate about understanding the fundamental principles of generative models and applying them to solve complex problems.
I am always open to discussing new ideas and potential collaborations. Feel free to reach out to me via email at mrjunoo@snu.ac.kr.
News
- [Dec 2025] I will be at NeurIPS 2025 in San Diego, presenting our "Deep Edge Filter" paper!
- [Oct 2025] New preprint "Unlocking the Potential of Diffusion Language Models through Template Infilling" is now on arXiv!
- [Sep 2025] Our paper "Deep Edge Filter" is accepted to NeurIPS 2025! (co-first author)
- [Jul 2025] Our paper "What's Making That Sound Right Now?" is accepted to ICCV 2025!
- [Jun 2025] Our paper "The Role of Teacher Calibration in Knowledge Distillation" is published in IEEE Access!
- [Dec 2024] I will be at NeurIPS 2024 in Vancouver, presenting our "Deep Support Vectors" paper!
- [Sep 2024] Our paper "Deep Support Vectors" is accepted to NeurIPS 2024!
- [Jul 2024] Our paper "Practical Dataset Distillation Based on Deep Support Vectors" is presented at the ECCV 2024 Workshop!
- [Apr 2024] Two workshop papers accepted to CVPR 2024: "Do Not Think About Pink Elephant!" (co-first) and "Coreset Selection for Object Detection"!
- [Feb 2024] I will be at AAAI 2024 in Vancouver, presenting our "Any-Way Meta Learning" paper!
- [Dec 2023] Our paper "Any-Way Meta Learning" is accepted to AAAI 2024!
- [Dec 2023] I will be at NeurIPS 2023 in New Orleans, presenting our SHOT paper!
- [Sep 2023] Our paper "SHOT: Suppressing the Hessian along the Optimization Trajectory" is accepted to NeurIPS 2023!
Publications
View Google Scholar
Preprint
Unlocking the Potential of Diffusion Language Models through Template Infilling
Unlike autoregressive LMs, diffusion LMs perform better with template-then-fill generation than with sequential prompting.
Main Conference (First Author / Co-first †)
Deep Edge Filter †
Just as humans perceive edges (high-frequency components) as the core of an image, deep features in neural networks exhibit the same tendency.
What's Making That Sound Right Now?
Video-centric audio-visual localization benchmark (AVATAR) with temporal dynamics.
Deep Support Vectors
Deep networks have support vectors, just like SVMs.
Any-Way Meta Learning
Breaking the fixed N-way constraint in meta-learning by exploiting label equivalence from episodic task sampling.
SHOT: Suppressing the Hessian along the Optimization Trajectory
The key to meta-learning adaptation is flattening the learning trajectory.
Workshop
Do Not Think About Pink Elephant! †
First discovery that negation doesn't work in large models — telling them not to generate something makes them generate it.
Coreset Selection for Object Detection
Efficient coreset selection method specifically designed for object detection tasks.
Practical Dataset Distillation Based on Deep Support Vectors
Applying the DeepKKT loss to dataset distillation when only partial data is accessible.
Journal
The Role of Teacher Calibration in Knowledge Distillation
A teacher's calibration error strongly correlates with student accuracy: well-calibrated teachers transfer knowledge better.