Prof.
Eunji Kwon
Efficient AI Computing (EAIC) Lab.
Contact
E-mail eunjikwon@kookmin.ac.kr
Education
Ph.D. POSTECH, Electrical Engineering (2024)
M.Sc. POSTECH, Electrical Engineering (2021)
B.Sc. UNIST, Mechanical Engineering (Summa Cum Laude) (2017)
Career
2024-Present Assistant Professor, Dept. of AI, Kookmin University
2023-2024 Visiting Graduate Student, UC San Diego
Research Overview
The lab's research focuses on energy-efficient AI computing, including deep learning hardware accelerators, SW/HW co-optimization, and power management for mobile systems. Key topics include vision transformer acceleration, neural network quantization, and sparsity optimization.
Research Areas
- Energy-Efficient AI Computing
- Deep Learning Hardware Accelerators
- SW/HW Co-optimization
- Neural Network Quantization
- Sparsity Optimization
Major Achievements
- Mobile Transformer Accelerator with line sparsity and dynamic quantization (TCAD 2023)
- RL-based mixed precision quantization for hybrid vision transformers (DAC 2024)
Recent Publications
- E. Kwon, "TACo: Training-Free, Hardware-Aware ViT Architecture Search with a Hypervolume-Based Unified Zero-Cost Score," DAC, 2026
- J. Nam, J. Kim, E. Kwon, and S. Kang, "Efficient Down-sampling in Hybrid Neural Network using Adversarial Autoencoders," DATE, 2026
- E. Kwon and T. Rosing, "Autonomous Model Quantization Framework for Hybrid Vision Transformers based on Reinforcement Learning," IEEE TCAD, 2025
- S. Moon and E. Kwon, "DeltaTrack: Flow-Driven Multiple Object Tracking Accelerator with Variable LSB Approximation," IEEE TCAS-II, 2025
- E. Kwon, M. Zhou, W. Xu, T. Rosing, and S. Kang, "RL-PTQ: RL-based Mixed Precision Quantization for Hybrid Vision Transformers," DAC, 2024
- S. Lee, K. Cho, E. Kwon, S. Park, S. Kim, and S. Kang, "ViT-ToGo: Vision Transformer Accelerator with Grouped Token Pruning," DATE, 2024
- E. Kwon, J. Yoon, and S. Kang, "Mobile Transformer Accelerator Exploiting Various Line Sparsity and Tile-based Dynamic Quantization," IEEE TCAD, 2023
- E. Kwon, H. Song, J. Park, and S. Kang, "Mobile Accelerator Exploiting Sparsity of Multi-Heads, Lines, and Blocks in Transformers," DATE, 2023
