Time
- Wednesday 18:30
Location
- 211-1 Electronic Information College Building
Current Schedule (Spring 2025)
Please let Sung Oh (ahp2025 -at- khu -dot- ac -dot- kr) know which paper you are going to present, and provide the name and year of the conference where the paper was accepted, along with a summary, by Friday 11:59am before your presentation. Moreover, send the link to your presentation slides by Sunday 11:59pm.
This Spring we will have one presenter each week. Presentation duration is up to the presenter (as long as it does not go over an hour).
Date | Presenter | Topic |
---|---|---|
03/12 | Myeongjun Oh | Enhancing Implicit Neural Representations via Symmetric Power Transformation [Weixiang Zhang et al., AAAI 2025] [slides] |
03/19 | Youngtae Kim | Streaming Dense Video Captioning [Xingyi Zhou et al., CVPR 2024] [slides] |
03/26 | Jiyoung Park | Describing Differences in Image Sets with Natural Language [Lisa Dunlap et al., CVPR 2024] [slides] |
04/02 | Geo Ahn | Can We Talk Models Into Seeing the World Differently? [Paul Gavrikov et al., ICLR 2025] [slides] |
04/09 | Euijune Lee | Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think [Sihyun Yu et al., ICLR 2025] [slides] |
04/16 | Mid-term | No Reading Group 📖 |
04/23 | Mid-term | No Reading Group 📖 |
04/30 | Suyoung Yun | VTimeLLM: Empower LLM to Grasp Video Moments [Bin Huang et al., CVPR 2024] [slides] |
05/07 | Chan Lee | DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous Driving [Bencheng Liao et al., CVPR 2025] [slides] |
05/14 | ICCV Rebuttal | No Reading Group 📖 |
05/21 | NeurIPS Deadline | No Reading Group 📖 |
05/28 | Wooil Lee | Can I Trust Your Answer? Visually Grounded Video Question Answering [Junbin Xiao et al., CVPR 2024] [slides] |
06/04 | Jiwook Han | Number it: Temporal Grounding Videos like Flipping Manga [Yongliang Wu et al., CVPR 2025] [slides] |
06/11 | Final | No Reading Group 📖 |
06/18 | Final | No Reading Group 📖 |
Mailing List
We use Google Groups to manage the mailing list: (link). You can click "Join Group" after signing in with your Kyung Hee University account.
Presenter
- AMI
- Ph.D. student : Enki Cho / Yong Hyun Ahn / Minkuk Kim / Hyeonbae Kim / Youngtae Kim
- M.S. student : Ohsung Choo / Kayoung Kim / Youngseob Won / Sunyoung Yun
- UG student : Heedong Kim / Jeongin Bae
- MLVC
- Ph.D. student : Sung Oh / Jongkyung Lim
- M.S. student : Donghoon Kim / Myeongjun Oh / Euijin Lee
- UG student : Junseok Yang / Junghyun Lee / Soohyun Lee
- VAI
- M.S. student : Seungho Shin / Jiyoung Park / Yueun Lee / Chan Lee / Inje Oh / Suyoung Choi / Soyeon Lee
- UG student : Kanghyun Lee / Junyoung Jung
- VLL
- M.S. student : Jongseo Lee / Geo Ahn / Soyeon Hong
- UG student : Jiwook Han / Wooil Lee / Gangmin Choi / Yuri Kim
- Alumni
- [AGI] M.S. : Ahyung Shin / Sunghoon Lee / Jaeho Lee / Juwon Seo / Jun-Yeong Moon / Keonhee Park / Seun-An Choe / Min-Yeong Park / Taeyoung Lee / Min-Jae Kim / UG : Won-Jeong Lee / Habin Lim / Jihyun Park / Taekyun Yoo
- [AMI] M.S. : Soyoun Won / Yebin Ji / UG : Jehyun Park
- [MLVC] M.S. : Junghun Cha / Taegoo Kang / Subin Yang
- [VLL] M.S. : Dongho Lee / Jongmin Shin / Hyogun Lee / Kyungho Bae / UG : Joohyun Chang
Previous Meetings
- Summer~Fall 2021
- Winter 2022
- Spring 2022
- Summer 2022
- Fall 2022
- Winter~Spring 2023
- Summer 2023
- Fall 2023
- Winter 2024
- Spring 2024
- Summer 2024
- Fall 2024
- Winter 2025
Related Links
- Awesome Computer Vision
- Awesome Deep Vision
- Awesome Action Recognition
- Computer Vision Foundation open access
- MIT Vision Seminars
- UIUC Vision Lunch
- UT-Austin CV Reading Group
- CMU VASC Seminar Series
- CMU ML Reading Group
- VT Vision and Learning Reading Group
- 딥러닝 논문 읽기 모임 @ TensorFlow Korea Facebook Group
- Advanced Computer Vision (Jia-Bin Huang, Virginia Tech)
- Object and Activity Recognition Seminar (Trevor Darrell, UC Berkeley)
- Visual Learning and Recognition (Abhinav Gupta, CMU)
- Visual Recognition (Kristen Grauman, UT Austin)
- Advanced Computer Vision (Devi Parikh, Georgia Tech)
- Cutting-Edge Trends in Deep Learning and Recognition (Svetlana Lazebnik, UIUC)
FAQ
The presentation order is generated from the presenter list in a FIFO manner (the list itself is initially shuffled randomly).
If you cannot present on your assigned date, contact the other presenters to see if anyone is willing to swap dates with you, and let the group organizer Sung Oh (ahp2025 -at- khu -dot- ac -dot- kr) know about your situation.
About Us
We are a group that meets about once a week to discuss one or two relevant papers. For every meeting, two people will be in charge of selecting the paper(s) for that meeting, thoroughly understanding the work, and leading the discussion (either informally or via a presentation, whichever the leader thinks is best). The rest of the members will read the paper(s) beforehand to gain a basic idea of the work. Then, on the day of the meeting, we will discuss the strengths, weaknesses, and techniques of the paper(s).
NOTE: Please tell the group organizer Sung Oh (ahp2025 -at- khu -dot- ac -dot- kr) which paper(s) you are going to present, and summarize the paper/talk in several sentences, before the Friday of that week.
We will be reading papers appearing in the leading computer vision conferences (e.g., CVPR, ICCV, ECCV, SIGGRAPH, SIGGRAPH Asia), machine learning conferences (e.g., NeurIPS, ICML, ICLR), and other AI conferences (e.g., MICCAI, ACL, EMNLP, NAACL, UAI, AAAI, IJCAI, AISTATS). Members are free to choose which paper(s) they will present (we can also provide suggestions), so the specific topics will vary based on the members' interests.
We are open to everyone who is interested, whether you are an undergrad, a grad student, or KHU staff, regardless of department. As long as you are interested in learning more about the fields (by reading cutting-edge research papers), you are welcome to join.
Suggested Papers
We maintain a pool of suggested papers here.
Credits: The contents and formats were modified from VT Vision and Learning Reading Group.