<!DOCTYPE html>
<!--[if IE 8]>
<html lang="en" class="ie8">
<![endif]-->
<!--[if IE 9]>
<html lang="en" class="ie9">
<![endif]-->
<!--[if !IE]><!-->
<html lang="en">
<!--<![endif]-->
<head>
<title>Jianlong Wu</title>
<!-- Meta -->
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,Chrome=1">
<!-- <meta name="viewport" content="width=device-width, initial-scale=1.0">-->
<meta name="description" content="Jianlong Wu's homepage">
<link rel="shortcut icon" href="assets/images/log.png">
<link href='https://fonts.googleapis.com/css?family=Roboto:400,500,400italic,300italic,300,500italic,700,700italic,900,900italic' rel='stylesheet' type='text/css'>
<!-- Global CSS -->
<link rel="stylesheet" href="assets/css/bootstrap.min.css">
<link rel="stylesheet" href="assets/css/font-awesome/css/font-awesome.min.css">
<link rel="stylesheet" href="assets/css/main.css">
<script src="bootstrap/js/bootstrap.min.js"></script>
<!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
<script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
<![endif]-->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-88572407-1', 'auto');
ga('send', 'pageview');
</script>
</head>
<body>
<div class="container">
<div class="row">
<div class='row'>
<div class='col-xs-3'>
<div class='photo'>
<img src="assets/images/jlwu.jpg" alt="photo"/>
</div>
</div>
<div class='col-xs-8'>
<h3>
Jianlong Wu (吴建龙)
</h3>
<p>
I'm Jianlong Wu, a professor and doctoral advisor at the <a href="http://cs.hitsz.edu.cn/">School of Computer Science and Technology</a>, <a href="https://www.hitsz.edu.cn/">Harbin Institute of Technology (Shenzhen)</a>. I worked as an assistant professor at Shandong University from 2019 to 2022. I received my Ph.D. degree in computer vision from the <a href="https://zero-lab-pku.github.io/">ZERO Lab</a>, <a href="http://eecs.pku.edu.cn/">School of Electronics Engineering and Computer Science</a>, <a href="https://www.pku.edu.cn/">Peking University</a> in 2019, advised by Professor <a href="https://zhouchenlin.github.io/">Zhouchen Lin</a> (IEEE Fellow) and Professor <a href="https://www.cis.pku.edu.cn/info/1177/1379.htm">Hongbin Zha</a>. In 2014, I received my bachelor's degree in electronics and information engineering from the advanced class, <a href="https://www.hust.edu.cn/">Huazhong University of Science and Technology (HUST)</a>.
</p>
<div class='researchInt'>
<h3>Research Interest</h3>
<p>Computer Vision, Multi-modal Learning. </p>
</div>
<p>
<a href="mailto:wujianlong@hit.edu.cn"><i class="fa fa-envelope"></i> wujianlong@hit.edu.cn</a>
<a href="https://scholar.google.com/citations?user=XGeEH-IAAAAJ&hl=zh-CN" target="_blank"><i class="fa fa-globe"></i> Google Scholar</a>
<a href="https://faculty.hitsz.edu.cn/wujianlong" target="_blank"><i class="fa fa-paper-plane"></i> Chinese Profile (中文简介)</a>
</p>
</div>
</div>
<hr>
<h3>
<a name='recruit'></a> Recruitment
</h3>
<div class='masters'>
I'm recruiting <strong>self-motivated postdoctoral fellows, Ph.D. students, and master's students</strong> who have strong mathematical abilities and coding skills to work with me on multimodal learning related research topics. Welcome to send me your detailed resume!
</div>
<hr>
<h3>
<a name='news'></a> News
</h3>
<div class='news'>
<ul>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2025/10] I will serve as an Associate Editor of IEEE TPAMI and TMM.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2025/09] Selected for the World's Top 2% Scientists. One paper accepted by TPAMI.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2025/07] Our work won the Zuchongzhi Outstanding Achievement Award and the CVPR 2025 VideoLLMs Championship. One paper accepted by TPAMI.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2025/02] Our work won the first prize of the CAA Natural Science Award. Several papers were accepted by TPAMI, TMM, and TCSVT.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2024/10] I will serve as an Area Chair of CVPR 2025 and ICML 2025. Several papers were accepted by ECCV, ACM MM, and NeurIPS.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2024/03] I will serve as an Area Chair of NeurIPS 2024 and ACM MM 2024. Two papers were accepted by TPAMI.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2023/12] Won the first prize of the Shandong Provincial Technological Invention Award and the Young Elite Scientists Sponsorship Program by CAST.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2023/08] Three papers were accepted by ACM MM.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2023/05] I will serve as an Area Chair of ACM Multimedia 2023. Two papers were accepted by ACL and SIGIR, respectively.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2023/03] I will serve as an Area Chair of NeurIPS 2023 and a Guest Editor of TCSVT. Two papers were accepted by CVPR and TIP, respectively.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2022/07] Three papers were accepted by TNNLS, ECCV 2022, and ACM MM 2022, respectively.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2022/02] Two papers were accepted by CVPR 2022 and three papers were accepted by IEEE Trans. Multimedia.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2021/12] Our work won the first prize of the Shandong Provincial Science and Technology Progress Award in 2021.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2021/07] Our paper received the Best Student Paper Award of SIGIR 2021, and one paper was accepted by ICCV 2021.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2021/04] One paper was accepted by PR and another was accepted by SIGIR 2021.</li>
<!--<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2020/10] I will serve as a Senior Program Committee (SPC) Member for IJCAI 2021.</li>-->
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2020/09] Four papers were accepted by NeurIPS 2020, ICML 2020, SIGIR 2020, and ECCV 2020, respectively.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2019/11] Three papers were accepted by AAAI 2020.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2019/07] Two papers were accepted by ICCV 2019, with one oral presentation.</li>
<li><i class="fa fa-flag" style="color:#FF0000" aria-hidden="true"></i> [2019/05] Two papers were accepted by IEEE Trans. Image Processing and ICML 2019.</li>
</ul>
</div>
<hr>
<h3>
<a name='publications'></a> Publications
</h3>
(* denotes equal contributions and ^ denotes corresponding author)
<h4>
<a name='preprint'></a> Preprints
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning</strong><br />
Xiaojie Li, Yibo Yang, <strong>Jianlong Wu^</strong>, Bernard Ghanem, Liqiang Nie, Min Zhang<br />
<a href="https://arxiv.org/abs/2407.06136">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Continuous Knowledge-Preserving Decomposition for Few-Shot Continual Learning</strong><br />
Xiaojie Li, Yibo Yang, <strong>Jianlong Wu^</strong>, David A Clifton, Yue Yu, Bernard Ghanem, Min Zhang<br />
<a href="https://arxiv.org/abs/2501.05017">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding</strong><br />
Xiao Wang, Qingyi Si, <strong>Jianlong Wu^</strong>, Shiyu Zhu, Li Cao, Liqiang Nie<br />
<a href="https://arxiv.org/abs/2412.20504">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>WKVQuant: Quantizing Weight and Key/Value Cache for Large Language Models Gains More</strong><br />
Yuxuan Yue, Zhihang Yuan, Haojie Duanmu, Sifan Zhou, <strong>Jianlong Wu^</strong>, Liqiang Nie<br />
<a href="https://arxiv.org/abs/2402.12065">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>MegaSR: Mining Customized Semantics and Expressive Guidance for Image Super-Resolution</strong><br />
Xinrui Li, <strong>Jianlong Wu^</strong>, Xinchuan Huang, Chong Chen, Weili Guan, Xian-Sheng Hua, Liqiang Nie<br />
<a href="https://arxiv.org/abs/2503.08096">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2025'></a> 2025
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>A Survey on Video Temporal Grounding with Multimodal Large Language Model</strong><br />
<strong>Jianlong Wu</strong>, Wei Liu, Ye Liu, Meng Liu, Liqiang Nie, Zhouchen Lin, Chang Wen Chen<br />
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), 2025<br />
<a href="https://arxiv.org/abs/2508.10922">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>ClusMatch: Improving Deep Clustering by Unified Positive and Negative Pseudo-label Learning</strong><br />
<strong>Jianlong Wu</strong>, Zihan Li, Wei Sun, Jianhua Yin, Liqiang Nie, Zhouchen Lin<br />
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), 2025<br />
<a href="https://ieeexplore.ieee.org/document/11079791">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Video DataFlywheel: Resolving the Impossible Data Trinity in Video-Language Understanding</strong><br />
Xiao Wang, <strong>Jianlong Wu^</strong>, Zijia Lin, Fuzheng Zhang, Di Zhang, Liqiang Nie^<br />
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), 2025<br />
<a href="https://ieeexplore.ieee.org/document/10839067">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>HAIC: Improving Human Action Understanding and Generation with Better Captions for Multi-modal Large Language Models</strong><br />
Xiao Wang, Jingyun Hua, Weihong Lin, Yuanxing Zhang, Fuzheng Zhang, <strong>Jianlong Wu^</strong>, Di Zhang, Liqiang Nie^<br /> The Annual Meeting of the Association for Computational Linguistics (<strong>ACL</strong>), 2025<br />
<a href="https://arxiv.org/abs/2502.20811">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for Video-language Understanding</strong><br />
Xiao Wang, Qingyi Si, <strong>Jianlong Wu^</strong>, Shiyu Zhu, Li Cao, Liqiang Nie^<br />Findings of the Annual Meeting of the Association for Computational Linguistics (<strong>ACL</strong>), 2025<br />
<a href="https://arxiv.org/abs/2503.12559">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>FineBadminton: A Multi-Level Dataset for Fine-Grained Badminton Video Understanding</strong><br />
Xusheng He, Wei Liu, Shanshan Ma^, Qian Liu, Chenghao Ma, <strong>Jianlong Wu^</strong><br />
ACM Conference on Multimedia (<strong>ACM MM</strong>), 2025<br />
<a href="https://arxiv.org/abs/2508.07554">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training</strong><br />
Muhe Ding, Yang Ma, Pengda Qin, <strong>Jianlong Wu^</strong>, Yuhong Li, Liqiang Nie<br />
IEEE Transactions on Multimedia (<strong>TMM</strong>), 2025<br />
<a href="https://arxiv.org/abs/2410.14154">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Preview-based Category Contrastive Learning for Knowledge Distillation</strong><br />
Muhe Ding, <strong>Jianlong Wu^</strong>, Xue Dong, Xiaojie Li, Pengda Qin, Tian Gan, Liqiang Nie<br />
IEEE Transactions on Circuits and Systems for Video Technology (<strong>TCSVT</strong>), 2025<br />
<a href="https://arxiv.org/abs/2410.14143">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>LipGen: Viseme-Guided Lip Video Generation for Enhancing Visual Speech Recognition</strong><br />
Bowen Hao, Dongliang Zhou, Xiaojie Li, Xingyu Zhang^, Liang Xie, <strong>Jianlong Wu^</strong>, Erwei Yin<br />
IEEE International Conference on Acoustics, Speech and Signal Processing (<strong>ICASSP</strong>), 2025<br />
<a href="https://ieeexplore.ieee.org/document/10889163">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture</strong><br />
Qianlong Xiang, Miao Zhang, Yuzhang Shang, <strong>Jianlong Wu</strong>, Yan Yan, Liqiang Nie<br />
IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2025<br />
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>AffordGrasp: In-Context Affordance Reasoning for Open-Vocabulary Task-Oriented Grasping in Clutter</strong><br />
Yingbo Tang, Shuaike Zhang, Xiaoshuai Hao, Pengwei Wang, <strong>Jianlong Wu</strong>, Zhongyuan Wang, Shanghang Zhang<br />
The IEEE/RSJ International Conference on Intelligent Robots and Systems (<strong>IROS</strong>), 2025<br />
<a href="https://arxiv.org/abs/2503.00778">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2024'></a> 2024
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>GenView: Enhancing View Quality with Pretrained Generative Model for Self-supervised Learning</strong><br />
Xiaojie Li, Yibo Yang^, Xiangtai Li, <strong>Jianlong Wu^</strong>, Yue Yu, Bernard Ghanem, Min Zhang<br />
European Conference on Computer Vision (<strong>ECCV</strong>), 2024 <br />
<a href="https://link.springer.com/chapter/10.1007/978-3-031-73113-6_18">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Differential-Perceptive and Retrieval-Augmented MLLM for Change Captioning</strong><br />
Xian Zhang, Haokun Wen, <strong>Jianlong Wu^</strong>, Pengda Qin, Hui Xue, Liqiang Nie^ <br />
ACM Conference on Multimedia (<strong>ACM MM</strong>), 2024 <br />
<a href="https://dl.acm.org/doi/10.1145/3664647.3681453">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>CorDA: Context-oriented Decomposition Adaptation of Large Language Models</strong><br />
Yibo Yang, Xiaojie Li, Zhongzhu Zhou, Shuaiwen Leon Song, <strong>Jianlong Wu</strong>, Liqiang Nie, Bernard Ghanem <br />
Advances in Neural Information Processing Systems (<strong>NeurIPS</strong>), 2024 <br />
<a href="https://arxiv.org/abs/2406.05223">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Detecting and Grounding Multi-modal Media Manipulation and Beyond</strong><br />
Rui Shao, Tianxing Wu, <strong>Jianlong Wu</strong>, Liqiang Nie, Ziwei Liu<br />
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), 2024<br />
<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10440475">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Self-Training Boosted Multi-Factor Matching Network for Composed Image Retrieval</strong><br />
Haokun Wen, Xuemeng Song, Jianhua Yin, <strong>Jianlong Wu</strong>, Weili Guan, Liqiang Nie<br />
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), 2024<br />
<a href="https://ieeexplore.ieee.org/abstract/document/10373096">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>SSCNet: Learning-based subspace clustering</strong><br />
Xingyu Xie, <strong>Jianlong Wu</strong>, Guangcan Liu, Zhouchen Lin<br />
Visual Intelligence, 2024<br />
<a href="https://link.springer.com/article/10.1007/s44267-024-00043-0">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2023'></a> 2023
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Neighbor-guided Consistent and Contrastive Learning for Semi-supervised Action Recognition</strong><br />
<strong>Jianlong Wu</strong>, Wei Sun, Tian Gan, Ning Ding, Feijun Jiang, Jialie Shen, Liqiang Nie<br />
IEEE Transactions on Image Processing (<strong>TIP</strong>), 2023<br />
<a href="https://ieeexplore.ieee.org/document/10100655">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>CHMATCH: Contrastive Hierarchical Matching and Robust Adaptive Threshold Boosted Semi-Supervised Learning</strong><br />
<strong>Jianlong Wu</strong>, Haozhe Yang, Tian Gan, Ning Ding, Feijun Jiang, Liqiang Nie<br />
IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2023<br />
<a href="https://openaccess.thecvf.com/content/CVPR2023/papers/Wu_CHMATCH_Contrastive_Hierarchical_Matching_and_Robust_Adaptive_Threshold_Boosted_Semi-Supervised_CVPR_2023_paper.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Self-adaptive Context and Modal-interaction Modeling For Multimodal Emotion Recognition</strong><br />
Haozhe Yang, Xianqiang Gao, <strong>Jianlong Wu^</strong>, Tian Gan, Ning Ding, Feijun Jiang, Liqiang Nie<br />
Findings of the Annual Meeting of the Association for Computational Linguistics (<strong>ACL</strong>), 2023<br />
<a href="https://aclanthology.org/2023.findings-acl.390/">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Fine-grained Key-Value Memory Enhanced Predictor for Video Representation Learning</strong><br />
Xiaojie Li, <strong>Jianlong Wu^</strong>, Shaowei He, Kang Shuo, Yue Yu, Liqiang Nie, Min Zhang<br />
ACM Conference on Multimedia (<strong>ACM MM</strong>), 2023<br />
<a href="https://dl.acm.org/doi/10.1145/3581783.3612131">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Temporal Sentence Grounding in Streaming Videos</strong><br />
Tian Gan, Xiao Wang, Yan Sun, <strong>Jianlong Wu^</strong>, Qingpei Guo, Liqiang Nie<br />
ACM Conference on Multimedia (<strong>ACM MM</strong>), 2023<br />
<a href="https://dl.acm.org/doi/10.1145/3581783.3612120">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Mask Again: Masked Knowledge Distillation for Masked Video Modeling</strong><br />
Xiaojie Li, Shaowei He, <strong>Jianlong Wu^</strong>, Yue Yu, Liqiang Nie^, Min Zhang<br />
ACM Conference on Multimedia (<strong>ACM MM</strong>), 2023<br />
<a href="https://dl.acm.org/doi/10.1145/3581783.3612129">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Semantic-aware Modular Capsule Routing for Visual Question Answering</strong><br />
Yudong Han, Jianhua Yin, <strong>Jianlong Wu</strong>, Yinwei Wei, Liqiang Nie<br />
IEEE Transactions on Image Processing (<strong>TIP</strong>), 2023<br />
<a href="https://ieeexplore.ieee.org/document/10268338">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>SNP-S3: Shared Network Pre-training and Significant Semantic Strengthening for Various Video-Text Tasks</strong><br />
Xingning Dong, Qingpei Guo, Tian Gan, Qing Wang, <strong>Jianlong Wu</strong>, Xiangyuan Ren, Yuan Cheng, Wei Chu<br />
IEEE Transactions on Circuits and Systems for Video Technology (<strong>TCSVT</strong>), 2023<br />
<a href="https://ieeexplore.ieee.org/document/10214396">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Multi-Granularity Interaction and Integration Network for Video Question Answering</strong><br />
Yuanyuan Wang, Meng Liu, <strong>Jianlong Wu</strong>, Liqiang Nie<br />
IEEE Transactions on Circuits and Systems for Video Technology (<strong>TCSVT</strong>), 2023<br />
<a href="https://ieeexplore.ieee.org/document/10130300">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>OFAR: A Multimodal Evidence Retrieval Framework for Illegal Live-streaming Identification</strong><br />
Dengtian Lin, Yang Ma, Yuhong Li, Xuemeng Song, <strong>Jianlong Wu</strong>, Liqiang Nie<br />
Industrial Track of the International ACM SIGIR Conference on Research and Development in Information Retrieval (<strong>SIGIR</strong>), 2023<br />
<a href="https://dl.acm.org/doi/10.1145/3539618.3591864">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2022'></a> 2022
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Micro-video Tagging via Jointly Modeling Social Influence and Tag Relation</strong><br />
Xiao Wang, Tian Gan^, Yinwei Wei, <strong>Jianlong Wu^</strong>, Xiaoqiang Lei, Liqiang Nie<br />
ACM Conference on Multimedia (<strong>ACM MM</strong>), 2022<br />
<a href="https://dl.acm.org/doi/abs/10.1145/3503161.3548098">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors</strong><br />
Luting Wang, Xiaojie Li, Yue Liao, Zeren Jiang, <strong>Jianlong Wu</strong>, Fei Wang, Chen Qian, Si Liu<br />
European Conference on Computer Vision (<strong>ECCV</strong>), 2022 <br />
<a href="https://arxiv.org/abs/2207.05345">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>TryonCM2: Try-on-Enhanced Fashion Compatibility Modeling Framework</strong><br />
Xue Dong, Xuemeng Song, Na Zheng, <strong>Jianlong Wu</strong>, Hongjun Dai, Liqiang Nie<br />
IEEE Transactions on Neural Networks and Learning Systems (<strong>TNNLS</strong>), 2022<br />
<a href="https://ieeexplore.ieee.org/abstract/document/9775146/">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation</strong><br />
Xingning Dong, Tian Gan, Xuemeng Song, <strong>Jianlong Wu</strong>, Yuan Cheng, Liqiang Nie<br />
IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2022<br />
<a href="https://arxiv.org/abs/2203.09811">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>High Quality Segmentation for Ultra High-resolution Images</strong><br />
Tiancheng Shen, Yuechen Zhang, Lu Qi, Jason Kuen, Xingyu Xie, <strong>Jianlong Wu</strong>, Zhe Lin, Jiaya Jia<br />
IEEE/CVF Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), 2022<br />
<a href="https://arxiv.org/abs/2111.14482">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Self-supervised Correlation Learning for Cross-Modal Retrieval</strong><br />
Yaxin Liu, <strong>Jianlong Wu^</strong>, Leigang Qu, Tian Gan, Jianhua Yin, Liqiang Nie<br />
IEEE Transactions on Multimedia (<strong>TMM</strong>), 2022<br />
<a href="https://ieeexplore.ieee.org/abstract/document/9714824/">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Micro-influencer Recommendation by Multi-perspective Account Representation Learning</strong><br />
Shaokun Wang, Tian Gan^, Yuan Liu, <strong>Jianlong Wu^</strong>, Yuan Cheng, Liqiang Nie<br />
IEEE Transactions on Multimedia (<strong>TMM</strong>), 2022<br />
<a href="https://ieeexplore.ieee.org/abstract/document/9712372/">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>DualGNN: Dual Graph Neural Network for Multimedia Recommendation</strong><br />
Qifan Wang, Yinwei Wei, Jianhua Yin, <strong>Jianlong Wu</strong>, Xuemeng Song, Liqiang Nie<br />
IEEE Transactions on Multimedia (<strong>TMM</strong>), 2022<br />
<a href="https://ieeexplore.ieee.org/abstract/document/9662655/">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2021'></a> 2021
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Graph Contrastive Clustering</strong><br />
Huasong Zhong*, <strong>Jianlong Wu*</strong>, Chong Chen, Jianqiang Huang, Minghua Deng, Liqiang Nie, Zhouchen Lin, Xiansheng Hua<br />
International Conference on Computer Vision (<strong>ICCV</strong>), 2021<br />
<a href="https://arxiv.org/pdf/2104.01429.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Reconstruction Regularized Low-Rank Subspace Learning for Cross-Modal Retrieval</strong><br />
<strong>Jianlong Wu</strong>, Xingyu Xie, Liqiang Nie, Zhouchen Lin, Hongbin Zha<br />
Pattern Recognition (<strong>PR</strong>), 2021<br />
<a href="https://www.sciencedirect.com/science/article/pii/S0031320320306166">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Dynamic Modality Interaction Modeling for Image-Text Retrieval</strong><br />
Leigang Qu, Meng Liu, <strong>Jianlong Wu</strong>, Zan Gao, Liqiang Nie <br />
International ACM SIGIR Conference on Research and Development in Information Retrieval (<strong>SIGIR, Best Student Paper</strong>), 2021<br />
<a href="https://dl.acm.org/doi/abs/10.1145/3404835.3462829">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Discover Micro-influencers for Brands via Better Understanding</strong><br />
Shaokun Wang, Tian Gan, Yuan Liu, Li Zhang, <strong>Jianlong Wu</strong>, Liqiang Nie<br />
IEEE Transactions on Multimedia (<strong>TMM</strong>), 2021<br />
<a href="https://ieeexplore.ieee.org/abstract/document/9454334">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2020'></a> 2020
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Agree to Disagree: Adaptive Ensemble Knowledge Distillation in Gradient Space</strong><br />
Shangchen Du*, Shan You*^, Xiaojie Li, <strong>Jianlong Wu^</strong>, Fei Wang, Chen Qian, Changshui Zhang<br />
Advances in Neural Information Processing Systems (<strong>NeurIPS</strong>), 2020 <br />
<a href="https://proceedings.neurips.cc/paper/2020/file/91c77393975889bd08f301c9e13a44b7-Paper.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Local Correlation Consistency for Knowledge Distillation</strong><br />
Xiaojie Li, <strong>Jianlong Wu^</strong>, Hongyu Fang, Yue Liao, Fei Wang, Chen Qian<br />
European Conference on Computer Vision (<strong>ECCV</strong>), 2020 <br />
<a href="http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123570018.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Maximum-and-Concatenation Networks</strong><br />
Xingyu Xie, Hao Kong, <strong>Jianlong Wu</strong>, Wayne Zhang, Guangcan Liu, Zhouchen Lin<br />
International Conference on Machine Learning (<strong>ICML</strong>), 2020<br />
<a href="https://arxiv.org/abs/2007.04630">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Fashion Compatibility Modeling through a Multi-modal Try-on-guided Scheme</strong><br />
Xue Dong, <strong>Jianlong Wu</strong>, Xuemeng Song, Hongjun Dai, Liqiang Nie<br />
International ACM SIGIR Conference on Research and Development in Information Retrieval (<strong>SIGIR</strong>), 2020<br />
<a href="https://dl.acm.org/doi/abs/10.1145/3397271.3401047">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Unified Graph and Low-rank Tensor Learning for Multi-view Clustering</strong><br />
<strong>Jianlong Wu*</strong>, Xingyu Xie*, Liqiang Nie, Zhouchen Lin, Hongbin Zha<br />
AAAI Conference on Artificial Intelligence (<strong>AAAI</strong>), 2020<br />
<a href="https://aaai.org/ojs/index.php/AAAI/article/view/6109">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Dynamical System Inspired Adaptive Time Stepping Controller for Residual Network Families</strong><br />
Yibo Yang*, <strong>Jianlong Wu*</strong>, Hongyang Li, Xia Li, Tiancheng Shen, Zhouchen Lin<br />
AAAI Conference on Artificial Intelligence (<strong>AAAI</strong>), 2020<br />
<a href="https://aaai.org/ojs/index.php/AAAI/article/view/6141">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>SOGNet: Scene Overlap Graph Network for Panoptic Segmentation</strong><br />
Yibo Yang*, Hongyang Li*, Xia Li, Qijie Zhao, <strong>Jianlong Wu</strong>, Zhouchen Lin<br />
AAAI Conference on Artificial Intelligence (<strong>AAAI</strong>), 2020<br />
<a href="https://arxiv.org/abs/1911.07527">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2019'></a> 2019
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Deep Comprehensive Correlation Mining for Image Clustering</strong><br />
<strong>Jianlong Wu*</strong>, Keyu Long*, Fei Wang, Chen Qian, Cheng Li, Zhouchen Lin, Hongbin Zha<br />
International Conference on Computer Vision (<strong>ICCV</strong>), 2019<br />
<a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Wu_Deep_Comprehensive_Correlation_Mining_for_Image_Clustering_ICCV_2019_paper.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Expectation-Maximization Attention Networks for Semantic Segmentation</strong><br />
Xia Li, Zhisheng Zhong, <strong>Jianlong Wu</strong>, Yibo Yang, Zhouchen Lin, Hong Liu<br />
International Conference on Computer Vision (<strong>ICCV, Oral</strong>), 2019<br />
<a href="http://openaccess.thecvf.com/content_ICCV_2019/papers/Li_Expectation-Maximization_Attention_Networks_for_Semantic_Segmentation_ICCV_2019_paper.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Essential Tensor Learning for Multi-view Spectral Clustering</strong><br />
<strong>Jianlong Wu</strong>, Zhouchen Lin, Hongbin Zha<br />
IEEE Transactions on Image Processing (<strong>TIP</strong>), 2019<br />
<a href="https://arxiv.org/pdf/1807.03602.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Differentiable Linearized ADMM</strong><br />
Xingyu Xie*, <strong>Jianlong Wu*</strong>, Zhisheng Zhong, Guangcan Liu, Zhouchen Lin<br />
International Conference on Machine Learning (<strong>ICML</strong>), 2019<br />
<a href="http://proceedings.mlr.press/v97/xie19c/xie19c.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>R^2-Net: Recurrent and Recursive Network for Sparse-view CT Artifacts Removal</strong><br />
Tiancheng Shen*, Xia Li*, Zhisheng Zhong, <strong>Jianlong Wu</strong>, Zhouchen Lin<br />
International Conference on Medical Image Computing and Computer Assisted Intervention (<strong>MICCAI</strong>), 2019<br />
<a href="https://link.springer.com/chapter/10.1007/978-3-030-32226-7_36">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Matrix Recovery With Implicitly Low-rank Data</strong><br />
Xingyu Xie, <strong>Jianlong Wu</strong>, Guangcan Liu, Jun Wang<br />
Neurocomputing, 2019<br />
<a href="https://www.sciencedirect.com/science/article/pii/S0925231219300426">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2018'></a> 2018
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining</strong><br />
Xia Li*, <strong>Jianlong Wu*</strong>, Zhouchen Lin, Hong Liu, Hongbin Zha<br />
European Conference on Computer Vision (<strong>ECCV</strong>), 2018 <br />
<a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Xia_Li_Recurrent_Squeeze-and-Excitation_Context_ECCV_2018_paper.pdf">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Joint Dictionary Learning and Semantic Constrained Latent Subspace Projection for Cross-modal Retrieval</strong><br />
<strong>Jianlong Wu</strong>, Zhouchen Lin, Hongbin Zha<br />
ACM International Conference on Information and Knowledge Management (<strong>CIKM</strong>, short paper), 2018 <br />
<a href="https://dl.acm.org/citation.cfm?id=3269296">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2017'></a> 2017
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Joint Latent Subspace Learning and Regression for Cross-modal Retrieval</strong><br />
<strong>Jianlong Wu</strong>, Zhouchen Lin, Hongbin Zha<br />
International ACM SIGIR Conference on Research and Development in Information Retrieval (<strong>SIGIR</strong>, short paper), 2017 <br />
<a href="https://dl.acm.org/citation.cfm?id=3080678">[PDF]</a>
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Locality-constrained Linear Coding Based Bi-layer Model for Multi-view Facial Expression Recognition</strong><br />
<strong>Jianlong Wu</strong>, Zhouchen Lin, Wenming Zheng, Hongbin Zha<br />
Neurocomputing, 2017 <br />
<a href="https://www.sciencedirect.com/science/article/pii/S0925231217302825">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2016'></a> 2016
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Multi-view Common Space Learning for Emotion Recognition in the Wild</strong><br />
<strong>Jianlong Wu</strong>, Zhouchen Lin, Hongbin Zha<br />
ACM International Conference on Multimodal Interaction (<strong>ICMI</strong>), 2016<br />
<a href="https://dl.acm.org/citation.cfm?id=2997631">[PDF]</a>
</p>
</div>
</div>
<h4>
<a name='2015'></a> 2015
</h4>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Multiple Models Fusion for Emotion Recognition in the Wild</strong><br />
<strong>Jianlong Wu</strong>, Zhouchen Lin, Hongbin Zha<br />
ACM International Conference on Multimodal Interaction (<strong>ICMI</strong>), 2015<br />
<a href="https://dl.acm.org/citation.cfm?id=2830582">[PDF]</a>
</p>
</div>
</div>
<hr>
<h3>
<a name='service'></a> Academic Services
</h3>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Associate Editor:</strong><br />
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>),
IEEE Transactions on Multimedia (<strong>TMM</strong>)
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Area Chair:</strong><br />
International Conference on Machine Learning (<strong>ICML</strong>, 2025), IEEE Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>, 2025), Neural Information Processing Systems (<strong>NeurIPS</strong>, 2024/2023),
ACM Multimedia (<strong>ACM MM</strong>, 2024/2023),
IEEE International Conference on Pattern Recognition (<strong>ICPR</strong>, 2022/2020)
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Reviewer for Journals:</strong><br />
IEEE Transactions on Pattern Analysis and Machine Intelligence (<strong>TPAMI</strong>), International Journal of Computer Vision (<strong>IJCV</strong>), IEEE Transactions on Image Processing (<strong>TIP</strong>), IEEE Transactions on Neural Networks and Learning Systems (<strong>TNNLS</strong>), Pattern Recognition (<strong>PR</strong>), IEEE Transactions on Multimedia (<strong>TMM</strong>), IEEE Transactions on Cybernetics (<strong>TCYB</strong>)
</p>
</div>
</div>
<div class="media">
<div class="media-body">
<p class="media-heading">
<strong>Reviewer (or Program Committee Member) for Conferences:</strong><br />
International Conference on Machine Learning (<strong>ICML</strong>), Neural Information Processing Systems (<strong>NeurIPS</strong>), IEEE Conference on Computer Vision and Pattern Recognition (<strong>CVPR</strong>), International Conference on Computer Vision (<strong>ICCV</strong>), European Conference on Computer Vision (<strong>ECCV</strong>), AAAI Conference on Artificial Intelligence (<strong>AAAI</strong>), ACM Conference on Multimedia (<strong>ACM MM</strong>), International Conference on Learning Representations (<strong>ICLR</strong>), International Joint Conferences on Artificial Intelligence (<strong>IJCAI</strong>)
</p>
</div>
</div>
<hr>
<h3>
<a name='awards'></a> Selected Awards
</h3>
<div class='award'>
<ul >
<li>2025 World's Top 2% Scientists
<li>2024 First Prize of the Chinese Association of Automation (CAA) Natural Science Award
<li>2024 ACM China Rising Star Nomination
<li>2024 ACM SIGMM China Rising Star
<li>2023 First Prize of the Shandong Provincial Technological Invention Award
<li>2023 Young Elite Scientists Sponsorship Program by CAST
<li>2022 Outstanding Young Scholar, Harbin Institute of Technology (Shenzhen)
<li>2021 First Prize of the Shandong Provincial Science and Technology Progress Award
<li>2021 Best Student Paper of SIGIR 2021
<li>2020 Future Program for Young Scholars, Shandong University
<li>2020 Top Reviewer of ICML 2020
<li>2019 Outstanding Graduate, Peking University
<li>2019 ICML Travel Award
<li>2018 National Scholarship for Ph.D. Student (Top 2% in PKU)
<li>2018 Pacemaker to Merit Student, Peking University
<li>2017 Merit Student, Peking University
<li>2016 Outstanding Academic Award, Peking University
<li>2014 Outstanding Graduate, HUST
<li>2014 Excellent Student Cadre, HUST
</ul>
</div>
<hr>
<h3>
<a name='teaching'></a> Teaching
</h3>
<div class='teaching'>
<ul >
<li>Introduction to Object-Oriented Software Construction, Harbin Institute of Technology (Shenzhen), Spring 2025/2024/2023.</li>
<li>Digital Image Processing, Shandong University, Fall 2021/2020/2019.</li>
<li>Advanced Language Programming, Shandong University, Spring 2022/2021/2020.</li>
<li>Computer Vision, Shandong University, Fall 2020/2019.</li>
</ul>
</div>
</div>
</div>
<!--//main-body-->
<hr>
<footer class="footer">
<div class="container">
<small class="copyright">&copy; 2025 Jianlong Wu</small>
</div>
<!--//container-->
<!--/footer-->
<!--//footer-->
</body>
</html>