- arXiv paper link: https://arxiv.org/abs/1907.00831
Code refactoring (12/12)
- Tensorboard callback override to log in a single file
- Two initialization methods (MHT, association-based)
- Tracking evaluation code
- Demo tracking sequence update
OS: Windows 10 64-bit (verified to work on Ubuntu 18.04 as well)
CPU: Intel i5-8500 3.00 GHz
GPU: GeForce GTX Titan X (also works on GPUs with less memory, >= 5 GB)
RAM: 32 GB
python 3.6
tensorflow-gpu 2.1.0 (this exact version is required)
numpy 1.17.3
opencv 3.4.2
matplotlib 3.1.1
scikit-learn 0.22.1
- Set up the dataset folder with the following structure (a layout-check sketch follows this list)
ex) MOT
|__ TUD-Stadtmitte
| |__ det
| |__ gt
| |__ img1
|__ MOT16-02
|__ Custom_Sequence
.
.
- We recommend copying all MOTChallenge sequences into the MOT folder.
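As a quick sanity check, here is a minimal sketch (not part of the repository) that flags sequence folders missing the MOTChallenge subfolders; the dataset root 'MOT' below is a placeholder:

```python
import os

DATASET_ROOT = "MOT"  # placeholder; point this at your own dataset root

# Report any sequence folder that is missing a MOTChallenge subfolder.
for seq in sorted(os.listdir(DATASET_ROOT)):
    seq_dir = os.path.join(DATASET_ROOT, seq)
    if not os.path.isdir(seq_dir):
        continue
    missing = [sub for sub in ("det", "gt", "img1")
               if not os.path.isdir(os.path.join(seq_dir, sub))]
    if missing:
        print(f"{seq}: missing {missing}")
```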
- Download the pre-trained models and place them in the 'model' directory.
- Set the variable 'seq_path' in config.py to your own dataset path (a minimal config sketch follows this list).
  - The dataset should be structured as 'MOT/{sequence_folder-1, ..., sequence_folder-N}'.
  - Each sequence_folder should follow the MOTChallenge style (e.g., 'sequence_folder-1/{det, gt, img1}').
  - The simplest way is to copy all MOTChallenge datasets (2DMOT2015, MOT16, MOT17, MOT20, etc.) into the 'MOT' folder.
  - The compatible datasets are available on the MOTChallenge website.
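For illustration, the relevant line of config.py might look like the following; only the variable name 'seq_path' comes from this README, and the path value is a placeholder:

```python
# config.py (sketch): 'seq_path' is the variable named in this README;
# the value below is a placeholder and must point at your own MOT folder.
seq_path = 'D:/datasets/MOT'
```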
- Set the variable 'seqlist_name' in 'tracking_demo.py' to the proper group name.
  - We have already defined some sequence groups for testing the tracker.
  - Add your own tracking sequence group in 'sequence_groups' (a sketch of a group file follows this list).
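The exact schema of the group files is not documented in this README, so the sketch below assumes a simple mapping from a group name to a list of sequence folders; the file name 'my_group.json' and the schema itself are assumptions:

```python
import json
import os

# Hypothetical sequence group: the schema (group name -> sequence folders)
# is an assumption, not documented in this README.
group = {"my_group": ["TUD-Stadtmitte", "MOT16-02", "Custom_Sequence"]}

os.makedirs("sequence_groups", exist_ok=True)
with open(os.path.join("sequence_groups", "my_group.json"), "w") as f:
    json.dump(group, f, indent=2)
```

With such a group in place, 'seqlist_name' in 'tracking_demo.py' would be set to the matching name (here, 'my_group').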
- Perform tracking by running 'tracking_demo.py'.
  - Tracking thresholds can be adjusted in 'config.py'.
  - There are two on/off mode variables in 'tracking_demo.py' (a conceptual sketch of the frame dropping follows this list):
    - 'set_fps': changes the effective FPS by dropping video frames
    - 'semi_on': improves tracking performance using interpolation and restoration
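To make 'set_fps' concrete, here is a conceptual sketch of how frame dropping reaches a target FPS; it illustrates the idea only and is not the repository's implementation:

```python
def subsample_to_fps(num_frames, src_fps, dst_fps):
    """Return the frame indices kept when dropping from src_fps to dst_fps."""
    step = src_fps / dst_fps          # e.g., 30 -> 10 FPS keeps every 3rd frame
    kept, next_keep = [], 0.0
    for i in range(num_frames):
        if i >= next_keep:
            kept.append(i)
            next_keep += step
    return kept

# Example: a 30 FPS sequence reduced to 10 FPS.
print(subsample_to_fps(10, 30, 10))  # [0, 3, 6, 9]
```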
- Set up the data the same way as in the tracking settings above.
- Modify 'sequence_groups/trainval_group.json' to match your own dataset.
  - Every training and validation sequence should contain a 'gt' folder.
- Perform training using 'training_demo.py' (a hedged sketch of the two-stage idea follows this list).
  - JI-Net training should be performed first.
  - Using the pre-trained JI-Net model, the LSTM can then be trained.
  - The evaluation tool should be set up manually by the user.
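The internals of 'training_demo.py' are not documented here; the sketch below only illustrates the two-stage idea (train a JI-Net-style feature network first, then freeze it and train an LSTM on top), with placeholder architectures and shapes:

```python
import tensorflow as tf

# Stage 1 (placeholder): a small JI-Net-style feature network. In the real
# pipeline this network would be trained and saved before stage 2.
feature_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
])
feature_net.trainable = False  # freeze the pre-trained features for stage 2

# Stage 2 (placeholder): an LSTM consuming per-frame JI-Net-style features.
sequence_model = tf.keras.Sequential([
    tf.keras.layers.TimeDistributed(feature_net, input_shape=(None, 16)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
sequence_model.compile(optimizer="adam", loss="binary_crossentropy")
```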
- LSTM validation loss and accuracy (figure)
- We provide pre-trained models for both JI-Net and the LSTM.
- Place the downloaded models in the 'model' directory.
- Download links (a download sketch using 'gdown' follows):
  - JI-Net: https://drive.google.com/file/d/1VnJoyUOuDPbP82kgqznoZlKSaZ7QdaiZ/view?usp=sharing
  - LSTM: https://drive.google.com/file/d/1jkGdbSqfP7Pc9CyFxNT6aAAam1pWyA_X/view?usp=sharing
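One way to fetch the models programmatically is the third-party 'gdown' package (pip install gdown; a recent version that accepts the 'fuzzy' argument for share links). The output filenames below are placeholders, so keep whatever names the repository expects:

```python
import os
import gdown  # third-party: pip install gdown

os.makedirs("model", exist_ok=True)
# Output filenames are placeholders; rename to match what the code expects.
gdown.download("https://drive.google.com/file/d/1VnJoyUOuDPbP82kgqznoZlKSaZ7QdaiZ/view?usp=sharing",
               "model/ji-net-model", fuzzy=True)
gdown.download("https://drive.google.com/file/d/1jkGdbSqfP7Pc9CyFxNT6aAAam1pWyA_X/view?usp=sharing",
               "model/lstm-model", fuzzy=True)
```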
@inproceedings{ycyoon2018,
title={Online Multi-Object Tracking with Historical Appearance Matching and Scene Adaptive Detection Filtering},
author={Young-Chul Yoon and Abhijeet Boragule and Young-min Song and Kwangjin Yoon and Moongu Jeon},
year={2018},
booktitle={IEEE AVSS}
}
@article{ycyoon2020,
title={Online Multiple Pedestrians Tracking using Deep Temporal Appearance Matching Association},
author={Young-Chul Yoon and Du Yong Kim and Young-min Song and Kwangjin Yoon and Moongu Jeon},
year={2021},
journal={Information Sciences}
}
This tracker was awarded the 3rd prize at the 4th BMTT MOTChallenge Workshop, held in conjunction with CVPR 2019.




