Browsing by Author "Ateş, H. F."
Now showing 1 - 2 of 2
Article | Open Access
HM-Net: A regression network for object center detection and tracking on wide area motion imagery (IEEE, 2022)
Authors: Motorcu, Hakkı; Ateş, H. F.; Uğurdağ, Hasan Fatih; Güntürk, B. K.
Department: Electrical & Electronics Engineering
Abstract: Wide Area Motion Imagery (WAMI) yields high-resolution images with a large number of extremely small objects. Target objects have large spatial displacements throughout consecutive frames. This nature of WAMI images makes object detection and tracking challenging. In this paper, we present our deep neural network-based combined object detection and tracking model, namely, Heat Map Network (HM-Net). HM-Net is significantly faster than state-of-the-art frame-differencing and background-subtraction-based methods, without compromising detection and tracking performance. HM-Net follows the object-center-based joint detection and tracking paradigm. Simple heat-map-based predictions support an unlimited number of simultaneous detections. The proposed method uses two consecutive frames and the object detection heat map obtained from the previous frame as input, which helps HM-Net monitor spatio-temporal changes between frames and keep track of previously predicted objects. Although reuse of the prior object detection heat map acts as a vital feedback-based memory element, it can lead to an unintended surge of false-positive detections. To increase the robustness of the method against false positives and to eliminate low-confidence detections, HM-Net employs novel feedback filters and advanced data augmentations. HM-Net outperforms state-of-the-art WAMI moving object detection and tracking methods on the WPAFB dataset with its 96.2% F1 and 94.4% mAP detection scores, while achieving a 61.8% mAP tracking score on the same dataset. This performance corresponds to an improvement of 2.1% in F1 and 6.1% in mAP for detection, and 9.5% in mAP for tracking, over the state of the art.

Conference Object | Metadata Only
VisDrone-MOT2021: The vision meets drone multiple object tracking challenge results (IEEE, 2021)
Authors: Chen, G.; Wang, W.; He, Z.; Wang, L.; Yuan, Y.; Zhang, D.; Zhang, J.; Zhu, P.; Gool, L. V.; Han, J.; Hoi, S.; Hu, Q.; Liu, M.; Sciarrone, A.; Sun, C.; Garibotto, C.; Tran, D. N. N.; Lavagetto, F.; Haleem, H.; Motorcu, Hakkı; Ateş, H. F.; Jeon, H. J.; Bisio, I.; Jeon, J. W.; Li, J.; Pham, J. H.; Jeon, M.; Feng, Q.; Li, S.; Tran, T. H. P.; Pan, X.; Song, Y. M.; Yao, Y.; Du, Y.; Xu, Z.; Luo, Z.
Abstract: The Vision Meets Drone: Multiple Object Tracking (VisDrone-MOT2021) challenge, the fourth annual activity organized by the VisDrone team, focuses on benchmarking UAV MOT algorithms in realistic challenging environments. It was held in conjunction with ICCV 2021. VisDrone-MOT2021 contains 96 video sequences in total, including 56 sequences (~24K frames) for training, 7 sequences (~3K frames) for validation, and 33 sequences (~13K frames) for testing. Bounding-box annotations for novel object categories are provided in every frame, and temporally consistent instance IDs are also given. Additionally, occlusion ratio and truncation ratio are provided as extra useful annotations. The results of eight state-of-the-art MOT algorithms are reported and discussed. We hope that our VisDrone-MOT2021 challenge will facilitate future research and applications in the field of UAV vision.
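The HM-Net abstract describes a feedback-based interface: the network takes two consecutive frames plus the previous frame's detection heat map, and its output heat map is fed back at the next step. The sketch below is not the authors' code; the backbone, layer sizes, and threshold are illustrative assumptions, and it only shows that input/output loop together with standard local-maximum decoding of object centers (the paper's feedback filters and augmentations are omitted).

```python
# Minimal sketch (assumed architecture, not HM-Net itself) of the
# two-frames-plus-previous-heat-map interface described in the abstract.
import torch
import torch.nn as nn

class CenterHeatmapNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 2 grayscale frames + 1 previous heat map = 3 input channels (assumed)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # 1-channel object-center heat map
        )

    def forward(self, prev_frame, cur_frame, prev_heatmap):
        # Stack the spatio-temporal inputs along the channel dimension.
        x = torch.cat([prev_frame, cur_frame, prev_heatmap], dim=1)
        return torch.sigmoid(self.backbone(x))

def extract_centers(heatmap, thresh=0.5):
    """Keep local maxima above `thresh` as object centers (generic
    center-based decoding; HM-Net's feedback filters are not modeled)."""
    pooled = nn.functional.max_pool2d(heatmap, 3, stride=1, padding=1)
    peaks = (heatmap == pooled) & (heatmap > thresh)
    return peaks.nonzero()  # rows of (batch, channel, y, x) indices

# Feedback loop over a toy sequence: each output heat map becomes
# part of the next step's input, acting as a memory element.
net = CenterHeatmapNet()
frames = torch.rand(5, 1, 1, 64, 64)   # 5 single-channel 64x64 frames
heatmap = torch.zeros(1, 1, 64, 64)    # empty initial heat map
for t in range(1, 5):
    heatmap = net(frames[t - 1], frames[t], heatmap)
    centers = extract_centers(heatmap.detach())
```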
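The VisDrone-MOT2021 entry notes per-frame bounding boxes, temporally consistent instance IDs, and truncation/occlusion ratios. As a usage illustration only, the sketch below parses one annotation file under the comma-separated field order documented in the VisDrone toolkit (frame index, target ID, box, score, category, truncation, occlusion); the field order and the example filename are assumptions to verify against the release you download.

```python
# Hedged sketch: read one VisDrone-MOT annotation file into records.
# Field order is an assumption based on the VisDrone toolkit docs.
import csv
from dataclasses import dataclass

@dataclass
class Box:
    frame: int       # frame index within the sequence
    target_id: int   # temporally consistent instance ID
    left: int
    top: int
    width: int
    height: int
    score: float     # whether the box is considered in evaluation
    category: int    # object category label
    truncation: int  # truncation ratio annotation
    occlusion: int   # occlusion ratio annotation

def load_annotations(path):
    boxes = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            f_idx, tid, l, t, w, h, s, c, tr, oc = row[:10]
            boxes.append(Box(int(f_idx), int(tid), int(l), int(t),
                             int(w), int(h), float(s), int(c),
                             int(tr), int(oc)))
    return boxes

# Example use: group boxes by target_id to recover whole tracks.
# tracks = {}
# for b in load_annotations("sequence_annotations.txt"):  # hypothetical file
#     tracks.setdefault(b.target_id, []).append(b)
```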