A project in collaboration with the Illinois Department of Transportation
This project, in collaboration with the Illinois Department of Transportation (IDOT), aims to generate traffic counts from low-resolution surveillance videos. The goal is to realize 24/7 counting regardless of camera view, weather, and other environmental factors. The project covers a wide range of computer vision problems, from high-level object detection and tracking to low-level image processing. Challenges arise from the poor quality of the given videos and the variety of camera angles and lighting conditions.
Fully Automatic, Real-Time Vehicle Tracking for Surveillance Video
14th Conference on Computer and Robot Vision (CRV), 2017
We present an object tracking framework which fuses multiple unstable video-based methods and supports automatic tracker initialization and termination. To evaluate our system, we collected a large dataset of hand-annotated 5-minute traffic surveillance videos, which we are releasing to the community. To the best of our knowledge, this is the first publicly available dataset of such long videos, providing a diverse range of real-world object variation, scale change, interaction, different resolutions and illumination conditions. In our comprehensive evaluation using this dataset, we show that our automatic object tracking system often outperforms state-of-the-art trackers, even when these are provided with proper manual initialization. We also demonstrate tracking throughput improvements of 5x or more vs. the competition.
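The abstract does not spell out the fusion mechanism; as a minimal illustration of how outputs from multiple unstable trackers could be combined robustly, a coordinate-wise median over their bounding-box predictions is one simple option (the function name `fuse_boxes` is hypothetical, not from the paper):

```python
import statistics

def fuse_boxes(boxes):
    """Fuse per-tracker bounding boxes (x, y, w, h) by coordinate-wise median.

    A median is robust to a minority of trackers drifting off-target,
    unlike a mean, which a single outlier box can pull away.
    """
    return tuple(statistics.median(b[i] for b in boxes) for i in range(4))

# Two trackers agree on the vehicle; a third has drifted far away.
fused = fuse_boxes([(10, 10, 50, 50), (12, 11, 48, 52), (200, 200, 40, 40)])
# The drifted tracker is outvoted: fused stays near (12, 11, 48, 50).
```

A real system would also weight trackers by recent reliability and drop ones whose boxes stop overlapping the consensus, which is one way automatic termination can be triggered.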
Scene Semantics Learning for Fully Automatic Vehicle Tracking
Under review at the Winter Conference on Applications of Computer Vision (WACV), 2019
We propose a scene semantics learning framework for accurate, fully automatic vehicle tracking in opportunistic videos. Using topic modeling, the system automatically learns and exploits significant and useful motion semantics of any surveillance scene, such as entry/exit hotspots, active motion regions, and direction constraints. Proposed affinity measurements score how well a tracked object fits the scene at the current moment, which proves critical to every phase of tracking an object. We evaluate the system on 11 videos ranging from highways to city streets and find that a tracker augmented with the derived scene semantics as constraints significantly improves tracking accuracy and robustness over multiple automatic baselines that run without scene semantics. Overall, the scene semantics understanding and augmentation pipeline circumvents prevalent tracking challenges, such as poor video quality, occlusion, and varying lighting conditions, and provides a simple, flexible, and fully automatic framework.
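The paper learns entry/exit hotspots via topic modeling; as a much simpler stand-in for the idea, hotspots can be sketched as high-density grid cells of trajectory endpoints, with an affinity score that decays away from them (`learn_hotspots` and `entry_affinity` are hypothetical names for this sketch, not the paper's method):

```python
from collections import Counter

def learn_hotspots(trajectories, cell=20, top_k=3):
    """Estimate entry/exit hotspots as the densest grid cells of
    trajectory start points and end points, respectively."""
    entries = Counter((int(t[0][0]) // cell, int(t[0][1]) // cell) for t in trajectories)
    exits = Counter((int(t[-1][0]) // cell, int(t[-1][1]) // cell) for t in trajectories)
    return ([c for c, _ in entries.most_common(top_k)],
            [c for c, _ in exits.most_common(top_k)])

def entry_affinity(point, entry_cells, cell=20):
    """Score in (0, 1]: 1.0 inside a learned entry hotspot cell,
    decaying with Manhattan grid distance from the nearest one."""
    px, py = int(point[0]) // cell, int(point[1]) // cell
    d = min(abs(px - cx) + abs(py - cy) for cx, cy in entry_cells)
    return 1.0 / (1.0 + d)

# Three vehicles enter near the top-left and exit near (100, 100).
trajs = [[(0, 0), (100, 100)], [(5, 5), (110, 95)], [(2, 3), (105, 98)]]
entries, exits = learn_hotspots(trajs)
```

A new detection inside an entry hotspot would then be a strong candidate for automatic tracker initialization, while a low affinity everywhere flags a likely false positive.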
The dataset is publicly available for download, along with a C++ implementation of a trajectory visualization tool.