Date: Wednesday, May 24
Start Time: 11:25 am
End Time: 11:55 am
Tracking objects with multiple fixed cameras requires careful solution design. To scale the number of cameras, the solution must avoid sending all images across the network. Where camera views overlap only slightly or not at all, input from multiple cameras must be combined to extend the covered tracking area; where an object is visible to several cameras at once, the overlapping views should be exploited to increase accuracy. Cameras can collaborate more effectively when they share a common coordinate system, so environment mapping and accurate calibration are necessary. Moreover, the tracking algorithm must scale with the number of tracked objects, which can be achieved with a distributed approach. In this talk, we will cover practical ways of addressing these issues, present our multiple-camera tracking solution, used for vehicle and pedestrian tracking, and share some of its results.
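One common way to give cameras a shared coordinate system is to calibrate each one with a homography onto a common ground plane; the sketch below (not from the talk itself; all matrices, camera names, and pixel values are illustrative assumptions) shows how detections of the same object from two overlapping views can be projected into world coordinates and averaged to refine the position estimate.

```python
import numpy as np

# Illustrative 3x3 homographies mapping each camera's image plane onto a
# shared ground-plane coordinate system (obtained offline via calibration).
H_CAM = {
    "cam_a": np.array([[0.01, 0.0, -2.0],
                       [0.0, 0.01, -1.0],
                       [0.0, 0.0, 1.0]]),
    "cam_b": np.array([[0.012, 0.0, -3.0],
                       [0.0, 0.012, -2.0],
                       [0.0, 0.0, 1.0]]),
}

def to_world(cam, pixel):
    """Project a (u, v) image detection into shared world coordinates."""
    u, v = pixel
    x, y, w = H_CAM[cam] @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

def fuse(estimates):
    """Average per-camera world positions of the same tracked object."""
    return np.mean(estimates, axis=0)

# The same vehicle seen by both cameras: each view yields a world-frame
# position, and fusing them reduces per-camera measurement noise.
p_a = to_world("cam_a", (400.0, 300.0))   # roughly (2.0, 2.0)
p_b = to_world("cam_b", (420.0, 335.0))   # roughly (2.04, 2.02)
print(fuse([p_a, p_b]))
```

Because only compact world-frame detections (not full images) need to cross the network, this kind of per-camera projection also supports the scaling goal mentioned above.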