Driving scene retrieval by example from large-scale data
Paper in proceedings, 2019
Depending on the training task, subsets containing certain features more densely support training better than others. For example, training networks on tasks such as image segmentation, bounding box detection or tracking requires an ample number of objects in the input data. When training a network to perform optical flow estimation from first-person video, a disproportionate number of straight-driving scenes in the training data may lower generalization to turns. Even though some scenes of the BDD-V dataset are labeled with scene, weather or time-of-day information, these labels may be too coarse to filter the dataset optimally for a particular training task. Furthermore, even defining an exhaustive list of good label types is complicated, as it requires choosing the most relevant concepts of the natural world for a task. Alternatively, we investigate how to use examples of desired data to retrieve more similar data from a large-scale dataset. Following the paradigm of "I know it when I see it", we present a deep learning approach that uses driving examples to retrieve similar scenes from the BDD-V dataset. Our method leverages only automatically collected labels. We show how we can reliably vary the time of day or the objects in our query examples and retrieve nearest neighbors from the dataset. Using this method, already collected data can be filtered to reduce bias in a dataset by removing scenes regarded as too redundant to train on.
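As a minimal sketch of the retrieval step described above, the following Python snippet performs example-based nearest-neighbor search over precomputed scene embeddings. The embedding network itself, the 128-dimensional feature size, the use of cosine similarity, and the averaging of query examples into a single prototype are illustrative assumptions here, not the paper's exact method.

import numpy as np

def nearest_scenes(query_embeddings, dataset_embeddings, k=5):
    """Return indices of the k dataset scenes closest to the averaged query embedding.

    Uses cosine similarity over L2-normalized vectors; the network producing
    the embeddings is assumed, not shown.
    """
    # L2-normalize so that dot products equal cosine similarity
    q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
    d = dataset_embeddings / np.linalg.norm(dataset_embeddings, axis=1, keepdims=True)
    # Average the query examples into a single "I know it when I see it" prototype
    prototype = q.mean(axis=0)
    prototype /= np.linalg.norm(prototype)
    sims = d @ prototype               # cosine similarity to every dataset scene
    return np.argsort(-sims)[:k]       # indices of the k most similar scenes

# Hypothetical usage: 3 query clips against 10,000 dataset scenes, 128-D features
rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 128))
dataset = rng.normal(size=(10_000, 128))
print(nearest_scenes(queries, dataset, k=5))

Returned indices could then be used to select scenes for training or, inverted, to drop the most redundant scenes from an already collected dataset.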
Authors
Sascha Hornauer
University of California at Berkeley
Baladitya Yellapragada
University of California at Berkeley
Arian Ranjbar
University of California at Berkeley
Stella Yu
University of California at Berkeley
CVPR Workshops 2019
Long Beach, CA, USA
Subject categories
Other Computer and Information Science
Bioinformatics (Computational Biology)
Computer Vision and Robotics (Autonomous Systems)