# FOV-RVO: Velocity Obstacle-Based Pedestrian Motion Predictor
Predicting pedestrian motion is a crucial part of any safety-first autonomous driving system. We present FOV-RVO, a Velocity Obstacle-based motion prediction method that models pedestrian-to-pedestrian and pedestrian-to-scene interactions by integrating pedestrians' gaze directions and map information of the environment. The proposed solution is fast, robust, and requires no prior data. Furthermore, we enhance the method by introducing an auxiliary pre-trained Deep Learning (DL) model and combining the predictions for final evaluation, utilizing the strengths of both knowledge-based and data-driven motion prediction methods. The combined model is implemented inside the autonomous driving framework Autoware Mini and tested on data from trips in urban conditions in Tartu, Estonia. The proposed FOV-RVO method outperforms the compared state-of-the-art DL methods at K=1 predicted candidate trajectory in a combined evaluation using minimum Average/Final Displacement Errors (minADE/minFDE), Miss Rate (MR), and non-Drivable Area Compliance (nonDAC). The combined solution at K=2 performs on par with or better than tested models that output significantly more candidate trajectories (up to K=10). The open-source code, with instructions on accessing the dataset, is available at https://github.com/dmytrozabolotnii/autoware_mini/tree/FOVRVO.
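For readers unfamiliar with the displacement-based metrics named above, a minimal sketch of minADE, minFDE, and Miss Rate over K candidate trajectories is shown below. The function name, array shapes, and the 2.0 m miss threshold are illustrative assumptions (the threshold follows a common benchmark convention), not necessarily the paper's exact evaluation protocol.

```python
import numpy as np

def min_ade_fde(pred, gt, miss_threshold=2.0):
    """Compute minADE, minFDE, and a miss indicator for one agent.

    pred: (K, T, 2) array of K candidate trajectories over T timesteps.
    gt:   (T, 2) array, the ground-truth trajectory.
    miss_threshold: assumed 2.0 m, a common convention; the paper's
    exact threshold may differ.
    """
    # Per-candidate, per-step Euclidean error: shape (K, T)
    errors = np.linalg.norm(pred - gt[None], axis=-1)
    ade = errors.mean(axis=1)   # average displacement per candidate
    fde = errors[:, -1]         # final-step displacement per candidate
    min_ade = float(ade.min())  # best candidate by average error
    min_fde = float(fde.min())  # best candidate by endpoint error
    miss = float(min_fde > miss_threshold)  # 1.0 if even the best misses
    return min_ade, min_fde, miss
```

Averaging the miss indicator over all evaluated agents yields the Miss Rate; nonDAC is computed separately from map data and is not sketched here.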
| Item Type | Conference or Workshop Item (Other) |
|---|---|
| Identification Number | 10.1109/ROBIO66223.2025.11377134 |
| Additional information | © 2025 IEEE. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1109/ROBIO66223.2025.11377134 |
| Date Deposited | 11 Mar 2026 11:46 |
| Last Modified | 14 Mar 2026 02:07 |
