Digitala Vetenskapliga Arkivet

Incorporating Uncertainty in Predicting Vehicle Maneuvers at Intersections With Complex Interactions
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL (RPL/EECS). ORCID iD: 0000-0002-7796-1438
2019 (English). In: 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

Highly automated driving systems are required to make robust decisions in many complex driving environments, such as urban intersections with high traffic. In order to make as informed and safe decisions as possible, it is necessary for the system to be able to predict the future maneuvers and positions of other traffic agents, as well as to provide information about the uncertainty in the prediction to the decision-making module. While Bayesian approaches are a natural way of modeling uncertainty, deep learning-based methods have recently emerged to address this need as well. However, balancing the computational and system complexity, while also taking into account agent interactions and uncertainties, remains a difficult task. The work presented in this paper proposes a method of producing predictions of other traffic agents' trajectories in intersections with a single deep learning module, while incorporating uncertainty and the interactions between traffic participants. The accuracy of the generated predictions is tested on a simulated intersection with a high level of interaction between agents, and different methods of incorporating uncertainty are compared. Preliminary results show that the CVAE-based method produces qualitatively and quantitatively better measurements of uncertainty and manages to more accurately assign probability to the future occupied space of traffic agents.
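To illustrate the general idea, the following is a minimal sketch (in PyTorch, not the authors' implementation) of a CVAE head for multi-modal trajectory prediction: a recognition network encodes the context and the observed future into a latent variable during training, and at inference time latent samples drawn from the prior yield multiple plausible future trajectories whose spread reflects the prediction uncertainty. The module names, layer sizes, and the assumption of a fixed-size context vector are all illustrative.

```python
# Minimal CVAE trajectory-prediction sketch (illustrative only, not the paper's code).
import torch
import torch.nn as nn

class TrajectoryCVAE(nn.Module):
    def __init__(self, context_dim=128, horizon=12, latent_dim=16):
        super().__init__()
        self.horizon = horizon
        self.latent_dim = latent_dim
        # Recognition network q(z | context, future): used only during training.
        self.encoder = nn.Sequential(
            nn.Linear(context_dim + horizon * 2, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim),            # mean and log-variance of z
        )
        # Generation network p(future | context, z): used at train and test time.
        self.decoder = nn.Sequential(
            nn.Linear(context_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, horizon * 2),                # (x, y) per future time step
        )

    def forward(self, context, future):
        """Training pass: encode (context, future) into z, then reconstruct the future."""
        stats = self.encoder(torch.cat([context, future.flatten(1)], dim=-1))
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        recon = self.decoder(torch.cat([context, z], dim=-1))
        return recon.view(-1, self.horizon, 2), mu, logvar

    @torch.no_grad()
    def sample(self, context, n_samples=20):
        """Inference: draw z from the prior to obtain multi-modal trajectory predictions."""
        ctx = context.repeat_interleave(n_samples, dim=0)
        z = torch.randn(ctx.size(0), self.latent_dim, device=ctx.device)
        preds = self.decoder(torch.cat([ctx, z], dim=-1))
        return preds.view(context.size(0), n_samples, self.horizon, 2)

def cvae_loss(recon, future, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the standard-normal prior.
    rec = ((recon - future) ** 2).mean()
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld
```

In a sketch like this, the set of sampled trajectories can be used to estimate how likely a traffic agent is to occupy a given region of space in the future, which is the kind of uncertainty measure the abstract refers to.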

Place, publisher, year, edition, pages
IEEE, 2019.
National Category
Robotics and automation
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-257881
DOI: 10.1109/IVS.2019.8814159
ISI: 000508184100285
Scopus ID: 2-s2.0-85072275831
OAI: oai:DiVA.org:kth-257881
DiVA, id: diva2:1349139
Conference
2019 IEEE Intelligent Vehicles Symposium (IV)
Funder
Vinnova, 2016-02547
Note

QC 20190925

Part of ISBN 978-1-7281-0560-4

Available from: 2019-09-06 Created: 2019-09-06 Last updated: 2025-02-09 Bibliographically approved
In thesis
1. Interpretable, Interaction-Aware Vehicle Trajectory Prediction with Uncertainty
2021 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Autonomous driving technologies have recently made great strides in development, with several companies and research groups getting close to producing a vehicle with full autonomy. Self-driving cars offer many advantages, including increased traffic safety and ride-sharing capabilities that reduce environmental impact. To achieve these benefits, many modules must work together on an autonomous platform to solve the multiple tasks required. One of these tasks is the prediction of the future positions and maneuvers of surrounding human drivers. It is necessary for autonomous driving platforms to be able to reason about, and predict, the future trajectories of other agents in traffic scenarios so that they can ensure their planned maneuvers remain safe and feasible throughout their execution. Due to the stochastic nature of many traffic scenarios, these predictions should also take into account the inherent uncertainty involved, caused by both the road structure and the driving styles of human drivers. Since many traffic scenarios include vehicles changing their behavior based on the actions of others, for example by yielding or changing lanes, these interactions should be taken into account to produce more robust predictions. Lastly, the prediction methods should also provide a level of transparency and traceability. On a self-driving platform with many safety-critical tasks, it is important to be able to identify where an error occurred in a failure case, and what caused it. This helps prevent the problem from reoccurring, and can also aid in finding new and relevant test cases for simulation.

In this thesis, we present a framework for trajectory prediction of vehicles based on deep learning to fulfill these criteria. We first show that by operating on a generic representation of the traffic scene, our model can implicitly learn interactions between vehicles by capturing the spatio-temporal features in the data using recurrent and convolutional operations, and produce predictions for all vehicles simultaneously. We then explore different methods for incorporating uncertainty regarding the actions of human drivers, and show that Conditional Variational Autoencoders are highly suited for our prediction method, allowing it to produce multi-modal predictions accounting for different maneuvers as well as variations within them. To address the issue of transparency for deep learning methods, we also develop an interpretability framework for deep learning models operating on sequences of images. Using the proposed Temporal Masks method, this allows us to show, both spatially and temporally, what the models base their output on for all modes of input, without requiring a dedicated model architecture. Finally, all these extensions are incorporated into one method, and the resulting prediction module is implemented and interfaced with a real-world autonomous driving research platform.
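A minimal sketch, assuming a rasterized bird's-eye-view scene representation, of the kind of recurrent-convolutional encoder described above (illustrative only, not the thesis implementation): a CNN encodes each frame of the scene history, and an LSTM aggregates the frame features over time into a context vector that a downstream decoder, such as the CVAE head sketched earlier, could condition on. The layer sizes and input shapes are assumptions.

```python
# Recurrent-convolutional scene encoder sketch (illustrative only, not the thesis code).
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    def __init__(self, in_channels=3, context_dim=128):
        super().__init__()
        # Per-frame convolutional feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> (batch, 32)
        )
        # Recurrent aggregation over the sequence of per-frame features.
        self.rnn = nn.LSTM(input_size=32, hidden_size=context_dim, batch_first=True)

    def forward(self, frames):
        """frames: (batch, time, channels, height, width) rasterized scene history."""
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)
        return h[-1]                                          # (batch, context_dim)

# Example usage with toy shapes: 10 past frames of a 64x64 scene raster.
# encoder = SceneEncoder()
# context = encoder(torch.randn(4, 10, 3, 64, 64))
```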

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2021. p. 139
Series
TRITA-EECS-AVL ; 2021:9
Keywords
Trajectory Prediction, Computer Vision, Autonomous Driving, Deep Learning, Interpretability
National Category
Computer graphics and computer vision; Robotics and automation
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-289569
ISBN: 978-91-7873-770-3
Public defence
2021-02-26, F3, Lindstedtsvägen 26, Stockholm, 10:00 (English)
Funder
Vinnova, 2016-02547
Note

QC 20210210

Available from: 2021-02-10 Created: 2021-02-03 Last updated: 2025-02-05 Bibliographically approved

Open Access in DiVA

fulltext (689 kB)
File name: FULLTEXT01.pdf
File size: 689 kB
Checksum SHA-512: 5495e7bb5e712346a2b17975f76b5361316890756509fe18831943313babecaf4ab02b0bd8940aa0c15b32d082c5f879499fa89c24a43a375d507b183d880a05
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text: https://doi.org/10.1109/IVS.2019.8814159
Scopus

Search in DiVA

By author/editor
Mänttäri, Joonatan; Folkesson, John
By organisation
Robotics, Perception and Learning, RPL
Robotics and automation
