Aswin Prasannakumar
A Novel YOLOP-Based Framework for Panoptic Driving Perception in ADAS Applications.
Supervisors: Nicola Amati, Shailesh Sudhakara Hegde. Politecnico di Torino, Master's degree programme in Automotive Engineering (Ingegneria dell'Autoveicolo), 2025
PDF (Tesi_di_laurea): Thesis. Restricted access: staff only until 21 July 2026 (embargo date). License: Creative Commons Attribution Non-commercial No Derivatives. (21 MB)
Abstract
This thesis addresses two main limitations of the baseline YOLOP model for autonomous driving: first, the model does not directly detect the lane center; second, the object detection head provides no depth or distance estimation. While the baseline YOLOP architecture efficiently performs lane line detection, drivable area segmentation, and object detection at 40 FPS, the post-detection steps needed to derive the lane center from the detected lane lines (region-of-interest extraction, OpenCV-based transformations, and curve fitting) introduce significant latency (0.04 seconds per frame), reducing the effective frame rate to 15 FPS.
Additionally, the bounding box outputs from the object detection head lack depth estimates, limiting their use in downstream control algorithm design.
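The frame-rate figures in the abstract follow from simple latency arithmetic (40 FPS is 0.025 s per frame; adding 0.04 s of post-processing gives 0.065 s, i.e. about 15 FPS), and the lane-center step can be sketched with a polynomial fit. A minimal illustration follows; the function names and the use of `numpy.polyfit` are assumptions for illustration, not the pipeline actually implemented in the thesis.

```python
import numpy as np

def effective_fps(model_fps: float, post_latency_s: float) -> float:
    """Frame rate after adding a fixed per-frame post-processing latency.

    Illustrative only: 1 / (1/40 + 0.04) = 1 / 0.065 ~ 15.4 FPS,
    matching the drop from 40 FPS to ~15 FPS described in the abstract.
    """
    return 1.0 / (1.0 / model_fps + post_latency_s)

def lane_center_x(left_pts, right_pts, y_eval, deg=2):
    """Estimate the lane-center x coordinate at image row y_eval.

    Hypothetical sketch of the curve-fitting step: fit x = f(y) to the
    pixel points of each detected lane line, then average the two fits.
    """
    left_fit = np.polyfit([p[1] for p in left_pts], [p[0] for p in left_pts], deg)
    right_fit = np.polyfit([p[1] for p in right_pts], [p[0] for p in right_pts], deg)
    return 0.5 * (np.polyval(left_fit, y_eval) + np.polyval(right_fit, y_eval))
```

For two straight vertical lane lines at x = 100 and x = 300, the sketch returns a lane center of x = 200, as expected.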
