Publications

This paper presents the most thorough study to date of vehicular carrier-phase differential GNSS (CDGNSS) positioning performance in a deep urban setting unaided by complementary sensors. Using data captured during approximately 2 hours of driving in and around the dense urban center of Austin, TX, a CDGNSS system is demonstrated to achieve 17-cm-accurate 3D urban positioning (95% probability) with solution availability greater than 87%. The results are achieved without any aiding by inertial, electro-optical, or odometry sensors. Development and evaluation of the unaided GNSS-based precise positioning system is a key milestone toward the overall goal of combining precise GNSS, vision, radar, and inertial sensing for all-weather, high-integrity, high-absolute-accuracy positioning for automated and connected vehicles. The system described and evaluated herein is composed of a densely-spaced reference network, a software-defined GNSS receiver, and a real-time kinematic (RTK) positioning engine. A performance sensitivity analysis reveals that navigation data wipeoff for fully-modulated GNSS signals and a dense reference network are key to high-performance urban RTK positioning. A comparison with existing unaided systems for urban GNSS processing indicates that the proposed system has significantly greater availability or accuracy.
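
For readers unfamiliar with the measurement type an RTK positioning engine operates on, the short Python sketch below forms double-differenced carrier-phase observables, the quantity whose integer ambiguities the engine must resolve. The function, constants, and toy values here are illustrative assumptions, not the paper's software-defined receiver or positioning engine.

import numpy as np

L1_WAVELENGTH_M = 299792458.0 / 1575.42e6   # GPS L1 carrier wavelength, ~0.19 m

def double_difference(phi_rover, phi_base, ref_idx=0):
    # Differencing across receivers cancels satellite clock error and, over a
    # short baseline, most orbit and atmospheric error; differencing again
    # against a reference satellite cancels the receiver clock biases, leaving
    # geometry plus an integer cycle ambiguity and residual multipath/noise.
    sd = phi_rover - phi_base                    # single difference (cycles)
    return np.delete(sd - sd[ref_idx], ref_idx)  # double difference (cycles)

# Toy usage: five satellites tracked by both rover and reference receivers
rng = np.random.default_rng(0)
phi_rover = rng.uniform(1e6, 2e6, 5)
phi_base = phi_rover - rng.integers(-10, 10, 5) + 0.01 * rng.standard_normal(5)
print(double_difference(phi_rover, phi_base) * L1_WAVELENGTH_M)  # meters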

Cite and download the paper:
Todd E. Humphreys, Matthew J. Murrian, and Lakshay Narula, "Deep urban unaided precise GNSS vehicle positioning," accepted for publication in the IEEE Intelligent Transportation Systems Magazine.


Exchange of location and sensor data among connected and automated vehicles will demand accurate global referencing of the digital maps currently being developed to aid positioning for automated driving. This paper explores the limit of such maps’ globally-referenced position accuracy when the mapping agents are equipped with low-cost Global Navigation Satellite System (GNSS) receivers performing standard code-phase-based navigation. The key accuracy-limiting factor is shown to be the asymptotic average of the error sources that impair standard GNSS positioning. Asymptotic statistics of each GNSS error source are analyzed through both simulation and empirical data to show that sub-50-cm accurate digital mapping is feasible in moderately urban environments in the horizontal plane after multiple mapping sessions with standard GNSS, but larger biases persist in the vertical direction.
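
A toy Monte Carlo illustrates the averaging argument: zero-mean per-session GNSS errors shrink as mapping sessions accumulate, while a persistent bias, as the paper observes in the vertical direction, does not. The error magnitudes below are invented for illustration and are not the paper's empirical statistics.

import numpy as np

rng = np.random.default_rng(1)
sessions = 20
east_err = rng.normal(0.0, 1.5, sessions)        # zero-mean horizontal error, m
up_err = 0.6 + rng.normal(0.0, 2.0, sessions)    # vertical error with a 0.6 m bias

for n in (1, 5, 20):
    print(f"{n:2d} sessions: |mean east| = {abs(east_err[:n].mean()):.2f} m, "
          f"|mean up| = {abs(up_err[:n].mean()):.2f} m")

# The horizontal average tends toward zero with more sessions, whereas the
# vertical average settles near the persistent bias rather than vanishing.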

Cite and download the paper:
Lakshay Narula, Matthew J. Murrian, and Todd E. Humphreys, "Accuracy Limits for Globally-Referenced Digital Mapping Using Standard GNSS," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 3075-3082, IEEE, 2018.


A comparison of neural-network, state-augmentation, and multiple-model-based approaches to online estimation of inertial sensor location on a vehicle is presented, each exploiting dual-antenna carrier-phase-differential GNSS. The best of these techniques is shown to yield a significant improvement over a priori calibration with a short window of data. Estimation of Inertial Measurement Unit (IMU) parameters is a mature field, with state augmentation being a strong favorite for practical implementation, to the potential detriment of other approaches. A simple modification of the standard state-augmentation technique for determining IMU location is presented that determines which model from an enumerated set best fits the IMU's measurements. A neural network is also trained on batches of IMU and GNSS data to identify the lever arm of the IMU. A comparison of these techniques on simulated data demonstrates that state augmentation outperforms the other methods.
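
As background for the state-augmentation approach, the sketch below shows the lever-arm measurement model it typically builds on: the GNSS antenna position equals the IMU position plus the body-frame lever arm rotated into the navigation frame. The filter details (state ordering, noise tuning) are assumptions for illustration, not the authors' implementation.

import numpy as np

def antenna_position(p_imu_nav, R_body_to_nav, lever_arm_body):
    # Predicted GNSS antenna position in the navigation frame. In an augmented
    # EKF the lever arm is appended to the state vector, and its measurement
    # Jacobian with respect to the lever arm is simply R_body_to_nav.
    return p_imu_nav + R_body_to_nav @ lever_arm_body

# Toy usage with a yaw-only attitude and an illustrative 3-axis lever arm
p_imu = np.array([10.0, -3.0, 1.2])          # meters, navigation frame
yaw = np.deg2rad(30.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
lever = np.array([0.8, 0.1, 1.4])            # meters, body frame (made up)
print(antenna_position(p_imu, R, lever))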

Cite and download the paper:
Nick Montalbano and Todd E. Humphreys, "A Comparison of Methods for Online Lever Arm Estimation in GPS/INS Integration," in Proceedings of the IEEE/ION PLANS Meeting, Monterey, CA, 2018.


Exchange of location and sensor data among connected and automated vehicles will demand accurate global referencing of the digital maps currently being developed to aid positioning for automated driving. This paper explores the limit of such maps’ globally-referenced position accuracy when the mapping agents are equipped with low-cost Global Navigation Satellite System (GNSS) receivers performing standard code-phase-based navigation, and presents a globally-referenced electro-optical simultaneous localization and mapping pipeline, called GEOSLAM, designed to achieve this limit. The key accuracy-limiting factor is shown to be the asymptotic average of the error sources that impair standard GNSS positioning. Asymptotic statistics of each GNSS error source are analyzed through both simulation and empirical data to show that sub-50-cm accurate digital mapping is feasible in the horizontal plane after multiple mapping sessions with standard GNSS, but larger biases persist in the vertical direction. GEOSLAM achieves this accuracy by (i) incorporating standard GNSS position estimates in the visual SLAM framework, (ii) merging digital maps from multiple mapping sessions, and (iii) jointly optimizing structure and motion with respect to time-separated GNSS measurements.
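
The joint-optimization idea can be conveyed with a deliberately tiny linear example: relative constraints (standing in for visual SLAM odometry) fix the shape of the trajectory, while a few absolute GNSS measurements anchor it in a global frame. This one-dimensional sketch is an assumption-laden stand-in for GEOSLAM's full 3D structure-and-motion optimization, not its actual pipeline.

import numpy as np

# Unknowns: positions x0..x3 along a line.
# Relative constraints ("vision"): x1-x0 = 1.0, x2-x1 = 1.0, x3-x2 = 1.0
# Absolute GNSS constraints:       x0 ~ 100.2,  x3 ~ 103.1
A = np.array([[-1, 1, 0, 0],
              [0, -1, 1, 0],
              [0, 0, -1, 1],
              [1, 0, 0, 0],
              [0, 0, 0, 1]], dtype=float)
b = np.array([1.0, 1.0, 1.0, 100.2, 103.1])

# Least-squares solve: the relative terms preserve local structure while the
# sparse GNSS terms pull the whole trajectory into the global frame.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)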

Cite and download the paper:
Lakshay Narula, J. Michael Wooten, Matthew J. Murrian, Daniel M. LaChapelle, and Todd E. Humphreys, "Accurate Collaborative Globally-Referenced Digital Mapping with Standard GNSS," Sensors, 2018, 18, 2452.


Recognizing objects in the environment and precisely determining their positions is a fundamental component of autonomous navigation systems. This thesis presents a technique for determining both the locations and the semantic labels of new objects in a scene with respect to a prior three-dimensional (3D) map of the scene. This work aims to reduce object recognition errors in cluttered environments by isolating new objects from the known background by correlating features detected in a new photo with feature points that constitute the 3D map. Such isolation enables a neural network trained to recognize an enumerated set of objects to focus narrowly upon those portions of images that contain new objects instead of having to process the whole scene. As a result, changes in a prior map can be rapidly detected and semantically labeled, allowing for confident navigation within the ever-evolving cluttered environment. Using multiple images obtained from varying camera poses, the globally-referenced 3D positions of changes in the scene can be determined with multiple-view geometry techniques.
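
The isolation step can be sketched as a simple gating test: image keypoints with no nearby projection of a prior-map point are flagged as belonging to potential new objects, so the recognition network only needs to examine those regions. The function, matching radius, and data below are hypothetical, not the thesis's feature-correlation pipeline.

import numpy as np

def new_object_mask(image_keypoints_px, projected_map_points_px, radius_px=8.0):
    # True where a detected keypoint has no prior-map projection within
    # radius_px pixels, i.e., where the scene may contain a new object.
    d = np.linalg.norm(image_keypoints_px[:, None, :] -
                       projected_map_points_px[None, :, :], axis=2)
    return d.min(axis=1) > radius_px

# Toy usage: two keypoints match existing map points, one does not
keypoints = np.array([[100.0, 240.0], [101.5, 238.0], [420.0, 310.0]])
map_projections = np.array([[100.8, 239.2], [102.0, 237.5]])
print(new_object_mask(keypoints, map_projections))  # [False False  True]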

Cite and download the paper:
Siddarth Kaki, Todd E. Humphreys, and Maruthi Akella, "Exploiting a Prior 3D Map for Object Recognition
