Today the emerging field of autonomous driving demands accurate positioning and trustworthy self-localization methods, which in turn require new technologies and hardware. The common approach is to fuse all available information sources: GPS, an IMU equipped with tri-axial gyroscopes and accelerometers, an odometer, and perception sensors (camera, lidar, radar). This tutorial will discuss the trade-offs between inertial (IMU) and perception-based sensors for Autonomous Vehicles (AVs) requiring 10 cm positional accuracy 100% of the time without using GPS. Perception sensors provide centimeter accuracy but require powerful graphics processors to perform feature recognition. In contrast, IMUs allow navigation completely independent of external references (e.g., satellites, road markers, geomaps, databases) and are immune to weather conditions. MEMS IMUs are attractive for their low size, weight, power, and cost (SWaP-C), but are suitable only for short-term inertial dead-reckoning because their positional error grows with time. Long-term navigation with MEMS IMUs is nevertheless possible when they are aided by the perception sensors already available in the AV to correct IMU drift. This tutorial will present various approaches to solving the vehicle localization problem using a MEMS IMU aided by Visual Odometry. The trade-offs between 1) Wheel Odometry, 2) Camera Odometry, 3) Lidar Odometry, and 4) Radar Odometry will be covered in detail. Visual Odometry in autonomous vehicles works by exploiting the geometrical consistency of surrounding stationary objects to determine the vehicle's track in 3D. The implications of VO for correcting IMU drift, as well as for 3D map generation, will be discussed during the tutorial. Finally, the tutorial will address the hardware and, primarily, software challenges of developing an ambitious navigation system that fuses all available input sensors (IMU, camera, lidar, wheel speed) to obtain an accurate vehicle position on the map without GPS.
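The drift growth mentioned above can be made concrete with a back-of-the-envelope sketch. The example below is illustrative only: the bias value is a hypothetical MEMS-grade number, not a parameter of any particular IMU. A constant accelerometer bias, double-integrated into position, produces an error that grows quadratically with time, which is why unaided MEMS dead-reckoning is limited to short intervals and why periodic corrections (e.g., from visual odometry) bound the error to the drift accumulated between fixes.

```python
# Minimal sketch (hypothetical numbers): position error from a constant
# accelerometer bias b after t seconds of unaided dead-reckoning.
# Double integration of a constant bias gives 0.5 * b * t**2.

def dead_reckoning_error(bias_mps2, t_s):
    """Position error (m) from a constant accelerometer bias after t_s seconds."""
    return 0.5 * bias_mps2 * t_s ** 2

bias = 0.01  # ~1 mg accelerometer bias, an illustrative MEMS-grade value

# Unaided: error grows quadratically with time.
for t in (1, 10, 60):
    print(f"unaided, after {t:3d} s: {dead_reckoning_error(bias, t):8.2f} m")

# Aided: if an external fix (e.g., visual odometry) resets the position
# every T seconds, the error stays bounded by the drift over one interval.
fix_interval = 1.0  # s between corrections (illustrative)
print(f"aided, worst case between fixes: "
      f"{dead_reckoning_error(bias, fix_interval):.4f} m")
```

With these illustrative numbers, a 1 mg bias produces meters of error within a minute unaided, but stays at the millimeter-to-centimeter level when corrected every second, which is the core argument for fusing MEMS IMUs with perception-based odometry.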
Interferometry based on quantum systems, e.g., ultracold atoms, offers unprecedented sensitivity and performance compared to classical inertial sensors. Such systems also commonly provide absolute measurements, since their transduced signals are functions of their perfectly reproducible internal structure and of the fundamental constants of nature. The choice of internal quantum states, and of the procedures used to manipulate them, can make these systems highly sensitive to, or immune to, signals of interest.
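As an illustration of why such measurements are absolute, consider the standard result for a three-pulse light-pulse atom interferometer measuring acceleration: the interferometer phase is

\[
\Delta\phi = k_{\mathrm{eff}}\, a\, T^2,
\]

where $k_{\mathrm{eff}}$ is the effective two-photon wavevector of the interrogating laser, $a$ is the acceleration along the laser axis, and $T$ is the free-evolution time between pulses. Because the scale factor $k_{\mathrm{eff}} T^2$ is set by a laser frequency and a timing interval, both traceable to well-known references, rather than by a device-specific mechanical calibration, the measurement is absolute in the sense described above.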
MEMS accelerometers and gyroscopes typically use an electrical displacement readout mechanism, such as a capacitive gap between the proof mass and frame or piezoelectric/piezoresistive readout of tether strain. Optical readout for accelerometers has, until recently, relied on bouncing a laser off the surface of the proof mass a single time. However, advances in on-chip photonics and nano-optomechanics over the past ~15 years have enabled MEMS accelerometers to take advantage of on-chip displacement readout mechanisms that leverage long waveguide paths and high-Q cavities exquisitely sensitive to mechanical displacement and strain, achieving displacement sensitivities not attainable with typical electrical readout schemes. In this tutorial, I will present the fundamentals of optomechanical inertial sensors so that attendees can understand the mechanisms underlying the displacement sensing, the fundamental noise sources, and the packaging requirements and challenges.
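The cavity-based displacement sensing mentioned above can be sketched in a few lines. The example below is a simplified illustration with made-up parameter values (coupling rate, linewidth, probe detuning), not a model of any specific device: a proof-mass displacement dx shifts a high-Q optical resonance by g_om * dx, and a probe laser parked on the side of the Lorentzian resonance converts that shift into a change in transmitted power.

```python
# Minimal sketch (illustrative values): converting proof-mass displacement
# into a transmitted-power change via a high-Q optical cavity resonance.

def lorentzian_transmission(detuning_hz, kappa_hz):
    """Transmission past a Lorentzian cavity dip of full linewidth kappa_hz,
    probed at the given detuning from resonance (T=0 on resonance)."""
    return 1.0 - 1.0 / (1.0 + (2.0 * detuning_hz / kappa_hz) ** 2)

g_om = 1e18        # optomechanical coupling, Hz of resonance shift per meter (illustrative)
kappa = 1e9        # 1 GHz cavity linewidth (illustrative)
bias = kappa / 2.0 # probe detuning on the slope of the resonance

dx = 1e-15         # 1 femtometer proof-mass displacement
shift = g_om * dx  # resulting resonance shift in Hz

# Change in transmitted power fraction caused by the displacement.
dT = (lorentzian_transmission(bias + shift, kappa)
      - lorentzian_transmission(bias, kappa))
print(f"resonance shift for dx = {dx:.0e} m: {shift:.0f} Hz")
print(f"transmission change: {dT:.2e}")
```

Even a femtometer-scale displacement produces a measurable fractional power change here, because the narrow linewidth (high Q) makes the transmission slope steep; this scaling is what places cavity readout beyond typical capacitive schemes.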