20 SEPTEMBER 2017
Automobile manufacturers are racing to incorporate AI and
deep learning (DL) into their cars, with plans to deliver
vehicles with SAE (Society of Automotive Engineers) Level
3 (conditional automation) and potentially Level 4 (high
automation) capabilities by 2020.
Although rapid advances in AI and DL algorithms are
spearheading the transition to autonomous vehicles (Figure
5), this transformation would not be possible without the
evolution of sensor fusion, which is largely taking place within
the vehicle itself. Until now, the conventional embedded
systems used in vehicle control applications processed
sensor data on a distributed web of microprocessors, each
associated with one sensor or a handful of sensors. In contrast,
sensor fusion brings the raw data from the 60–100 sensors
found on a typical car onto a single processing platform.
Tomorrow’s fully autonomous vehicles are expected to
employ 2X-4X more sensors to support advanced
functions such as ultra-precise vehicle location and
complete awareness of the vehicle’s surrounding environment.
Making sense of the flood of data arriving at widely
different rates and latencies requires the use of an onboard
sensor fusion platform to perform a series of challenging
tasks, beginning with co-registration of raw sensor data,
low-level feature detection (edges and blobs), and identifying
preliminary feature correspondences. The platform then
associates the edge and blob features, and fuses them
to create preliminary objects that are then analyzed by a
succession of image understanding algorithms.
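The first of those steps, co-registering raw readings from sensors that report at different rates and latencies, can be sketched in a few lines. The names below (`SensorReading`, `align_streams`) are illustrative only, not drawn from any real automotive framework; the sketch simply pairs each reading from a slower stream with the nearest-in-time reading from a faster one, discarding pairs whose timestamps disagree too much.

```python
# Illustrative sketch of timestamp-based co-registration of two
# sensor streams sampled at different rates. Names are hypothetical.
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class SensorReading:
    t: float        # timestamp in seconds
    value: float    # e.g. a range or intensity measurement

def align_streams(fast, slow, max_skew=0.05):
    """For each reading in the slower stream, find the nearest-in-time
    reading in the faster stream; drop pairs whose timestamps differ
    by more than max_skew seconds."""
    fast_ts = [r.t for r in fast]
    pairs = []
    for s in slow:
        i = bisect_left(fast_ts, s.t)
        # Candidates: the fast readings just before and at/after s.t.
        best = min(
            fast[max(0, i - 1):i + 1],
            key=lambda f: abs(f.t - s.t),
            default=None,
        )
        if best is not None and abs(best.t - s.t) <= max_skew:
            pairs.append((best, s))
    return pairs

# Example: a 100 Hz stream co-registered with a 10 Hz stream.
fast = [SensorReading(t=i * 0.01, value=float(i)) for i in range(100)]
slow = [SensorReading(t=i * 0.10, value=float(i)) for i in range(10)]
pairs = align_streams(fast, slow)
print(len(pairs))  # every slow reading matched to a nearby fast one
```

A production platform would of course do far more (clock synchronization, interpolation, spatial registration), but nearest-timestamp matching is the essential starting point before features can be associated across sensors.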
At the lower processing levels, edge- and feature-extraction
algorithms, such as Convolutional Neural Networks (CNNs),
are among the most useful methods (Figure 5). The higher-level
processes, in particular, require the use of inference, an
AI method explained earlier.
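The kind of low-level edge extraction a CNN's first layers perform reduces to 2D convolution. The sketch below hand-rolls a valid-mode convolution with a fixed 3x3 Sobel kernel; in a trained CNN the kernel weights would be learned rather than fixed, so this is an assumption-laden illustration of the operation, not a CNN implementation.

```python
# Minimal sketch of the convolution at the heart of a CNN's lowest
# layers, using a fixed Sobel kernel that responds to vertical edges.
# In a real CNN these weights are learned during training.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            acc = sum(kernel[i][j] * image[r + i][c + j]
                      for i in range(kh) for j in range(kw))
            row.append(acc)
        out.append(row)
    return out

# Synthetic 5x6 image: dark left half, bright right half.
image = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
response = conv2d(image, SOBEL_X)
print(response[0])  # → [0, 36, 36, 0]
```

The filter fires strongly at the dark-to-bright boundary and stays silent in the flat regions, which is exactly the "edges and blobs" signal the fusion pipeline passes up to its object-forming stages.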
Autonomous vehicle cost and performance can be
dramatically affected by decisions about what processing
tasks can (or must) be done onboard the vehicle itself, and
what can be performed on the cloud, or pre-trained in a
datacenter. Segmentation is often applied to inferential tasks
Figure 3. NVIDIA’s Xavier System-on-Chip. Image source: NVIDIA
Figure 4. An autonomous vehicle will use a combination of sensor fusion, artificial intelligence,
and deep learning to make sense of the flood of data produced by its sensors. Source: Intel