when network connections may be unreliable, or there’s lots
of latency in the connection.
In these applications, specialized multi-core processors
known as graphics processing units (GPUs) have been
essential to efficiently supporting locally-hosted deep
learning. NVIDIA’s Clayton describes GPUs as “highly
parallelized architectures that can run those algorithms fast,
making it possible to train very large data sets in hours or
days, instead of weeks or months.” For example, NVIDIA’s
Jetson TX1 is a module designed specifically for deep
learning in embedded systems. It incorporates a powerful
GPU, consumes only 10 W, and is smaller than a credit card.
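The speedups Clayton describes come from data parallelism: the same arithmetic is applied independently to many slices of data at once. The following is a minimal CPU-side analogy using only Python's standard library, with worker threads standing in for GPU cores; it is illustrative only, not how a GPU framework is actually programmed.

```python
# A CPU-side analogy of GPU-style data parallelism: one operation
# (here, a dot product) is applied to many independent chunks of
# data, so the chunks can be processed concurrently by workers.
from concurrent.futures import ThreadPoolExecutor

def dot_chunk(pair):
    """Multiply-accumulate one chunk -- the per-'core' workload."""
    xs, ws = pair
    return sum(x * w for x, w in zip(xs, ws))

def parallel_dot(x, w, n_chunks=4):
    """Split x and w into chunks, reduce each concurrently, sum."""
    size = (len(x) + n_chunks - 1) // n_chunks
    chunks = [(x[i:i + size], w[i:i + size])
              for i in range(0, len(x), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return sum(pool.map(dot_chunk, chunks))
```

On a GPU the same pattern runs across thousands of cores rather than a handful of threads, which is why training that takes weeks on a CPU can finish in hours or days.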
The Jetson TX1, with its GPU designed specifically for
embedded applications, has been a key enabler for a new
generation of intelligent consumer devices.
One of the recently-released consumer devices powered
by NVIDIA’s Jetson TX1 chip is the Horus wearable assistant
for the blind. It uses a small head-mounted camera to gather
imagery and then translates what it is seeing to an audio
description. For instance, it can enable a blind mother to
read a printed book to her child.
The TX1 chip also powers the June Intelligent Oven, which
uses computer vision and deep learning to help you optimally
prepare your food. It has a variety of sensors and it’s been
trained to determine exactly the right level of doneness for
all sorts of recipes. It can even tell what kind of food you’re
putting in just by looking at it.
Partitioning between the Device and the Cloud
Although the Horus assistant and the June Oven keep
their intelligence entirely local, that approach is not
cost-effective for many types of products where price
is a primary design driver. As a result, most of the next
generation of consumer-facing AI systems will likely partition
varying portions of their large-scale algorithms and
deep learning processes between on-board and cloud-based
platforms, while keeping user-specific and identifying
data on the device.
This approach is essential not only for personal assistant
and household management and utility devices, but also
for the emerging realm of intelligent wearable devices.
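The partitioning described above can be sketched in a few lines. All function and field names below are hypothetical, invented for illustration: a small local model answers immediately, low-confidence results are escalated to the cloud, and identifying fields never leave the device.

```python
# Hypothetical sketch of device/cloud partitioning. A lightweight
# on-device model handles latency-sensitive inference; heavier
# analysis is deferred to the cloud; identifying data stays local.

def on_device_inference(sample):
    # Placeholder for a small local model that must respond even
    # when the network is unreliable (fixed output for the sketch).
    return {"label": "activity", "confidence": 0.72}

def strip_identifiers(sample):
    # Keep user-specific and identifying data local: only
    # anonymized features are ever sent upstream.
    return {k: v for k, v in sample.items()
            if k not in ("user_id", "location")}

def process(sample, cloud_available):
    result = on_device_inference(sample)
    if cloud_available and result["confidence"] < 0.9:
        payload = strip_identifiers(sample)
        # ...send payload to the cloud model here (omitted)...
        result["escalated"] = True
    return result
```

The design choice is the one the article describes: the device degrades gracefully when the connection is slow or absent, and the cloud only ever sees anonymized features.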
One such device is the Vi, an AI-based personal trainer
developed by LifeBeam which can respond to voice prompts
and offer “coaching” advice based on its analysis of the
user’s performance and vital signs.
A Lifebeam’s intelligence resides partially on the headset
device itself and partially on the connected phone app. A
company representative divides VI’s artificial intelligence
according to two different aspects. One is a set of rules
based on pre-defined data inputs, through which the user’s
behavior is filtered. The second is a more adaptable data
collection and pattern recognition capability, and this is the
part that actually learns. Eventually, after it gets to know the
user’s behavior, it can suggest activities and audio cues, or
draw conclusions based on that behavior.
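The two-part design described above can be sketched as a fixed rule filter plus a simple online learner that adapts a baseline to the individual user. This is an illustrative guess at the structure, not LifeBeam's code; the threshold and messages are invented.

```python
# Hypothetical sketch of Vi's two aspects: pre-defined rules that
# filter each reading, plus an adaptive part that learns the user.

RULES = {"max_heart_rate": 180}  # assumed pre-defined rule

class UserBaseline:
    """Online running mean -- the part that actually 'learns'."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n
        return self.mean

def coach(heart_rate, baseline):
    if heart_rate > RULES["max_heart_rate"]:   # rule-based filter
        return "slow down"
    avg = baseline.update(heart_rate)          # adaptive part
    return "push harder" if heart_rate < avg else "keep it up"
```

The rules give safe behavior from day one; the baseline only becomes useful after enough readings accumulate, which matches the representative's point that the adaptive part must first get to know the user.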
Regardless of how next-generation intelligent consumer
devices partition their processing between local and
cloud-based platforms, one aspect of the future is clear:
intelligent devices will become more ubiquitous and will find
roles in many aspects of our lives, both individually and as
a society.
[Figure: LifeBeam’s Vi intelligent fitness monitor. Image courtesy of LifeBeam]
[Figure: NVIDIA’s Jetson TX1 embedded GPU module]