Applying artificial intelligence to teach
robots how to behave a little more like human explorers.
by Annie Strickler
Ayanna Howard may never set foot on
Mars or lead a mission to Jupiter, but the work she's doing on "smart"
robots will help to revolutionise planetary exploration nonetheless.
As a project scientist specialising
in artificial intelligence at NASA's Jet Propulsion Laboratory (JPL),
Ayanna is part of a team that applies creative energy to a new generation
of space missions -- planetary and moon surface explorations led
by autonomous robots capable of "thinking" for themselves.
Nearly all of today's robotic space
probes are inflexible in how they respond to the challenges they
encounter (one notable exception is Deep Space 1, which employs
artificial intelligence technologies). They can only perform actions
that are explicitly written into their software or radioed from
a human controller on Earth.
When exploring unfamiliar planets millions
of miles from Earth, this "obedient dog" variety of robot requires
constant attention from humans. In contrast, the ultimate goal for
Ayanna and her colleagues is "putting a robot on Mars and walking
away, leaving it to work without direct human interaction."
Image courtesy JPL: Robotic explorers like this one will someday possess artificial intelligence, which will allow them to scout out terrains without human oversight.
"We want to tell the robot to think
about any obstacle it encounters just as an astronaut in the same
situation would do," she says. "Our job is to help the robot think
in more logical terms about turning left or right, not just by how
many degrees."
How could a robot possibly make decisions
like a human?
Scientists are developing the necessary techniques by studying how humans see and interpret their surroundings.
Humans don't have a rulebook or program
to consult for each move they make, Ayanna notes -- we're much more
reactive than that. Her team's job is to produce robots that can
emulate not only a human's thought process and judgement in sizing
up terrain, but also a human's ability to drive and navigate a car
in real time.
To do this, Ayanna and her colleagues
rely on two concepts in the field of artificial intelligence: "fuzzy
logic" and "neural networks."
Fuzzy logic allows computers to operate not only in terms of black
and white -- true or false -- but also in shades of grey. For example,
a traditional computer would take the height measurement of a tree
and assign the tree to a single category -- say, "tall." A fuzzy
logic computer, by contrast, would say the tree belongs to the category
"tall" to a degree of 78 percent (for example) and to the category
"short" to a degree of 22 percent. The sharp distinction between
"tall" and "short" becomes fuzzy.
This graded approach to categorisation allows the computer to learn
from experience, since the membership degrees it assigns can be adjusted
the next time a similar object is encountered. Fuzzy logic is already
in use today in software such as speech and handwriting recognition
programs, which learn to perform better through "training."
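
To make the tree example concrete, here is a minimal sketch of a fuzzy membership function in Python. The height cut-offs and the simple linear ramp are invented for illustration; a real system would shape and tune these curves from experience rather than hard-coding them.

```python
def membership_tall(height_m: float) -> float:
    """Degree (0.0 to 1.0) to which a tree of this height counts as "tall".

    Hypothetical ramp: under 5 m is not tall at all, over 20 m is
    fully tall, with a linear ramp in between.
    """
    if height_m <= 5.0:
        return 0.0
    if height_m >= 20.0:
        return 1.0
    return (height_m - 5.0) / (20.0 - 5.0)


def membership_short(height_m: float) -> float:
    """Complementary degree of membership in "short"."""
    return 1.0 - membership_tall(height_m)


height = 16.7  # metres
print(f"tall:  {membership_tall(height):.2f}")   # 0.78
print(f"short: {membership_short(height):.2f}")  # 0.22
```

The same idea extends to categories a rover cares about, such as "steep" or "rocky," and learning amounts to reshaping these curves as new examples come in.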
The combination of fuzzy logic and neural networks enables robot
pioneers to detect obstacles in unfamiliar terrain (left, a sequence
showing one image being processed), assess the relative safety of
alternative routes, and plot a path to a destination (right, a three-image
panorama), all without real-time human guidance.
Neural networks also have the ability
to learn from experience. This shouldn't be too surprising, since
the design of neural networks mimics the way brain cells -- called
"neurons" -- process information.
"Neural networks allow
you to associate general input to a specific output," Ayanna says.
"When someone sees four legs and hears a bark (the input), their
experience lets them know it is a dog (the output)." This feature
of neural networks will allow a robot pioneer to choose behaviours
based on the general features of its surroundings, much like humans
do.
To accomplish this, neural nets contain several layers of "nodes,"
which are analogous to neurons. Each node in one layer is connected
to nodes in the adjacent layers. Signals travel through this web of
connections, with each node acting as a gate that only relays signals
above a certain strength. Adjusting the strengths of the connections,
along with each node's threshold, is how the network "learns."
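
The Python sketch below wires up a toy network of exactly this kind for the dog example above: layered nodes that only pass a signal along when their weighted inputs clear a threshold. The weights and thresholds here are set by hand purely for illustration; a real network would learn them through training.

```python
def node(inputs, weights, threshold):
    """One node: relay a signal (1.0) only if the weighted sum of its
    inputs clears the threshold; otherwise stay silent (0.0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0


def tiny_network(four_legs: float, barks: float) -> float:
    """Two inputs -> a hidden layer of two nodes -> one output node."""
    hidden = [
        # Fires only when both cues are present at once.
        node([four_legs, barks], weights=[0.6, 0.6], threshold=1.0),
        # Fires on four legs alone.
        node([four_legs, barks], weights=[1.0, 0.0], threshold=0.5),
    ]
    # The output node weighs the hidden-layer evidence: "is it a dog?"
    return node(hidden, weights=[1.0, 0.2], threshold=0.9)


print(tiny_network(four_legs=1.0, barks=1.0))  # 1.0: the cues add up to "dog"
print(tiny_network(four_legs=1.0, barks=0.0))  # 0.0: not enough evidence
```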
In this simple
example of a neural network, input signals are fed into the
yellow layer on the left, pass through the two processing
layers, then emerge on the right as output signals. This architecture
can perform some surprisingly sophisticated logic, especially
when feedback loops are added.
This dinner-napkin
sketch of neural nets may sound relatively simple, but in practice,
these artificial brains can perform some astoundingly complex logic.
In fact, Ayanna calls neural nets a "black-box technology" -- in
other words, what happens between the input layer and the output
layer is often so difficult to decipher that scientists just treat
it as a "black box" that somehow converts inputs into outputs.
By combining these two technologies,
Ayanna and her colleagues at JPL hope to create a robot "brain"
that can learn on its own how to expertly traverse the alien terrains
of other planets.
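
As a rough illustration of how the two techniques might fit together -- a hypothetical sketch, not JPL's actual software -- fuzzy membership functions can turn raw sensor readings into graded terrain descriptions, and a single node of the kind a trained network would contain can then rank candidate headings. All function names, scalings, and weights below are invented.

```python
def fuzzy_roughness(bumpiness: float) -> float:
    """Degree (0.0 to 1.0) to which the ground ahead counts as "rough"."""
    return max(0.0, min(1.0, bumpiness / 10.0))  # hypothetical scaling


def fuzzy_steepness(slope_deg: float) -> float:
    """Degree to which the ground ahead counts as "steep"."""
    return max(0.0, min(1.0, slope_deg / 30.0))  # hypothetical scaling


def safety_score(bumpiness: float, slope_deg: float) -> float:
    """A single "neural" node scoring one candidate heading: higher is safer.

    The 0.5/0.5 weights stand in for values a real network would learn.
    """
    rough = fuzzy_roughness(bumpiness)
    steep = fuzzy_steepness(slope_deg)
    return 1.0 - (0.5 * rough + 0.5 * steep)


# Score three candidate headings from made-up (bumpiness, slope) readings
# and pick the safest, roughly the way a rover might weigh its options
# before committing to a path.
candidates = {"left": (2.0, 5.0), "straight": (8.0, 20.0), "right": (4.0, 25.0)}
best = max(candidates, key=lambda d: safety_score(*candidates[d]))
print(best)  # "left" for these readings
```

A mission-grade system would learn its scoring from training runs and weigh far more terrain features, but the division of labour sketched here -- fuzzy logic for graded perception, a trained network for judgement -- follows the approach the article describes.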
Such a brainy 'bot might sound more
like the science fiction fantasies of children's comics than a real
NASA project, but Ayanna thinks the sci-fi flavour of the project
contributes to its importance for space exploration.
Ayanna -- who wanted to be television's
"Bionic Woman" when she was young, and later decided she wanted
to try to build her instead -- says she believes that the flights
of imagination common in childhood translate into adult scientific
achievement.
"I truly believe science fiction drives
real science forward," she says. "You must have imagination to go
to the next level."