Too Much Information: Taming the UAV Data Explosion

[defenseindustrydaily.com] TMI! That’s what US military commanders are saying about the
explosion of data being collected and processed (or not) by thousands of
UAVs. Because UAVs provide valuable information, the US military has
been asking for more and more of them to be sent to Iraq and
Afghanistan. Be careful what you wish for. You might just get it.

All that information needs to be processed so that it is useful for
the commanders in the field. Software that can archive and retrieve
information when needed and display it on a user-friendly interface is
available in the commercial sphere. But the technology is not being
developed and deployed fast enough in the military sphere.

As Lt. Gen. Deptula, USAF deputy chief of staff for intelligence,
surveillance, and reconnaissance, said recently, “We are going to find
ourselves in the not too distant future swimming in sensors and drowning
in data.” This free-to-view DID Spotlight article examines the problem
of the UAV data explosion, some possible solutions, and future
challenges.

Proliferation of UAV Data: The Stats

UAVs have played a crucial role in the US military’s wars in Iraq
and Afghanistan. With all the useful information that UAVs provide comes
the problem of how to sort through it all and find actionable data. The
scope of the problem is apparent. There are thousands of UAVs deployed
in Iraq and Afghanistan. In 2009, the USA’s UAVs alone generated 24
years’ worth of video if watched continuously. New UAV models are
expected to produce 30 times as much information in 2011.

The USAF flies 39 orbits over Afghanistan and Iraq every day, and
the service expects that number to increase to 50 by 2011. An orbit
is a 24-hour combat flight by a single UAV. The USAF uses two shifts
of operators per orbit for its high-flying, long-endurance UAVs (MQ-1
Predator, MQ-9 Reaper, and RQ-4 Global Hawk), so increasing the number
of orbits to 50 is expected to double the requirement for operators.

New technological developments are expected to compound the data
explosion problem. For example, the USAF is planning to add a wide area
airborne surveillance sensor to its MQ-9 Reaper and, eventually, its
other UAVs. This system is expected to add 50 video streams per sensor
within a few years. The USAF is aiming to have a version deployed on
the MQ-9 by the summer of 2010. Made by Sierra Nevada, the Gorgon Stare
sensor system is named after the three sisters of Greek mythology who
had a gaze that would turn anyone who beheld it to stone.

The USAF has increased the number of UAVs over the last 2 years by
330%. It also plans to shift 3,600 manpower billets to analyzing data
streaming from UAVs, and it is doubling the number of ISR liaison
officers assigned to ground forces to assist with integration of UAV
data collection and exploitation.

The Deluge: Breaking Down Barriers

In addition to the proliferation of UAVs and the exponential
expansion of sensor capability, the Pentagon is engaged in an effort to
break down proprietary barriers between UAV systems. This effort is
intended to allow commanders on the battlefield as well as analysts back
in CONUS to access important information no matter which system
collects it.

For example, the popular MQ-1 Predator UAV system comes in a package
with 4 vehicles, 1 ground control station (GCS), and a data link suite
that consists of UHF and VHF radio relay links, a C-band line-of-sight
data link, and Ku-band satellite data links.

Unfortunately, the Predator GCS can only control and process
information from Predator vehicles. The RQ-4 Global Hawk GCS controls
and processes information from Global Hawks. And other UAVs use their
own proprietary GCS systems.

In 2008, the Pentagon launched an effort to develop and demonstrate a
common, open GCS architecture supporting everything from the MQ-8
Fire Scout unmanned helicopter to the long-range Global Hawk. The
intent is to end the packaging of UAVs and GCS by manufacturers as 1
proprietary system. The Pentagon wants GCSs to be able to control
multiple types of UAVs and share information across platforms. See “It’s
Better to Share: Breaking Down UAV GCS Barriers” for more information.

While breaking down barriers sounds like a great idea, it adds to
the data explosion problem. The good news is that there is a lot of
information out there; the bad news is it’s hard to find the right
information. If all of the UAV systems can share data, who or what is
going to sort the data so that useful information can emerge from the
raw feeds?

The Madden Effect: Video Tagging and Retrieval

One solution to the data explosion is to tag the data, store it, and
retrieve it when needed. An application of this technique is used in
coverage of NFL football games.

A new $500 million computer system being installed by the US Air
Force will enable it to use TV broadcast techniques and send out
highlight reels of the greatest battlefield moments, i.e., the most
important video feeds for the commander. The video is tagged with time,
geographic coordinates, and other essential data.

Not to be outdone, the US Navy is climbing into networks’ broadcast
trucks located outside of football stadiums to get a first-hand view of
the technology. Cmdr. Joseph Smith, a Navy officer assigned to the
National Geospatial-Intelligence Agency, told the New York Times
that he and other officials learned a lot from watching the technology
in action (besides the scores of their favorite teams).

“There are these three guys who sit in the back of an ESPN or Fox
Sports van, and every time Tom Brady comes on the screen, they tap a
button so that Tom Brady is marked.” Then, to call up the highlights
later, he said, “they just type in: ‘Tom Brady, touchdown pass.’ ” This
retrieves the video they need at that moment.

The US military would like to implement a similar system for its UAV
videos. However, tagging can be labor intensive. In addition, the right
tags need to be used so that a person searching for video in a
time-sensitive situation isn’t frustrated by being unable to find the
footage needed. If the tags don’t make sense to the commander
searching for the video, the technology will be useless.
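
As a rough illustration of the mechanics, the tag-and-retrieve workflow
boils down to an inverted index from tags to video clips: tag at ingest,
intersect tags at query time. The Python sketch below is a minimal toy;
the field names and tags are illustrative assumptions, not the Air
Force’s actual schema.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        """One tagged segment of UAV video (illustrative fields)."""
        video_id: str
        start_s: float            # offset into the source feed, seconds
        end_s: float
        lat: float                # where the sensor was looking
        lon: float
        tags: frozenset = frozenset()

    class ClipIndex:
        """Inverted index from tag to clips -- the 'Tom Brady,
        touchdown pass' lookup, applied to battlefield video."""

        def __init__(self):
            self._clips = []      # all ingested clips
            self._by_tag = {}     # tag -> set of clip positions

        def add(self, clip):
            idx = len(self._clips)
            self._clips.append(clip)
            for tag in clip.tags:
                self._by_tag.setdefault(tag.lower(), set()).add(idx)

        def search(self, *tags):
            """Return clips that carry every requested tag."""
            hits = [self._by_tag.get(t.lower(), set()) for t in tags]
            if not hits:
                return []
            return [self._clips[i] for i in sorted(set.intersection(*hits))]

    # Usage: tag once at ingest, retrieve under time pressure later.
    index = ClipIndex()
    index.add(Clip("feed-042", 310.0, 355.0, 34.52, 69.17,
                   frozenset({"white pickup", "compound gate"})))
    print(index.search("white pickup"))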

Like John Madden, the US military is using telestrators, such as the
one on the Remotely
Operated Video Enhanced Receiver (ROVER)
. This technology is
similar to that used by Madden to mark and analyze football plays on the
video screen. The telestrator enables US military commanders in the
field to circle images of vehicles or individuals they want the UAVs to
track.

Data Fusion: Telling a Story

Data fusion involves the use of techniques and software that combine
data from multiple sources and analyze that data to make it useful for
the end user.

Data fusion can involve combining data (such as UAV video) with a
geographical information system (GIS), which adds location and time
data to the images gathered by UAVs. To accomplish this, the raw data
has to be paired with metadata (information about the data, such as
when and where each image was captured) so that it can be placed within
a GIS.
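
As a hedged illustration of that pairing, the sketch below wraps one
frame’s metadata as a GeoJSON Feature, a common interchange format
that GIS tools can ingest. The field names and sensor label are
illustrative assumptions, not a fielded military schema.

    import json
    from datetime import datetime, timezone

    def frame_to_geojson(frame_id, lat, lon, captured_at, sensor):
        """Wrap one video frame's metadata as a GeoJSON Feature so a
        GIS can place it on the map alongside other data layers."""
        return {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {
                "frame_id": frame_id,
                "captured_at": captured_at.isoformat(),
                "sensor": sensor,
            },
        }

    feature = frame_to_geojson(
        "feed-042/frame-9912", 34.52, 69.17,
        datetime(2010, 5, 4, 13, 7, tzinfo=timezone.utc), "MQ-9 EO/IR")
    print(json.dumps(feature, indent=2))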

According to the Belgian Royal Military Academy, data fusion can
provide the following military benefits:

  • “improved confidence in decisions due to the use of
    complementary information (e.g. silhouette of objects from visible
    image, active/non-active status from infra-red image, speed and range
    from radar, etc.);

  • improved performance to countermeasures (it is very
    hard to camouflage an object in all possible wave-bands);

  • improved performance in adverse environmental
    conditions. Typically smoke or fog cause bad visible contrast and some
    weather conditions (rain) cause low thermal contrast (Infra Red
    imaging), combining both types of sensors should give better overall
    performance.”

An example of a basic data fusion system is the Link 16 standard
embedded in the MIDS-LVTs carried by fighters. A target seen and
identified by any fighter jet in a formation, or any linked ground
station or ship, is seen and identified for all.

Data fusion is a subset of information fusion, which is such an
important issue that the US Navy has set up a center to tackle it: the
NAVAIR Information Fusion Center. It is run by NAVAIR’s Naval Air
Warfare Center Weapons Division (NAWCWD).

Robert Reddit, NAWCWD’s Director of Information Fusion, describes
the purpose of information fusion this way:

“Information fusion is the science behind Critical
Infrastructure Protection, Homeland Security, ForceNet and Maritime
Domain Awareness. Currently there are hundreds of rooms with hundreds of
individuals all tracking tens of thousands of aircraft, maritime
vessels, ground vehicles and individuals with everyone looking for the
needle in the haystack. Information fusion reduces the rooms and
individuals and finds the needle, pulls it out of the hay and puts it
where it won’t hurt anybody.”

To get the ball rolling, in 2009 the center awarded a $95 million
contract to General Dynamics to support the center’s work.
The firm is helping with research and development, integration and
testing, continual advancement and operation of the Information Fusion
Center; training for newly developed software, hardware and other
products; and independent verification and validation of sensors and
systems relating to critical infrastructure protection and force
protection.

To support the center, General Dynamics is using its Quarterback
Information Fusion capability and Story Maker fusion system. Story
Maker provides an overall reconnaissance architecture that stresses
multi-service/multi-platform utility, interoperability among existing
and planned airborne reconnaissance components, timely dissemination of
intelligence information to operational forces, enhanced combat
identification capability, and high payoff multi-use technology. Through
the application of algorithms, Story Maker fuses and reasons with
collected data, building evidence for track identification. Story Maker
enables identification of 10 times as many tracks with 98.6% accuracy.
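
General Dynamics has not published Story Maker’s internals, but
“building evidence for track identification” is classically done with
Bayesian updating: each new sensor report reweights the belief over
candidate identities for a track. The Python sketch below illustrates
that generic technique with invented hypotheses and likelihoods; it is
not Story Maker’s actual algorithm.

    def update_identity(prior, likelihood):
        """One Bayesian evidence step: multiply the prior belief over
        candidate identities by the likelihood of the new observation
        under each identity, then renormalize."""
        posterior = {k: prior[k] * likelihood.get(k, 1e-9) for k in prior}
        total = sum(posterior.values())
        return {k: v / total for k, v in posterior.items()}

    # Start undecided among three hypotheses for one maritime track.
    belief = {"fishing boat": 1/3, "cargo ship": 1/3, "patrol craft": 1/3}

    # Radar reports a speed most consistent with a patrol craft...
    belief = update_identity(belief, {"fishing boat": 0.1,
                                      "cargo ship": 0.2,
                                      "patrol craft": 0.7})
    # ...and an infrared image shows a hot, active engine signature.
    belief = update_identity(belief, {"fishing boat": 0.3,
                                      "cargo ship": 0.3,
                                      "patrol craft": 0.6})
    print(max(belief, key=belief.get), belief)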

While Story Maker was originally developed to integrate information
collected by the Navy’s EP-3E ISR aircraft, the technology used by the
system can be applied to data fusion for a range of platforms.

Googling the Enemy: Intelligent Search

Intelligent search is another tool that can be used to make UAV data
more accessible. Probably the most famous and widely used intelligent
search engine is Google.

Google uses a patented algorithm called PageRank to rank the pages
that match a given search string. The algorithm analyzes
human-generated links, assuming that web pages linked from many
important pages are themselves likely to be important. This produces
results that tend to be in line with human concepts of importance. The
founders of Google, Sergey Brin and Lawrence Page, laid out their
Google vision as researchers at Stanford University.
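
For readers curious about the mechanics, the heart of PageRank can be
sketched in a few lines of Python. This is the textbook power-iteration
version run over a toy 3-page web, not Google’s production system.

    def pagerank(links, damping=0.85, iters=50):
        """Power-iteration PageRank: a page's score is the chance that
        a 'random surfer' lands on it, so links from important pages
        count for more than links from obscure ones."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iters):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for p, outs in links.items():
                if outs:
                    share = damping * rank[p] / len(outs)
                    for q in outs:      # p passes rank to pages it links
                        new[q] += share
                else:                   # dangling page: spread rank evenly
                    for q in pages:
                        new[q] += damping * rank[p] / len(pages)
            rank = new
        return rank

    # Page 'a' is linked by both other pages, so it ranks highest.
    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))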

Of course, searching for UAV video is a lot different from searching
the web for a favorite recipe, but the technology is similar.
Algorithms can be used to develop intelligent search engines for UAV
video.

To help users retrieve information, one promising method, natural
language processing, discovers user preferences and needs by either
extracting knowledge from what users are looking for or interactively
generating clarifying requests that focus users on the information they
are interested in. In particular, techniques for automatically
generating natural language sentences allow the system to produce a
useful dialog with the user and guide preferences.

The use of natural language to improve the performance of intelligent
search engines is one aspect of artificial intelligence (AI). Other AI
technologies include object recognition and statistical machine
learning.

Barney Pell, developer of the Powerset AI search engine, describes the
use of AI for intelligent search in the following way:

“Search engines try to train us to become good keyword
searchers. We dumb down our intelligence so it will be natural for the
computer…The big shift that will happen in society is that instead of
moving human expressions and interactions into what’s easy for the
computer, we’ll move computers’ abilities to handle expressions that are
natural for the human.”

An intelligent search engine being developed for photos is called
Riya, which looks “inside” photos, using AI to extract information
about their qualities. Riya uses algorithms to calculate a photo’s
shape densities, patterns and textures and distill this information
into a visual signature. Each photo is represented by 6,000 numbers;
Riya uses AI to match one visual signature to another.
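
Once each photo is boiled down to a signature vector, matching becomes
a nearest-neighbor search. The Python sketch below illustrates the idea
with cosine similarity over 3-component vectors; Riya’s actual
signatures, distance metric, and indexing scheme are not public, so the
names and numbers here are illustrative.

    import math

    def cosine(a, b):
        """Similarity of two signature vectors (1.0 = same direction)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def best_match(query, archive):
        """Find the archived image whose signature is closest to the query."""
        return max(((name, cosine(query, sig))
                    for name, sig in archive.items()),
                   key=lambda pair: pair[1])

    # Riya-style signatures would have ~6,000 components;
    # three are enough to show the mechanics.
    archive = {"frame-001": [0.9, 0.1, 0.3],
               "frame-002": [0.2, 0.8, 0.5]}
    print(best_match([0.85, 0.15, 0.35], archive))   # -> frame-001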

This technology could be applied to UAV photos and videos so that
related videos from different platforms or from a vast archive could be
matched within seconds. This would enable commanders to search and
retrieve valuable data in real-time on the battlefield.

Another application of AI in intelligent search is facial recognition.
The USAF UAV Battlelab is working on software that can pick out face
patterns in UAV video. The software is based on that used by the Nevada
gaming industry to pick out problem gamblers.

The UAV Battlelab tested the software using the photo of a USAF
captain’s face. The staff launched a Pointer UAV, which began beeping at
a clump of trees 2 miles away.

As the UAV approached the trees, the operators could detect a
vehicle underneath the trees. As the UAV got closer, the operators were
able to detect the captain sitting in the vehicle underneath the trees.
Yet the UAV had indicated the captain was there from 2 miles away
through foliage and a vehicle windshield using the facial recognition
software.

Blind Men and an Elephant: Sensor Fusion

One area of UAV autonomy that could help with the data overload is
sensor fusion.

To better understand the concept of sensor fusion, it might help to
recall the story of the blind men and the elephant. As the story goes,
a group of
blind men examine an elephant and try to determine what it is. Each man
grabs a different part of the elephant. One grabs the tail and believes
the elephant is like a rope. Another grabs the trunk and believes that
the elephant is like a snake. Another grabs the leg and believes the
elephant is like a tree. Each has information about a part of the
elephant, but no one has the whole picture.

UAVs are like the blind men: each collects information about the
particular narrow area it is examining. UAV operators compare
looking through a UAV camera to looking through a soda straw. The video
is partial and needs to be combined with data from other UAVs to create a
complete and accurate picture. That is the goal of sensor fusion.

One application is to use a multi-sensor fusion algorithm that can
bring together information from multiple UAVs about a particular target
of interest in a unified display. Without sensor fusion, each UAV
platform would track the target separately, which creates redundant
capability and leads to less-than-optimal tracking results. Together,
the UAVs can provide a full and accurate picture on which the commander
can act.
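
One textbook way to bring several platforms’ estimates of the same
target together is inverse-variance weighting, in which more confident
sensors count for more. The Python sketch below illustrates that
generic technique with invented numbers; it is a minimal sketch, not
any fielded fusion algorithm.

    def fuse_position(estimates):
        """Fuse several (lat, lon, variance) reports of one target.
        Each report is weighted by 1/variance, so tighter sensor
        estimates dominate the fused track."""
        weights = [1.0 / var for _, _, var in estimates]
        total = sum(weights)
        lat = sum(w * e[0] for w, e in zip(weights, estimates)) / total
        lon = sum(w * e[1] for w, e in zip(weights, estimates)) / total
        return lat, lon

    # Three UAVs report the same vehicle with different confidence.
    reports = [(34.5201, 69.1710, 4.0),   # wide-area sensor, coarse
               (34.5198, 69.1702, 1.0),   # Predator EO ball
               (34.5199, 69.1704, 0.5)]   # low-altitude pass, tight
    print(fuse_position(reports))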

In fact, the USAF awarded a contract on April 23/10 to Aurora Flight
Sciences Corp. for research on collaborative sensor fusion and
management of multiple UAVs.

Working with researchers from MIT, the Office of Naval Research, the
Air Force Research Lab, and the Rome Labs, Aurora developed autonomy
technologies for multi-unmanned vehicle coordination, sensor management
and sensor fusion, to enable coordinated search, track and prosecute
missions with various unmanned vehicles carrying a variety of sensors.

Aurora’s multi-vehicle system can function either fully
autonomously, in a distributed way, or with humans in the loop at the
command level (to supervise the unmanned vehicles and make strategic
decisions with the aid of a centralized planning interface) and/or at
the sensor level (to assist with target identification and tracking).

Future Shock

What does the future hold for the UAV data explosion? It seems that
the US military will continue to demand more eyes in the sky to detect
potential threats. And the need to sort through and integrate that
information in a way that enables commanders in the field to save US
lives and defeat the enemy will become more acute.

The industry will need to develop smart technologies that automate
the process of archiving, tagging, retrieving, managing, and displaying
UAV videos and other info gathered by increasingly sophisticated
sensors. Machine-to-machine interfaces will need to become much more
sophisticated to handle the data deluge.

Of course, there are other sensors besides UAVs collecting
information about the battlespace. How to bring those thousands of
ground, air and sea-based sensors, as well as human intelligence,
together in a usable format will be a huge challenge for the military.

Perhaps the US military could take a cue from the commercial sector,
which is facing a similar problem of data overload. The world is
expected to create 1,200 exabytes of information in 2010 (an exabyte is
a billion gigabytes).

To tackle this data tsunami, industry is turning to technologies such
as business analytics (performing statistical operations for
forecasting or uncovering correlations), machine learning, and
visualization software, all of which have military applications.

The need is urgent, the solution complex. The coming years will see
if the US military will be able to adopt UAV data management solutions
used in the commercial world, while retaining the ability to protect
information that needs to be protected. US soldiers’ lives depend on it.

Source: http://www.defenseindustrydaily.com/uav-data-volume-solutions-06348/