Prevent Situational Data from Blocking Situational Awareness

Situational Awareness (SA) came into common usage in the mid-nineties as a way to describe the fusion of technologies like GPS, Blue Force Tracking (BFT) and persistent ISR (intelligence, surveillance and reconnaissance) into a single, understandable picture that would provide the warfighter with full awareness of threats and opportunities in operations. The concept at the heart of situational awareness is as old as human civilization. The sixth-century Chinese philosopher of war, Sun Tzu, wrote in The Art of War, “If you know your enemies and know yourself, you will not be imperilled in a hundred battles.”

In Sun Tzu’s time, the primary obstacle to achieving real situational awareness was a paucity of information: communication was slow, enemy intelligence was late and inaccurate, and even the location and strength of friendly forces was largely a guess. Awareness of the situation generally arrived long after critical decisions had been made and actions taken. On the modern connected battlefield, a flood of information is available from digital mobile communications, real-time video, signals intelligence and a host of other sources. Today, rather than a paucity, we have an overabundance. The challenge now is to process and analyze all of that data, put it into actionable form, and distribute it quickly enough to be useful. True situational awareness depends on making this flood of information understandable.

The human capacity to process information in real time is great, but the data load is far greater. The bandwidth of the eye-brain interface simply cannot keep up. The architects of the U.S. Army’s abortive Future Combat Systems (FCS) program discovered this when they built three live autonomous aerial vehicle feeds into their platoon-level situational awareness system. In user tests, commanders usually ended up turning off or ignoring at least one of the feeds. It was simply too much to take in.

Clearly, information technology has a role to play in preventing the torrent of situational data from getting in the way of actual situational awareness. Improvements in video handling, particularly in image processing and visualization, have the potential to increase the amount and quality of usable data on the battlefield. A recent white paper from our experts at GE Intelligent Platforms explains, “The image processor segment is where data extraction from raw video occurs. Image registration, detection of moving objects and target tracking occur in this processing area.”
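To give a rough sense of the kind of work the image processor segment performs, detection of moving objects can be sketched in its simplest form as frame differencing: flag the pixels whose intensity changed noticeably between consecutive frames. This is a generic toy illustration in plain Python, not GE's implementation; real systems add image registration, noise filtering and tracking on top of it.

```python
# Toy frame-differencing detector: flags pixels whose intensity
# changed by more than a threshold between consecutive frames.
# Frames are plain 2D lists of grayscale values (0-255).

def detect_motion(prev_frame, curr_frame, threshold=20):
    """Return (row, col) coordinates where intensity changed noticeably."""
    moving = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(p - q) > threshold:
                moving.append((r, c))
    return moving

# Two tiny 4x4 grayscale frames; a bright "target" appears in one corner.
frame_a = [[10, 10, 10, 10] for _ in range(4)]
frame_b = [[10, 10, 10, 10],
           [10, 10, 10, 10],
           [10, 10, 200, 210],
           [10, 10, 205, 215]]

print(detect_motion(frame_a, frame_b))  # [(2, 2), (2, 3), (3, 2), (3, 3)]
```

Everything that passes the threshold test is a candidate "moving object"; the hard part in practice is separating genuine targets from sensor noise and camera motion, which is where the downstream tracking stages come in.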

Products like GE’s ADEPT3100 Video Tracker separate “interesting” visual objects from “uninteresting” visual noise, making temporal and spatial measurements of those interesting objects.
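The "temporal and spatial measurements" a tracker makes can be illustrated with a toy centroid tracker. This is a hypothetical sketch of the general technique, not the ADEPT3100's internals: given the detected pixels of an object in two successive frames, it measures the object's position (a spatial measurement) and its frame-to-frame displacement (a temporal one).

```python
# Toy illustration of spatial (centroid) and temporal (velocity)
# measurements on a tracked object. Not the ADEPT3100 algorithm.

def centroid(pixels):
    """Mean (row, col) position of a detected object's pixels."""
    n = len(pixels)
    return (sum(r for r, _ in pixels) / n,
            sum(c for _, c in pixels) / n)

def velocity(pixels_t0, pixels_t1):
    """Frame-to-frame displacement of the object's centroid."""
    r0, c0 = centroid(pixels_t0)
    r1, c1 = centroid(pixels_t1)
    return (r1 - r0, c1 - c0)

# An object occupies a 2x2 patch, then shifts one pixel to the right.
obj_t0 = [(2, 2), (2, 3), (3, 2), (3, 3)]
obj_t1 = [(2, 3), (2, 4), (3, 3), (3, 4)]

print(centroid(obj_t0))          # (2.5, 2.5)
print(velocity(obj_t0, obj_t1))  # (0.0, 1.0) -> moving right one pixel/frame
```

Measurements like these, accumulated over many frames, are what let a tracker distinguish an object moving purposefully across the scene from flickering visual noise.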

Visualization processors provide the interface to the eye and brain: a means by which content-rich video and associated metadata are packaged for human consumption. Visualization processing includes image enhancement, mosaicking/stitching of distributed apertures, graphical overlay, image fusion, image stabilization and advanced analytics, such as target classification and motion analysis. For example, the application of high-performance visualization technology using GE’s IPS511 Image Processing System allows multiple images to be composited on one display.
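At its core, compositing multiple images on one display can be sketched as alpha blending, a weighted per-pixel mix of two sources. The snippet below is a generic illustration of that technique, not the IPS511's API; the `weight` parameter and the sample images are invented for the example.

```python
# Toy alpha-blend compositor: mixes a graphical overlay into a base image.
# weight = 1.0 shows only the overlay; 0.0 shows only the base image.

def composite(base, overlay, weight=0.5):
    """Blend two same-sized grayscale images pixel by pixel."""
    return [[round((1 - weight) * b + weight * o)
             for b, o in zip(base_row, over_row)]
            for base_row, over_row in zip(base, overlay)]

base = [[100, 100], [100, 100]]   # e.g. a frame of sensor video
overlay = [[0, 200], [200, 0]]    # e.g. a graphical overlay layer

print(composite(base, overlay, 0.5))  # [[50, 150], [150, 50]]
```

The same weighted-mix idea generalizes to fusing imagery from different sensors, such as blending visible and infrared video so an operator reads both in a single glance.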

Todd Stiefler

Todd joined GE from the world of Washington politics, and in no time at all has moved on to his second assignment, which sees him managing business development for the services GE is increasingly looking to offer to customers, including the Proficy SmartSignal predictive analytics software.
