In a system invented by UB computer scientists, surveillance photos like this one are annotated with voice and mouse-click information for storage and retrieval.





Software sorts photos and video for defense agencies


by Jessica Ancker

If you think it's hard to keep track of the snapshots you brought back from summer vacation, just think what a chore it would be to label and sort thousands of them year after year.

That's exactly what U.S. defense agencies must do with surveillance photos and video images collected from all over the world.

Help may soon be on the way from University at Buffalo research scientist Rohini K. Srihari, Ph.D., and her team at UB's Center of Excellence for Document Analysis and Recognition (CEDAR).

The group has created a computerized annotation and retrieval system for photographs and is working on a second system for video. Srihari's programs integrate visual and linguistic information to analyze, organize, and retrieve images, and they can understand spoken language, typed commands, and mouse clicks.

The prototype "Show and Tell" program, funded by the Defense Department's Advanced Research Projects Agency, was developed for government surveillance photos. A human image analyst first uses a mouse to mark and label roads, which serve to divide the digitized picture into sections for reference. Then the analyst talks to the computer, describing important features.

The analyst might say, "The large structure in the left half of section D is a missile silo," or, using both speech and a mouse, "This [click] building is a school." More than one analyst can add comments to each image.

"Show and Tell" then transcribes the speech and processes it through natural- language understanding, image interpretation, and spatial-reasoning systems to determine which items in the photo are being described.

Then, in response to spoken commands, the program can retrieve images and highlight features in them that were labeled earlier. It can even draw new conclusions about an image. For instance, the program could respond to the command "Show me the building closest to the road" even if none of the buildings had been labeled with that information.
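That kind of answer reduces to simple geometry once the photo's objects have coordinates. The following is a minimal sketch of the idea, not CEDAR's spatial-reasoning system; it uses the standard point-to-segment distance to find which building lies nearest a road drawn as a polyline, and all names and coordinates are invented for illustration.

    import math

    def point_segment_distance(p, a, b):
        """Distance from point p to the segment a-b (all (x, y) tuples)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        # Project p onto the segment, clamping to the endpoints.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def closest_building_to_road(buildings, road):
        """buildings: {name: (x, y)}; road: list of (x, y) polyline vertices."""
        def distance_to_road(point):
            return min(point_segment_distance(point, road[i], road[i + 1])
                       for i in range(len(road) - 1))
        return min(buildings, key=lambda name: distance_to_road(buildings[name]))

    # Answers "the building closest to the road" even though no building
    # was ever labeled with that relationship.
    print(closest_building_to_road({"school": (120, 80), "warehouse": (300, 400)},
                                   [(0, 60), (200, 70), (450, 90)]))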

The program's ability to understand spoken language makes it easier and quicker to use. "It just would not be practical to give the analysts a keyboard and ask them to type away--especially when they're under time pressure," Srihari notes.

Srihari also envisions commercial medical applications for "Show and Tell." For example, the program could allow one or more doctors to describe pathologies shown in an X ray. "Show and Tell" could then catalog and retrieve the X rays for treatment, teaching, reference, or research.

"Our customers are the defense agencies, but all of the projects have civilian applications as well," she points out.

"The defense agencies capture enormous amounts of video. What they're looking for is a way to skim through to find relevant information so they can develop their policies and responses," she explains.

One problem is that continuous video segments can be many hours long. To address this, the CEDAR team developed a segmentation program that divides the video logically based on the voice annotation. The program might insert a divider whenever activity occurs, such as when a helicopter takes off.
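The article does not spell out how the segmenter decides where to cut. One simple reading, sketched below purely as a hypothetical illustration, is to scan the time-stamped voice annotations for activity words and place a divider at each match.

    # Hypothetical sketch: place dividers in a long video by scanning its
    # time-stamped voice annotations for words that signal activity.
    ACTIVITY_WORDS = {"takes off", "lands", "launches", "departs", "arrives"}

    def segment_boundaries(annotations, activity_words=ACTIVITY_WORDS):
        """annotations: list of (seconds_from_start, transcript) pairs.
        Returns the timestamps at which to insert dividers."""
        return [timestamp for timestamp, text in annotations
                if any(word in text.lower() for word in activity_words)]

    print(segment_boundaries([
        (120.0, "Two vehicles parked near the hangar"),
        (905.5, "A helicopter takes off from the pad"),       # divider
        (4310.0, "The helicopter lands at the second site"),  # divider
    ]))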

The dividers give image analysts a convenient way to fast-forward through hours of video to find important features. As in "Show and Tell," the analysts then describe and annotate the images.

One way the program learns to recognize synonyms and related words is by reading documents and determining which words appear most frequently. From reading a report on vehicles, for example, the program might learn that Jeeps and trucks are both types of vehicles. It could then use that information to respond more intelligently to commands.
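The article gives only the outline of that learning step. As a toy illustration (hypothetical, and far cruder than a real lexical-acquisition system), one could count which words tend to share a sentence with a known term such as "vehicle" and treat the frequent companions as related terms.

    import re
    from collections import Counter

    STOPWORDS = {"a", "an", "the", "and", "or", "was", "were", "each", "one", "either"}

    def related_terms(documents, anchor="vehicle", top_n=5):
        """Return the words that most often share a sentence with `anchor`."""
        counts = Counter()
        for doc in documents:
            for sentence in re.split(r"[.!?]", doc):
                words = re.findall(r"[a-z]+", sentence.lower())
                if anchor in words:
                    counts.update(w for w in words
                                  if w != anchor and w not in STOPWORDS)
        return [word for word, _ in counts.most_common(top_n)]

    report = ("The convoy included several vehicles. Each vehicle was either a jeep "
              "or a truck. One vehicle, a jeep, led the trucks along the road.")
    print(related_terms([report]))  # 'jeep' and 'truck' rise to the top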

A prototype will be unveiled by autumn. Srihari says that while the code for each program will be different, the conceptual issues are similar.

"The research goal has been to exploit linguistic context in computer vision-- helping the computer figure out what it is seeing based on textual clues provided by the human user."