Friday, October 7, 2011

Software simplifies system specification

National Instruments' NI Week in Austin, TX, was a great chance to learn how designers of vision-based systems used the company's LabVIEW graphical programming software to ease the burden of software development.

But as useful as such software is, I couldn't help but think that it doesn't come close to addressing the bigger issues faced by system developers at a much higher, more abstract level.

You see, defining the exact nature of any inspection problem is the most taxing issue that system integrators face. And only when that has been done can they set to work choosing the lighting, the cameras, and the computer, and writing the software that is up to the task.

It's obvious, then, that software like LabVIEW only helps tackle one small part of this problem. But imagine if it could also select the hardware, based simply on a higher-level description of an inspection task. And then optimally partition the software application across such hardware.
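To make the idea a little more concrete, here's a purely hypothetical sketch, in Python, of what such a high-level task description might look like and how a tool might derive a rough camera requirement from it. Every name here is my own invention; no such capability exists in LabVIEW today:

# Purely hypothetical sketch: a high-level inspection-task description
# from which a future tool might select hardware. All names are invented.

from dataclasses import dataclass

@dataclass
class InspectionTask:
    part_size_mm: float        # largest dimension of the part under test
    smallest_defect_mm: float  # smallest flaw that must be resolved
    parts_per_second: float    # required throughput
    color_required: bool

def minimum_camera_spec(task: InspectionTask, pixels_per_defect: int = 3):
    """Derive a rough camera requirement from the task description."""
    # Resolution: enough pixels across the part to put a few pixels on a defect.
    pixels_across = int(task.part_size_mm / task.smallest_defect_mm) * pixels_per_defect
    return {"min_resolution": (pixels_across, pixels_across),
            "min_fps": task.parts_per_second,  # at least one frame per part
            "sensor": "color" if task.color_required else "mono"}

task = InspectionTask(part_size_mm=100, smallest_defect_mm=0.5,
                      parts_per_second=4, color_required=False)
print(minimum_camera_spec(task))
# {'min_resolution': (600, 600), 'min_fps': 4, 'sensor': 'mono'}

A real tool would, of course, also have to weigh lighting, optics, and cost before partitioning the software across the chosen hardware; the point is simply that the starting input could be this abstract.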

From chatting to the NI folks in Texas, I got the feeling that I'm not alone in thinking that this is the way forward. I think they do, too. But it'll probably be a while before a LabVIEW-style product with that kind of functionality built in reaches the market.

In the meantime, be sure to check out our October issue (coming online soon!) to see how one of NI's existing partners -- Coleman Technologies -- has used the LabVIEW software development environment to create software for a system that can rapidly inspect dinnerware for flaws.

Needless to say, the National Instruments software didn't choose the hardware for the system. But perhaps we will be writing an article about how it could do so in the next few years.

Wednesday, October 5, 2011

Spilling the military vision beans

While many fascinating application challenges have been resolved by machine-vision systems, many of those stories go unreported.

That's because the original equipment manufacturers (OEMs) that create such vision-based machines are required to sign non-disclosure agreements (NDAs) with their customers to restrict what information can be revealed.

Oftentimes, it's not just the specifications of the machine that must be kept under wraps. These NDAs also restrict disclosure of the challenge that had to be addressed before development of the system even commenced.

Now, you might think that the development of vision systems for the military marketplace would be an even more secretive affair. After all, keeping quiet about a vision system built to protect those in battle would initially appear far more imperative than keeping quiet about a machine that inspects food or fuel cells.

While the specifics of military designs are almost impossible to obtain legally, the same cannot be said of descriptions of the systems that the military would like to see developed in the future.

Such descriptions are often found in extensive detail on numerous military procurement sites, even down to the sorts of software algorithms and hardware implementations that must be deployed.

Could it be that, in doing so, the military minds are handing over potentially useful information to research teams in rogue states? If they are, then surely they are making a mockery of the very International Traffic in Arms Regulations (ITAR) that control the export and import of defense-related materials and services.

Thursday, September 29, 2011

Imaging is all in the mind

In the 1983 science-fiction movie classic Brainstorm, a team of scientists invents a helmet that allows sensations and emotions to be recorded from a person's brain and converted to tape so that others can experience them.

While this seemed quite unbelievable nearly thirty years ago, it now appears that scientists at the University of California-Berkeley are bringing these futuristic ideas a little closer to reality!

As farfetched as it might sound, the university team in professor Jack Gallant's laboratory has developed a system that uses functional magnetic resonance imaging (fMRI) and computational algorithms to "decode" and then "reconstruct" visual experiences such as watching movies.

UC-Berkeley's Dr. Shinji Nishimoto and two other research team members served as guinea pigs to test out the system, which required them to remain still inside an MRI scanner for hours at a time.

While they were in the scanner, they watched two separate sets of movie trailers while the fMRI system measured the blood flow in their occipitotemporal visual cortices. The blood-flow images captured by the scanner were then divided into sections and fed into a computer program that learned which visual patterns in the movies corresponded with particular patterns of brain activity.

Brain activity evoked by a second set of clips was then used to test a movie-reconstruction algorithm developed by the researchers. The researchers fed random YouTube videos into the program; the 100 clips that it judged closest to what the subject had probably seen, based on the measured brain activity, were then merged to produce a continuous reconstruction of the original clips.
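For readers who like to see the shape of such things, here's a loose Python sketch of that final matching-and-merging step. The names and structure are my own invention, not the Gallant lab's actual code:

# Loose sketch of the reconstruction step described above: score a library
# of candidate clips against the measured brain activity, then merge
# (average) the 100 best matches. Invented names throughout.

import numpy as np

def reconstruct(measured_activity, clip_library, encoding_model, n_best=100):
    """
    measured_activity : 1-D array of fMRI voxel responses for one moment
    clip_library      : list of (frames, features) pairs from random clips
    encoding_model    : maps clip features -> predicted voxel responses
    """
    scores = []
    for frames, features in clip_library:
        predicted = encoding_model(features)
        # Correlation between predicted and measured activity as the score.
        score = np.corrcoef(predicted, measured_activity)[0, 1]
        scores.append((score, frames))
    scores.sort(key=lambda s: s[0], reverse=True)
    # Averaging the best-matching clips yields a blurry composite image.
    best = [frames for _, frames in scores[:n_best]]
    return np.mean(best, axis=0)

That final averaging step is also why the published reconstructions look so dreamlike and blurred.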

The researchers' ideas might one day lead to the development of a system that could produce moving images that represent dreams and memories, too. If they do achieve that goal, however, I can only hope that the images are just as blurry as the ones that they have produced already. Anything sharper might be a little embarrassing!

Tuesday, September 27, 2011

Mugging more effective than infrared imaging

All technology can be used for both good and evil purposes. Take infrared cameras, for example. While they can be used to provide a good indication of where your house might need a little more insulation, they can also be used by crooks to capture the details of the PIN you use each time you slip your card into an ATM to withdraw cash.

That, at least, is the opinion of a band of researchers from the University of California at San Diego (San Diego, CA, USA) who have apparently now demonstrated that the secret codes typed in by banking customers on ATMs can be recorded by a digital infrared camera due to the residual heat left behind on their keypads.

According to an article on MIT's Technology Review web site, the California academics showed that a digital infrared camera can read the digits of a customer's PIN on the keypad more than 80% of the time if used immediately; if the camera is used a minute later, it can still detect the correct digits about half the time.
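The underlying principle is simple enough to sketch in a few lines of Python: a key pressed more recently retains more heat, so sorting the warm keys by residual temperature hints at the order in which they were pressed. The numbers below are made up for illustration; this is not the UCSD team's actual method:

# Toy illustration of the residual-heat principle described above.
# Keys noticeably above ambient were probably pressed, and the coolest
# of those was pressed first (it has had the longest to cool down).

readings_c = {"4": 27.9, "7": 28.6, "1": 28.2, "9": 27.5}  # per-key temps
ambient_c = 26.0

pressed = {k: t for k, t in readings_c.items() if t - ambient_c > 1.0}
likely_order = sorted(pressed, key=pressed.get)  # coolest key first
print("".join(likely_order))  # -> "9417" under these made-up readings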

Keaton Mowery, a doctoral student in computer science at UCSD, conducted the research with fellow student Sarah Meiklejohn and professor Stefan Savage.

But even Mowery had to admit that the likelihood of anyone attacking an ATM in such a manner was low, partly due to the $18,000 cost of buying such a camera or its $2,000-per-month rental fee. He even acknowledged that mugging would prove a far more reliable means of extracting money from an ATM user, although the technique isn't quite as elegant as using an imaging system to do so.

Friday, September 23, 2011

Sick bay uses high-tech imaging

While the exploitation of vision systems has made inspection tasks more automated, those systems have also reduced or eliminated the need for unskilled workers.

But such workers won't be the only ones to suffer from the onslaught of vision technology -- pretty soon even skilled folks in professions such as medicine might start to see their roles diminished by automation systems, too.

As a precursor of things to come, take a look at a new system developed by researchers at the University of Leicester as a means of helping doctors noninvasively diagnose disease.

Surrounding a conventional hospital bed, thermal, multispectral, hyperspectral, and ultrasound imagers gather information from patients. Complementing the imaging lineup is a real-time mass spectrometer that can analyze the gases present in a patient's breath to detect signs of disease.

Professor Mark Sims, the University of Leicester researcher who led the development of the system, said that its aim was to replace a doctor's eyes with imaging systems, and his nose with breath analysis systems.

Even though nearly all the technologies employed in the system have been used before in one way or another, Sims said that they have never all been combined in a single integrated system.

Clearly, though, if this instrumentation were coupled to advanced software that could correlate all the information captured from a patient with a database of known disease traits, one would have a pretty powerful tool with which to diagnose disease.
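For illustration only, such correlation software might look something like the Python sketch below, which flattens the features from each modality into one vector and matches it against a database of disease signatures. The structure is entirely my own assumption, not the Leicester system:

# Hedged sketch of the correlation idea above: combine features from each
# modality into one vector and rank known disease signatures by similarity.

import numpy as np

def diagnose(patient_features: dict, disease_db: dict, top_n: int = 3):
    """
    patient_features : {"thermal": [...], "hyperspectral": [...], "breath": [...]}
    disease_db       : {disease_name: feature dict with the same keys/shapes}
    """
    def flatten(d):
        return np.concatenate([np.asarray(d[k], float) for k in sorted(d)])

    v = flatten(patient_features)
    scored = []
    for disease, signature in disease_db.items():
        s = flatten(signature)
        # Cosine similarity between patient vector and disease signature.
        sim = float(v @ s / (np.linalg.norm(v) * np.linalg.norm(s)))
        scored.append((sim, disease))
    scored.sort(reverse=True)
    return scored[:top_n]  # most likely candidates, for a doctor to review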

The doctors, of course, would then have to find something else to occupy their time. But just think of the cost savings that could be made.

Monday, September 19, 2011

More to follow...

Starting this month, the Vision Insider blog will offer you, the reader, an opinionated twice-weekly update on industry trends, market reports, and new products and technologies. While many of these posts will be staff-written, we will also use this forum to let you, our readers, opine on subjects such as machine-vision and image-processing standards, software, hardware, and the tradeshows you find useful.

If you have any strong opinions that you feel we could publish (without, of course, being taken to court), I would be only too pleased to hear from you. And if you disagree with anything I have said, you are free to leave a comment using our easy-to-use feedback form.

Friday, August 19, 2011

A new journey

Conard Holton has accepted a new position as Associate Publisher and Editor in Chief of a sister publication, Laser Focus World. Andy Wilson, founding editor of Vision Systems Design, will be taking over the role of editor in chief. Andy has been the technical mainstay of Vision Systems Design since its beginning fifteen years ago, writing many of the articles that have established it as the premier resource for machine vision and image processing.